
How-To Tutorials


How to build an Arduino-based 'follow me' drone

Vijin Boricha
23 May 2018
9 min read
In this tutorial, we will learn how to train the drone to do something, or to give the drone artificial intelligence, by coding it from scratch. There are several ways to build Follow Me-type drones; in this article we will learn a quick and easy one. Before going any further, let's learn the basics of a Follow Me drone. This is a book excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha.

If you are a hardcore programmer and hardware enthusiast, you can build an Arduino drone and make it a Follow Me drone by enabling a few extra features. For this section, you will need the following things:

- Motors
- ESCs
- Battery
- Propellers
- Radio controller
- Arduino Nano
- HC-05 Bluetooth module
- GPS
- MPU6050 or GY-86 gyroscope
- Some wires

The connections are simple: you need to connect the motors to the ESCs, and the ESCs to the battery. You can use a four-way connector (power distribution board) for this, as in the following diagram.

Now, connect the radio to the Arduino Nano with the following pin configuration:

| Arduino pin | Radio pin |
| --- | --- |
| D3 | CH1 |
| D5 | CH2 |
| D2 | CH3 |
| D4 | CH4 |
| D12 | CH5 |
| D6 | CH6 |

Next, connect the gyroscope to the Arduino Nano with the following configuration:

| Arduino pin | Gyroscope pin |
| --- | --- |
| 5V | 5V |
| GND | GND |
| A4 | SDA |
| A5 | SCL |

You are left with the four ESC signal wires; let's connect them to the Arduino Nano now, as shown in the following configuration:

| Arduino pin | Motor signal pin |
| --- | --- |
| D7 | Motor 1 |
| D8 | Motor 2 |
| D9 | Motor 3 |
| D10 | Motor 4 |

Our connection is almost complete. Now we need to power the Arduino Nano and the ESCs. Before doing that, we must make the ground common, which means connecting the ground wires of both power sources together.

Before going any further, we need to upload the code to the brain of our drone, which is the Arduino Nano. The code is fairly long, so I will explain it after we install the necessary library. You will need one library installed in your Arduino library folder before moving to the programming part: PinChangeInt. Install the library and write the code for the drone. The full code can be found on GitHub.

Let's walk through the code a little. In the code, you will find lots of functions with calculations. For our gyroscope, we needed to define all the axes, sensor data, pin configuration, temperature synchronization data, I2C data, and so on. In the following declaration, we define a union of two structures holding the accelerometer and gyroscope data for all the directions:

```cpp
typedef union accel_t_gyro_union {
  struct {
    uint8_t x_accel_h;
    uint8_t x_accel_l;
    uint8_t y_accel_h;
    uint8_t y_accel_l;
    uint8_t z_accel_h;
    uint8_t z_accel_l;
    uint8_t t_h;
    uint8_t t_l;
    uint8_t x_gyro_h;
    uint8_t x_gyro_l;
    uint8_t y_gyro_h;
    uint8_t y_gyro_l;
    uint8_t z_gyro_h;
    uint8_t z_gyro_l;
  } reg;
  struct {
    int x_accel;
    int y_accel;
    int z_accel;
    int temperature;
    int x_gyro;
    int y_gyro;
    int z_gyro;
  } value;
};
```

In the void setup() function of our code, we declare the pins we have connected to the motors:

```cpp
myservoT.attach(7);  // 7  - TOP
myservoR.attach(8);  // 8  - RIGHT
myservoB.attach(9);  // 9  - BACK
myservoL.attach(10); // 10 - LEFT
```
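As a side note, if you want to verify the ESC wiring before flashing the full flight code, a minimal standalone sketch like the following can help. This is my own illustration, not the book's code: it arms four ESCs with a minimum-throttle pulse and then holds a gentle idle, assuming the common 1000-2000 µs pulse convention (check your ESC manual, and always test with propellers removed):

```cpp
#include <Servo.h>

// One Servo object per ESC, using the same pin mapping as above
Servo myservoT, myservoR, myservoB, myservoL;

void setup() {
  myservoT.attach(7);
  myservoR.attach(8);
  myservoB.attach(9);
  myservoL.attach(10);

  // Most ESCs arm when they see minimum throttle (~1000 us) for a moment
  myservoT.writeMicroseconds(1000);
  myservoR.writeMicroseconds(1000);
  myservoB.writeMicroseconds(1000);
  myservoL.writeMicroseconds(1000);
  delay(3000); // give the ESCs time to arm
}

void loop() {
  // A gentle idle throttle; real flight code computes a value per motor
  myservoT.writeMicroseconds(1100);
  myservoR.writeMicroseconds(1100);
  myservoB.writeMicroseconds(1100);
  myservoL.writeMicroseconds(1100);
}
```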
We also call our test_gyr_acc() and test_radio_reciev() functions, for testing the gyroscope and for receiving data from the remote, respectively.

In our test_gyr_acc() function, we check whether the gyroscope sensor can be detected; if there is an error getting the gyroscope data, we set pin 13 high to blink out a signal:

```cpp
void test_gyr_acc() {
  error = MPU6050_read(MPU6050_WHO_AM_I, &c, 1);
  if (error != 0) {
    while (true) {
      digitalWrite(13, HIGH);
      delay(300);
      digitalWrite(13, LOW);
      delay(300);
    }
  }
}
```

We need to calibrate our gyroscope after testing that it is connected. To do that, we need the help of mathematics. We add rad_tilt_LR and rad_tilt_TB to x_a and y_a respectively and multiply each sum by 2.4, then perform some more calculations to get the correct x_adder and y_adder values:

```cpp
void stabilize() {
  P_x = (x_a + rad_tilt_LR) * 2.4;
  P_y = (y_a + rad_tilt_TB) * 2.4;
  I_x = I_x + (x_a + rad_tilt_LR) * dt_ * 3.7;
  I_y = I_y + (y_a + rad_tilt_TB) * dt_ * 3.7;
  D_x = x_vel * 0.7;
  D_y = y_vel * 0.7;

  P_z = (z_ang + wanted_z_ang) * 2.0;
  I_z = I_z + (z_ang + wanted_z_ang) * dt_ * 0.8;
  D_z = z_vel * 0.3;

  if (P_z > 160) { P_z = 160; }
  if (P_z < -160) { P_z = -160; }
  if (I_x > 30) { I_x = 30; }
  if (I_x < -30) { I_x = -30; }
  if (I_y > 30) { I_y = 30; }
  if (I_y < -30) { I_y = -30; }
  if (I_z > 30) { I_z = 30; }
  if (I_z < -30) { I_z = -30; }

  x_adder = P_x + I_x + D_x;
  y_adder = P_y + I_y + D_y;
}
```

We then check that our ESCs are connected properly with the escRead() function. We also call elevatorRead() and aileronRead() to configure our drone's elevator and aileron. We call test_radio_reciev() to test whether the connected radio is working, then check_radio_signal() to check the signal itself. All of these functions are called from the void loop() function of our Arduino code. In the void loop() function, we also need to configure the power distribution of the system, so we add a condition like the following:

```cpp
if (main_power > 750) {
  stabilize();
} else {
  zero_on_zero_throttle();
}
```

Here we set a boundary: if main_power is greater than 750 (a stabilizing value for our case), we stabilize the system; otherwise we call zero_on_zero_throttle(), which initializes the values for all the directions. After uploading this, you can control your drone by sending signals from your remote control.

Now, to make it a Follow Me drone, you need to connect a Bluetooth module or a GPS. You can connect your smartphone to the drone through a Bluetooth module (HC-05 preferred), paired with another Bluetooth module in a master-slave configuration. And, of course, to make the drone follow you, you need the GPS. So, let's connect them to our drone. To connect the Bluetooth module, use the following configuration:

| Arduino pin | Bluetooth module pin |
| --- | --- |
| TX | RX |
| RX | TX |
| 5V | 5V |
| GND | GND |

See the following diagram for clarification. For the GPS, connect it as shown in this configuration:

| Arduino pin | GPS pin |
| --- | --- |
| D11 | TX |
| D12 | RX |
| GND | GND |
| 5V | 5V |

See the following diagram for clarification. Since all the sensors use 5V power, I would recommend using an external 5V power supply for better communication, especially for the GPS.

If we use the Bluetooth module, we need to make the drone's module the slave and the other module the master. To do that, you can set a pin mode for the master and then set the baud rate to at least 38,400, which is the minimum operating baud rate for the Bluetooth module.
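The book does not show the pairing commands themselves; on an HC-05, the master/slave role and baud rate are usually set through AT commands. The sketch below is a rough illustration of that setup, assuming the module's EN/KEY pin is wired to D4 and its serial lines to D2/D3 (hypothetical pin choices for this example, not the flight wiring above) so it boots into AT mode at 38,400 baud:

```cpp
#include <SoftwareSerial.h>

SoftwareSerial btSerial(2, 3); // RX, TX - hypothetical pins for this example

void setup() {
  pinMode(4, OUTPUT);
  digitalWrite(4, HIGH);   // hold EN/KEY high so the HC-05 boots in AT mode
  Serial.begin(9600);
  btSerial.begin(38400);   // HC-05 AT mode runs at 38,400 baud

  btSerial.println("AT+ROLE=0");         // 0 = slave; use AT+ROLE=1 on the master
  delay(100);
  btSerial.println("AT+UART=38400,0,0"); // fix the communication baud rate
  delay(100);
}

void loop() {
  // Forward module responses to the USB serial monitor, and vice versa
  if (btSerial.available()) Serial.write(btSerial.read());
  if (Serial.available()) btSerial.write(Serial.read());
}
```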
Then, we need to check whether one module can hear the other. For that, we can write our void loop() function as follows:

```cpp
if (Serial.available() > 0) {
  state = Serial.read();
}
if (state == '0') {
  digitalWrite(Pin, LOW);
  state = 0;
} else if (state == '1') {
  digitalWrite(Pin, HIGH);
  state = 0;
}
```

Do the opposite for the other module, connecting it to another Arduino. Remember, you only need to send and receive signals, so refrain from using the other utilities of the Bluetooth module, both for power consumption and for speed.

If we use the GPS, we need to calibrate the compass and make it able to communicate with another GPS module. We need to read long and float values from the I2C bus, as follows:

```cpp
float readLongFromI2C() {
  unsigned long tmp = 0;
  for (int i = 0; i < 4; i++) {
    unsigned long tmp2 = Wire.read();
    tmp |= tmp2 << (i * 8);
  }
  return tmp;
}

float readFloatFromI2C() {
  float f = 0;
  byte* p = (byte*)&f;
  for (int i = 0; i < 4; i++)
    p[i] = Wire.read();
  return f;
}
```

Then, we have to get the geo distance, as follows, where DEGTORAD is a constant that converts degrees to radians:

```cpp
float geoDistance(struct geoloc &a, struct geoloc &b) {
  const float R = 6371000; // Earth radius in meters
  float p1 = a.lat * DEGTORAD;
  float p2 = b.lat * DEGTORAD;
  float dp = (b.lat - a.lat) * DEGTORAD;
  float dl = (b.lon - a.lon) * DEGTORAD;

  float x = sin(dp/2) * sin(dp/2) + cos(p1) * cos(p2) * sin(dl/2) * sin(dl/2);
  float y = 2 * atan2(sqrt(x), sqrt(1-x));

  return R * y;
}
```

We also need to write a function for the geo bearing, where lat and lon are the latitude and longitude, respectively, gained from the raw data of the GPS sensor:

```cpp
float geoBearing(struct geoloc &a, struct geoloc &b) {
  float y = sin(b.lon - a.lon) * cos(b.lat);
  float x = cos(a.lat) * sin(b.lat) - sin(a.lat) * cos(b.lat) * cos(b.lon - a.lon);
  return atan2(y, x) * RADTODEG;
}
```
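The book defines geoloc, DEGTORAD, and RADTODEG elsewhere in its code; as a rough, illustrative sketch of my own (not from the book), here is how the two helpers could be combined in a Follow Me loop, with the struct layout and constants assumed from how the functions use them:

```cpp
struct geoloc {
  float lat;
  float lon;
};

#define DEGTORAD 0.0174532925f  // pi / 180
#define RADTODEG 57.295779513f  // 180 / pi

void followTarget() {
  geoloc drone = {23.8103f, 90.4125f};  // example coordinates (Dhaka)
  geoloc pilot = {23.8115f, 90.4140f};

  float distance = geoDistance(drone, pilot); // meters to the pilot
  float bearing  = geoBearing(drone, pilot);  // heading to fly, in degrees

  // A real Follow Me loop would feed these into the yaw/pitch targets,
  // for example stopping once distance drops below a few meters.
}
```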
You can also use a mobile app to communicate with the GPS and make the drone move with you. The process is then simple: connect the GPS to your drone, take the TX and RX data from the Arduino, broadcast it through the radio, receive it through the telemetry link, and then use the GPS from the phone with DroidPlanner or Tower. You also need to add a few lines to the main code to calibrate the compass; see the calibration code shown previously. The calibration of the compass varies from location to location, so I would suggest using a trial-and-error method.

In the following section, I will discuss how you can use an ESP8266 to make a GPS tracker that can be used with your drone. We learned to build a Follow Me-type drone and also used DroidPlanner 2 and Tower to configure it. Learn more about using a smartphone to enable ArduPilot's follow me feature and building a GPS tracker with the ESP8266 in the book Building Smart Drones with ESP8266 and Arduino.

Is web development dying?

Richard Gall
23 May 2018
7 min read
It's not hard to find people asking whether web development is dying. A quick search throws up questions on Quora, Reddit, and other forums. "Is web development a dying profession or does it just smell funny?" asks one Reddit user. The usual suspects in the world of content (Forbes et al) have responded with their own takes and think pieces on whether web development is dead. And why wouldn't they? I, for one, would never miss out on an opportunity to write something with an outlandish and provocative headline for clicks. So, is web development dying or simply very unwell?

Why do people think web development is dying?

The question might seem a bit overwrought, but there are good reasons for people to ask it. One reason is that getting a website has never been easier or cheaper. Think about it: if you want to create a content site, it doesn't take much to set one up with WordPress. You barely need to be technically literate, let alone a developer. Similarly, if you want an eCommerce store, there are plenty of off-the-shelf solutions that allow people to start running an online business with very little work at all.

Even if you do want a custom solution, you can now do that pretty cheaply. On the Treehouse forums, one user comments that thanks to sites like Squarespace, businesses can now purchase a website for less than £100 (about $135). The commenter remarks that whereas he'd typically charge around £3,000 for a complete website build, potential clients are coming back puzzled as to why he would think they'd spend so much when they could get the same result for a fraction of the price. From a professional perspective, this sort of anecdotal evidence indicates that it's becoming more and more difficult to be successful in web development. For all the talk around 'learning to code' and the digital economy, maybe building websites isn't the best area to get into.

Web development is getting easier

When people say web development is dying, they might actually be saying that there isn't as much money in it any more. If freelancers are struggling to charge the rates that they used to, that's because there is someone out there who is going to do it for a lot less money. The reason for this isn't that there's a new generation of web developers able to subsist on a paltry sum of money. It's that the work is actually getting a lot easier. Aside from solutions like WordPress and Shopify, the task of building websites from scratch (sort of scratch) is now easier than it has ever been.

Are templates killing web development?

Templates make everything easy for web developers and designers. Why would you want to do much more than drag and drop templates if you could? If the result looks good and does the job, then why spend time doing more? The more you do yourself, the more you're likely to break things. And the more you break things, the more you've got to fix.

Of course, templates are lowering the barrier to entry into web development and design. And while we shouldn't be precious about new web developers entering the industry, it is understandable that many experienced web developers are anxious about what the future might hold. From this perspective, templates aren't killing web development, but they are changing what the profession looks like. And without wishing to sound euphemistic, this is both a challenge and an opportunity for everyone in web development. Whether you're experienced or new to the industry, these changes mean people are going to have to adapt.
Web development isn't dying, it's fragmenting

The way web developers are going to have to adapt is by choosing what path they want to take in their career. Web development as we've always known it is, perhaps, well and truly dead. Instead, it's fragmenting into specialized areas: design on the one hand, and full-stack development on the other. This means your skill set needs to be unique. In a world where building websites takes very little skill or technical knowledge, specific expertise is vital.

This is something journalist Andrew Pierno noted in a blog post on Medium. Pierno writes: "...we are in a scenario where the web developer no longer has the skill set to build that interesting differentiator anymore, particularly if the main value prop is around A.I, computer vision, machine learning, AR, VR, blockchain, etc."

Building websites is no longer remarkable - as we've seen, people who can do it are ubiquitous. But building a native application; that's not quite so easy. Building a mobile app that uses computer vision to compare you to Renaissance paintings - that's even harder to do. These are the sorts of things that are going to be valuable - and these are the sorts of things that web developers are going to need to learn how to do.

Full-stack development and the expansion of the developer skill set

In his piece, Pierno argues that the scope of the web developer's role is shrinking. However, I don't think that's quite right. Yes, it might be fragmenting, but the scope of, say, full-stack development is huge. In fact, full-stack developers need to know a huge range of technologies and tools. If they're to differentiate themselves in the job market, as Pierno suggests they should, they need to know machine learning, mobile, databases, and maybe even blockchain. From this perspective, it's not hard to see how the 'web' part of web development might be dying. To some extent, as the web becomes more ubiquitous and less of a rarefied 'space' in people's lives, the more we have to get into the detail of how we utilize the technologies around it.

Web development's decline is design's gain

If web development as a discipline is dying, that's only going to make design more important. If, as we saw earlier, building websites becomes a free-for-all for just about anyone with an internet connection and enough confidence, standards and quality might start to slip. That means the value of someone who understands good design will be higher than ever. As a web developer you might disappear into the ether of everyone else out there. But if you market yourself as a designer, someone who understands the intricacies of UI and UX implicitly, you immediately start to look a little different.

Think of it like a sandwich shop - anyone can start making sandwiches. But to make a great sandwich shop, the type that wins awards and the type that people want to Instagram, requires extra attention to detail. It demands more skill and more culinary awareness.

Maybe web development is dying, but maybe it just needs to change

Clearly, what we call web development is very different in 2018 than it was 5 years ago. There are a huge number of reasons for this, but perhaps the most important is that it doesn't really make sense to talk about 'the web' any more. Because 'the web' is now an outdated concept, perhaps web development needs to die. Maybe we're holding on to something that is only going to play into the hands of poor design and poor quality software.
It might even damage the careers of talented engineers and designers. You could make a pretty good comparison between 'the web' and 'big data'. Even reading those words feels oddly outdated today, but they're still at the center of the tech landscape. Big data, for example, is everywhere - it's hard to imagine our lives outside of it, but it doesn't make sense to talk about it in the abstract. Instead, what's interesting is how it's applied, and how engineers make data accessible, usable and secure. The same is true of the web. It's not dead, but it has certainly assumed a slightly different form. And web development might well be dying, but the world will always need developers and designers. It's simply time to adapt.

Read next:
- Why is everyone talking about JavaScript fatigue?
- Is novelty ruining web development?

Working with Unity Variables to script powerful Unity 2017 games

Amarabha Banerjee
23 May 2018
12 min read
In this tutorial, you will learn how to work with the different variables available in the Unity 2017 platform. We will show you how to use these variables through use cases in order to script powerful Unity games. This article is an excerpt from the book Learning C# by Developing Games with Unity 2017, written by Micael DaGraca and Greg Lukosek.

Unity, the most popular game engine of our generation, is a preferred choice among game developers due to the flexibility it provides for coding and scripting games in C#. To understand and leverage the power of C# in your games, it is utterly necessary to get a proper understanding of how C# coding works. We are going to show you exactly that in the sections below.

Writing C# statements properly

When you do normal writing, it's in the form of a sentence, with a period used to end the sentence. When you write a line of code, it's called a statement, with a semicolon used to end the statement. This is necessary because the console reads the code one line at a time, and the semicolon tells the console that the line of code is over and that it can jump to the next line. (This happens so fast that it looks like the computer is reading all the lines at the same time, but it isn't.) When we start learning how to code, forgetting this detail is very common, so don't forget to check for this error if the code isn't working.

The code for a C# statement does not have to be on a single line, as shown in the following example:

```csharp
public int number1 = 2;
```

The statement can be on several lines. Whitespace and carriage returns are ignored, so, if you really want to, you can write it as follows:

```csharp
public
int
number1 =
2;
```

However, I do not recommend writing your code like this, because it's terrible to read code that is formatted like the preceding code. Nevertheless, there will be times when you'll have to write statements that are longer than one line. Unity won't care. It just needs to see the semicolon at the end.

Understanding component properties in Unity's Inspector

GameObjects have components that make them behave in a certain way. For instance, select Main Camera and look at the Inspector panel. One of the components is the camera. Without that component, it would cease being a camera. It would still be a GameObject in your scene, just no longer a functioning camera.

Variables become component properties

When we refer to components, we are basically referring to the available functions of a GameObject. For example, the human body has many functions, such as talking, moving, and observing. Now let's say that we want the human body to move faster. What is the function linked to that action? Movement. So in order to make our body move faster, we would need to create a script that had access to the movement component, and then we would use that to make the body move faster. Just like in real life, different GameObjects can have different components; for example, the camera component can only be accessed from a camera. There are plenty of components that already exist, created by Unity's programmers, but we can also write our own. This means that all the properties that we see in the Inspector are just variables of some type. They simply store data that will be used by some method.
Unity changes script and variable names slightly

When we create a script, one of the first things we need to do is give the script a name, and it's always good practice to use a name that identifies the content of the script. For example, if we are creating a script that is used to control the player movement, ideally that would be the name of the script. The best practice is to write playerMovement, where the first word is uncapitalized and the second one is capitalized. This is the standard way Unity developers name scripts and variables.

Now let's say that we created a script named playerMovement. After assigning that script to a GameObject, we'll see in the Inspector panel that Unity adds a space to separate the words of the name: Player Movement. Unity makes this modification to variable names too; for example, a variable named number1 is shown as Number 1, and number2 as Number 2. Unity capitalizes the first letter as well. These changes improve readability in the Inspector.

Changing a property's value in the Inspector panel

There are two situations where you can modify a property value:

- During the Play mode
- During the development stage (not in the Play mode)

When you are in the Play mode, you will see that your changes take effect immediately, in real time. This is great when you're experimenting and want to see the results. Write down any changes that you want to keep, because when you stop the Play mode, any changes you made will be lost.

When you are in the Development mode, changes that you make to the property values will be saved by Unity. This means that if you quit Unity and start it again, the changes will be retained. Of course, you won't see the effect of your changes until you click Play.

The changes that you make to the property values in the Inspector panel do not modify your script. The only way your script can be changed is by you editing it in the script editor (MonoDevelop). The values shown in the Inspector panel override any values you might have assigned in your script. If you want to undo the changes you've made in the Inspector panel, you can reset the values to the defaults assigned in your script. Click on the cog icon (the gear) on the far right of the component script, and then select Reset, as shown in the following screenshot.

Displaying public variables in the Inspector panel

You might still be wondering what the word public at the beginning of a variable statement means:

```csharp
public int number1 = 2;
```

We mentioned it before: it means that the variable will be visible and accessible. It will be visible as a property in the Inspector panel, so that you can manipulate the value stored in the variable. The word also means that the variable can be accessed from other scripts using the dot syntax.

Private variables

Not all variables need to be public. If there's no need for a variable to be changed in the Inspector panel or accessed from other scripts, it doesn't make sense to clutter the Inspector panel with needless properties. In the LearningScript, perform the following steps:

1. Change line 6 to this: private int number1 = 2;
2. Then change line 7 to the following: int number2 = 9;
3. Save the file.
4. In Unity, select Main Camera.

You will notice in the Inspector panel that both properties, Number 1 and Number 2, are gone.

Line 6, private int number1 = 2;, explicitly states that the number1 variable has to be private. Therefore, the variable is no longer a property in the Inspector panel.
It is now a private variable for storing data. Line 7, int number2 = 9;, shows that the number2 variable is no longer visible as a property either, even though you didn't specify it as private. If you don't explicitly state whether a variable will be public or private, the variable will implicitly be private in C#. It is good coding practice to explicitly state whether a variable will be public or private. So now, when you click Play, the script works exactly as it did before. You just can't manipulate the values manually in the Inspector panel anymore.
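To see the difference side by side, here is a small illustrative script (the field names mirror the LearningScript examples, but the script body is my own sketch, not the book's file); only the public field will appear as a property in the Inspector:

```csharp
using UnityEngine;

public class LearningScript : MonoBehaviour
{
    // Visible in the Inspector and accessible from other scripts
    public bool areRoadConditionsPerfect = true;

    // Hidden from the Inspector; private is also the C# default
    private int number1 = 2;
    int number2 = 9;

    void Start()
    {
        // Private fields are still fully usable inside this script
        Debug.Log(number1 + number2);
    }
}
```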
Naming Unity variables properly

As we explored previously, naming a script or variable is a very important step. It won't change the way the code runs, but it will help us stay organized and, by using best practices, we avoid errors and save time trying to find the piece of code that isn't working. Always use meaningful names for your variables. If you don't, six months down the line, you will be lost. I'm going to exaggerate here a bit to make a point. Let's say you name a variable as shown in this code:

```csharp
public bool areRoadConditionsPerfect = true;
```

That's a descriptive name. In other words, you know what it means just by reading the variable. So 10 years from now, when you look at that name, you'll know exactly what it means. Now suppose that instead of areRoadConditionsPerfect, you had named this variable as shown in the following code:

```csharp
public bool perfect = true;
```

Sure, you know what perfect is, but would you know that it refers to perfect road conditions? I know that right now you'll understand it because you just wrote it, but six months down the line, after writing hundreds of other scripts for all sorts of different projects, you'll look at this word and wonder what you meant. You'll have to read several lines of code you wrote to try to figure it out. You may look at the code and wonder who in their right mind would write such terrible code.

So, take your time to write descriptive code that even a stranger can look at and know what you mean. Believe me, in six months or probably less, you will be that stranger. Using meaningful names for variables and methods is helpful not only for you but also for any other game developer who will be reading your code. Whether or not you work in a team, you should always write easy-to-read code.

Beginning variable names with lowercase

You should begin a variable name with a lowercase letter because it helps distinguish between a class name and a variable name in your code. There are some other guides in the C# documentation as well, but we don't need to worry about them at this stage. Component names (class names) begin with an uppercase letter. For example, it's easy to know that Transform is a class and transform is a variable. There are, of course, exceptions to this general rule, and every programmer has a preferred way of using lowercase, uppercase, and perhaps an underscore to begin a variable name. In the end, you will have to decide upon a naming convention that you like. If you read the Unity forums, you will notice that there are some heated debates on naming variables. In this book, I will show you my preferred way, but you can use whatever is more comfortable for you.

Using multiword variable names

Let's use the same example again, as follows:

```csharp
public bool areRoadConditionsPerfect = true;
```

You can see that the variable name is actually four words squeezed together. Since variable names must be a single word, begin the first word with a lowercase letter and then capitalize the first letter of every additional word. This greatly helps create descriptive names that the reader is still able to parse. There's a term for this: it's called camel casing. I have already mentioned that, for public variables, Unity's Inspector will separate each word and capitalize the first one. Go ahead! Add the previous statement to the LearningScript and see what Unity does with it in the Inspector panel.

Declaring a variable and its type

Every variable that we want to use in a script must be declared in a statement. What does that mean? Well, before Unity can use a variable, we have to tell Unity about it first. Okay then, what are we supposed to tell Unity about the variable? There are only three absolute requirements to declare a variable, and they are as follows:

- We have to specify the type of data that the variable can store
- We have to provide a name for the variable
- We have to end the declaration statement with a semicolon

The following is the syntax we use to declare a variable:

```csharp
typeOfData nameOfTheVariable;
```

Let's use one of the LearningScript variables as an example; the following is how we declare a variable with the bare minimum requirements:

```csharp
int number1;
```

This is what we have:

- Requirement #1 is the type of data that number1 can store, which in this case is an int, meaning an integer
- Requirement #2 is a name, which is number1
- Requirement #3 is the semicolon at the end

The second requirement of naming a variable has already been discussed, as has the third requirement of ending a statement with a semicolon. The first requirement, specifying the type of data, will be covered next. The following is what we know about this bare minimum declaration as far as Unity is concerned: there's no public modifier, which means it's private by default; it won't appear in the Inspector panel or be accessible from other scripts; and the value stored in number1 defaults to zero.

We discussed working with Unity 2017 variables and how you can start working with them to create fun-filled games effectively. If you liked this article, be sure to go through the book Learning C# by Developing Games with Unity 2017 to create exciting games with C# and Unity 2017.

Read more:
- Unity 2D & 3D game kits simplify Unity game development for beginners
- Build a Virtual Reality Solar System in Unity for Google Cardboard
- Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

Documenting RESTful Java Web Services using Swagger

Fatema Patrawala
22 May 2018
14 min read
In the earlier days, many software solution providers did not really pay attention to documenting their RESTful web APIs. However, many API vendors soon realized the need for a good API documentation solution, and today you will find a variety of approaches to documenting RESTful web APIs. There are some popular solutions available for describing, producing, consuming, and visualizing RESTful web services. In this tutorial, we will explore Swagger, which offers a specification and a complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. The Swagger framework works with many popular programming languages, such as Java, Scala, Clojure, Groovy, JavaScript, and .NET. This tutorial is an extract taken from the book RESTful Java Web Services - Third Edition, written by Bogunuva Mohanram Balachandar. The book will help you master core REST concepts and create RESTful web services in Java.

A glance at the market adoption of Swagger

The greatest strength of Swagger is its powerful API platform, which satisfies the client, documentation, and server needs. The Swagger UI framework serves as the documentation and testing utility. Its support for different languages and its mature tooling have really grabbed the attention of many API vendors, and it seems to be the solution with the most traction in the community today. Swagger is built using Scala. This means that when you package your application, you need to include the entire Scala runtime in your build, which may considerably increase the size of your deployable artifact (the EAR or WAR file). That said, Swagger is improving with each release. For example, the Swagger 2.0 release allows you to use YAML for describing APIs. So, keep a watch on this framework.

The Swagger framework has the following three major components:

- Server: This component hosts the RESTful web API descriptions for the services that the clients want to use.
- Client: This component uses the RESTful web API descriptions from the server to provide an automated interfacing mechanism for invoking the REST APIs.
- User interface: This part of the framework reads a description of the APIs from the server, renders it as a web page, and provides an interactive sandbox to test the APIs.

A quick overview of the Swagger structure

Let's take a quick look at the Swagger file structure before moving further. The Swagger 1.x file contents that describe the RESTful APIs are represented as JSON objects. From the Swagger 2.0 release onwards, you can also use the YAML format to describe the RESTful web APIs. This section discusses the Swagger file contents represented as JSON; the basic constructs we'll discuss for JSON are also applicable to the YAML representation of APIs, although the syntax differs.

When using the JSON structure for describing the REST APIs, the Swagger file uses a Swagger object as the root document object for describing the APIs. Here is a quick overview of the various properties that you will find in a Swagger object. The following properties describe the basic information about the RESTful web application:

- swagger: This property specifies the Swagger version.
- info: This property provides metadata about the API.
- host: This property locates the server where the APIs are hosted.
- basePath: This property is the base path for the API. It is relative to the value set for the host field.
- schemes: This property specifies the transfer protocol for the RESTful web API, such as HTTP or HTTPS.
The following properties allow you to specify default values at the application level, which can be optionally overridden for each operation:

- consumes: This property specifies the default internet media types that the APIs consume. It can be overridden at the API level.
- produces: This property specifies the default internet media types that the APIs produce. It can be overridden at the API level.
- securityDefinitions: This property globally defines the security schemes for the document. These definitions can be referred to from the security schemes specified for each API.

The following properties describe the operations (REST APIs):

- paths: This property specifies the path to the API or the resources. The path must be appended to the basePath property in order to get the full URI.
- definitions: This property specifies the input and output entity types for operations on REST resources (APIs).
- parameters: This property specifies the parameter details for an operation.
- responses: This property specifies the response type for an operation.
- security: This property specifies the security schemes required to execute an operation.
- externalDocs: This property links to the external documentation.

A complete Swagger 2.0 specification is available on GitHub. The Swagger specification was donated to the Open API Initiative, which aims to standardize the format of the API specification and bring uniformity in usage. Swagger 2.0 was enhanced as part of the Open API Initiative, and a new specification, OAS 3.0 (Open API Specification 3.0), was released in July 2017. The tools are still being worked on to support OAS 3.0. I encourage you to explore the Open API Specification 3.0, which is available on GitHub.
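To see how these properties fit together before diving into the tooling, here is a minimal hand-written Swagger 2.0 document describing a single GET operation. This is an illustration added for this article (the paths and names are invented), not a file from the chapter:

```json
{
  "swagger": "2.0",
  "info": { "title": "HR API", "version": "1.0.0" },
  "host": "localhost:8080",
  "basePath": "/hrapp/webresources",
  "schemes": ["http"],
  "paths": {
    "/departments/{id}": {
      "get": {
        "produces": ["application/json"],
        "parameters": [{
          "name": "id",
          "in": "path",
          "required": true,
          "type": "integer"
        }],
        "responses": {
          "200": { "description": "successful operation" }
        }
      }
    }
  }
}
```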
Overview of Swagger APIs

The Swagger framework consists of many sub-projects in the Git repository, each built with a specific purpose. Here is a quick summary of the key projects:

- swagger-spec: This repository contains the Swagger specification and the project in general.
- swagger-ui: This project is an interactive tool to display and execute the Swagger specification files. It is based on swagger-js (the JavaScript library).
- swagger-editor: This project allows you to edit the YAML files. It was released as a part of Swagger 2.0.
- swagger-core: This project provides the Scala and Java libraries for generating the Swagger specifications directly from code. It supports JAX-RS, the Servlet APIs, and the Play framework.
- swagger-codegen: This project provides a tool that can read the Swagger specification files and generate the client and server code that consumes and produces the specifications.

In the next section, you will learn how to use the swagger-core project offerings to generate the Swagger file for a JAX-RS application.

Generating Swagger from JAX-RS

Both the WADL and RAML tools that we discussed in the previous sections use the JAX-RS annotation metadata to generate the documentation for the APIs. The Swagger framework does not fully rely on the JAX-RS annotations but offers a set of proprietary annotations for describing the resources. This helps in the following scenarios:

- The Swagger core annotations provide more flexibility for generating documentation compliant with the Swagger specifications.
- They allow you to use Swagger for generating documentation for web components that do not use the JAX-RS annotations, such as servlets and servlet filters.

The Swagger annotations are designed to work with JAX-RS, improving the quality of the API documentation generated by the framework. Note that the swagger-core project is currently supported on the Jersey and Restlet implementations. If you are considering any other runtime for your JAX-RS application, check the respective product manual and ensure the support before you start using Swagger for describing APIs.

Some of the commonly used Swagger annotations are as follows:

- @com.wordnik.swagger.annotations.Api: This annotation marks a class as a Swagger resource. Note that only classes annotated with @Api will be considered for generating the documentation. The @Api annotation is used along with class-level JAX-RS annotations such as @Produces and @Path.
- @com.wordnik.swagger.annotations.ApiOperation: This annotation describes a resource method (operation) that is designated to respond to HTTP action methods, such as GET, PUT, POST, and DELETE.
- @com.wordnik.swagger.annotations.ApiParam: This annotation is used for describing parameters used in an operation. It is designed for use in conjunction with the JAX-RS parameters, such as @Path, @PathParam, @QueryParam, @HeaderParam, @FormParam, and @BeanParam.
- @com.wordnik.swagger.annotations.ApiImplicitParam: This annotation allows you to define the operation parameters manually. You can use this to override the @PathParam or @QueryParam values specified on a resource method with custom values. If you have multiple implicit parameters for an operation, wrap them with @ApiImplicitParams.
- @com.wordnik.swagger.annotations.ApiResponse: This annotation describes the status codes returned by an operation. If you have multiple responses, wrap them by using @ApiResponses.
- @com.wordnik.swagger.annotations.ResponseHeader: This annotation describes a header that can be provided as part of the response.
- @com.wordnik.swagger.annotations.Authorization: This annotation is used within either Api or ApiOperation to describe the authorization scheme used on a resource or an operation.
- @com.wordnik.swagger.annotations.ApiModel: This annotation describes the model objects used in the application.
- @com.wordnik.swagger.annotations.ApiModelProperty: This annotation describes the properties present in the ApiModel object.

A complete list of the Swagger core annotations is available on GitHub. Having learned the basics of Swagger, it is time for us to move on and build a simple example to get a feel for the real-life use of Swagger in a JAX-RS application. As always, this example uses the Jersey implementation of JAX-RS.

Specifying the dependency on Swagger

To use Swagger in your Jersey 2 application, specify a dependency on the swagger-jersey2-jaxrs jar. If you use Maven for building the source, the dependency on the swagger-core library will look as follows:

```xml
<dependency>
  <groupId>com.wordnik</groupId>
  <artifactId>swagger-jersey2-jaxrs</artifactId>
  <version>1.5.1-M1</version>
  <!-- use the appropriate version here; 1.5.x supports the Swagger 2.0 spec -->
</dependency>
```

You should be careful while choosing the swagger-core version for your product. Note that swagger-core 1.3 produces the Swagger 1.2 definitions, whereas swagger-core 1.5 produces the Swagger 2.0 definitions. The next step is to hook the Swagger provider components into your Jersey application.
This is done by configuring the Jersey servlet (org.glassfish.jersey.servlet.ServletContainer) in web.xml, as shown here:

```xml
<servlet>
  <servlet-name>jersey</servlet-name>
  <servlet-class>
    org.glassfish.jersey.servlet.ServletContainer
  </servlet-class>
  <init-param>
    <param-name>jersey.config.server.provider.packages</param-name>
    <param-value>
      com.wordnik.swagger.jaxrs.json,
      com.packtpub.rest.ch7.swagger
    </param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>jersey</servlet-name>
  <url-pattern>/webresources/*</url-pattern>
</servlet-mapping>
```

To enable the Swagger documentation features, it is necessary to load the Swagger framework provider classes from the com.wordnik.swagger.jaxrs.listing package. The package names of the JAX-RS resource classes and provider components are configured as the value of the jersey.config.server.provider.packages init parameter. The Jersey framework scans the configured packages to identify the resource classes and provider components during the deployment of the application. Map the Jersey servlet to a request URI so that it responds to the REST resource calls that match the URI. If you prefer not to use web.xml, you can also use a custom application subclass to specify all the configuration entries discussed here programmatically. To try this option, refer to GitHub.

Configuring the Swagger definition

After specifying the Swagger provider components, the next step is to configure and initialize the Swagger definition. This is done by configuring the com.wordnik.swagger.jersey.config.JerseyJaxrsConfig servlet in web.xml, as follows:

```xml
<servlet>
  <servlet-name>Jersey2Config</servlet-name>
  <servlet-class>
    com.wordnik.swagger.jersey.config.JerseyJaxrsConfig
  </servlet-class>
  <init-param>
    <param-name>api.version</param-name>
    <param-value>1.0.0</param-value>
  </init-param>
  <init-param>
    <param-name>swagger.api.basepath</param-name>
    <param-value>
      http://localhost:8080/hrapp/webresources
    </param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
```

Here is a brief overview of the initialization parameters used for JerseyJaxrsConfig:

- api.version: This parameter specifies the API version for your application.
- swagger.api.basepath: This parameter specifies the base path for your application.

With this step, we have finished all the configuration entries for using Swagger in a JAX-RS (Jersey 2 implementation) application. In the next section, we will see how to use the Swagger metadata annotations on a JAX-RS resource class to describe the resources and operations.

Adding Swagger annotations on a JAX-RS resource class

Let's revisit the DepartmentResource class used in the previous sections. In this example, we will enhance the DepartmentResource class by adding the Swagger annotations discussed earlier. We use @Api to mark DepartmentResource as the Swagger resource.
The @ApiOperation annotation describes the operation exposed by the DepartmentResource class:

```java
import com.wordnik.swagger.annotations.Api;
import com.wordnik.swagger.annotations.ApiOperation;
import com.wordnik.swagger.annotations.ApiParam;
import com.wordnik.swagger.annotations.ApiResponse;
import com.wordnik.swagger.annotations.ApiResponses;
// Other imports are removed for brevity

@Stateless
@Path("departments")
@Api(value = "/departments", description = "Get departments details")
public class DepartmentResource {

  @ApiOperation(value = "Find department by id",
      notes = "Specify a valid department id",
      response = Department.class)
  @ApiResponses(value = {
      @ApiResponse(code = 400, message = "Invalid department id supplied"),
      @ApiResponse(code = 404, message = "Department not found")
  })
  @GET
  @Path("{id}")
  @Produces("application/json")
  public Department findDepartment(
      @ApiParam(value = "The department id", required = true)
      @PathParam("id") Integer id) {
    return findDepartmentEntity(id);
  }
  // Rest of the code is removed for brevity
}
```

To view the Swagger documentation, build the source and deploy it to the server. Once the application is deployed, you can navigate to http://<host>:<port>/<application-name>/<application-path>/swagger.json to view the Swagger resource listing in the JSON format. The Swagger URL for this example will look like the following:

http://localhost:8080/hrapp/webresources/swagger.json

The following sample Swagger representation is for the DepartmentResource class discussed in this section:

```json
{
  "swagger": "2.0",
  "info": {
    "version": "1.0.0",
    "title": ""
  },
  "host": "localhost:8080",
  "basePath": "/hrapp/webresources",
  "tags": [
    { "name": "user" }
  ],
  "schemes": [ "http" ],
  "paths": {
    "/departments/{id}": {
      "get": {
        "tags": [ "user" ],
        "summary": "Find department by id",
        "description": "",
        "operationId": "loginUser",
        "produces": [ "application/json" ],
        "parameters": [
          {
            "name": "id",
            "in": "path",
            "description": "The department id",
            "required": true,
            "type": "integer",
            "format": "int32"
          }
        ],
        "responses": {
          "200": {
            "description": "successful operation",
            "schema": { "$ref": "#/definitions/Department" }
          },
          "400": { "description": "Invalid department id supplied" },
          "404": { "description": "Department not found" }
        }
      }
    }
  },
  "definitions": {
    "Department": {
      "properties": {
        "departmentId": { "type": "integer", "format": "int32" },
        "departmentName": { "type": "string" },
        "_persistence_shouldRefreshFetchGroup": { "type": "boolean" }
      }
    }
  }
}
```

As mentioned at the beginning of this section, from the Swagger 2.0 release onward, the framework also supports the YAML representation of APIs. You can access the YAML representation by navigating to swagger.yaml. For instance, in the preceding example, the following URI gives you the YAML file:

http://<host>:<port>/<application-name>/<application-path>/swagger.yaml

The complete source code for this example is available at the Packt website. You can download the example from the Packt website link mentioned at the beginning of this book, in the Preface section. In the downloaded source code, see the rest-chapter7-service-doctools/rest-chapter7-jaxrs2swagger project.

Generating a Java client from Swagger

The Swagger framework is packaged with the Swagger code generation tool as well (swagger-codegen-cli), which allows you to generate client libraries by parsing the Swagger documentation file. You can download the swagger-codegen-cli.jar file from the Maven central repository by searching for swagger-codegen-cli on Search Maven.
Alternatively, you can clone the Git repository and build the source locally by executing mvn install. Once you have swagger-codegen-cli.jar available locally, run the following command to generate the Java client for the REST API described in Swagger:

```
java -jar swagger-codegen-cli.jar generate
  -i <Input-URI-or-File-location-for-swagger.json>
  -l <client-language-to-generate>
  -o <output-directory>
```

The following example illustrates the use of this tool:

```
java -jar swagger-codegen-cli-2.1.0-M2.jar generate
  -i http://localhost:8080/hrapp/webresources/swagger.json
  -l java
  -o generated-sources/java
```

When you run this tool, it scans through the RESTful web API description available at http://localhost:8080/hrapp/webresources/swagger.json and generates a Java client source in the generated-sources/java folder. Note that the Swagger code generation process uses Mustache templates for generating the client source. If you are not happy with the generated source, Swagger lets you specify your own Mustache template files; use the -t flag to specify your template folder. To learn more, refer to the README.md file on GitHub.

To learn more, grab the latest edition of RESTful Java Web Services to build robust, scalable and secure RESTful web services using Java APIs.

Read more:
- Getting started with Django RESTful Web Services
- How to develop RESTful web services in Spring
- Testing RESTful Web Services with Postman

SDLC puts process at the center of software engineering

Richard Gall
22 May 2018
7 min read
What is SDLC?

SDLC stands for software development lifecycle. It refers to all of the different steps that software engineers need to take when building software: planning, creating, building and then deploying software. Maintenance is crucial too; in some instances you may need to change or replace software, and that is part of the software development lifecycle as well.

SDLC is about software quality and development efficiency

SDLC is about more than just the steps in the software development process. It's also about managing that process in a way that improves quality while also improving efficiency. Ultimately, there are numerous ways of approaching the software development lifecycle - Waterfall and Agile are the two most well-known methodologies for managing the development lifecycle, and there are plenty of reasons you might choose one over another. What is most important is that you pay close attention to what the software development lifecycle looks like. It sounds obvious, but it is very difficult to build software without a plan in place. Things can get chaotic very quickly. If they do, that's bad news for you, as the developer, and bad news for users as well. When you don't follow the software development lifecycle properly, you're likely to miss user requirements, and faults will also find their way into your code.

The stages of the software development lifecycle (SDLC)

There are a number of ways you might see an SDLC presented, but the core should always be the same. And yes, different software methodologies like Agile and Waterfall outline very different ways of working, but broadly the steps should be the same. What differs between methodologies is how the steps fit together.

Step 1: Requirement analysis

This is the first step in any SDLC. It is about understanding everything that needs to be understood, in as much practical detail as possible. You might need to find out about specific problems that need to be solved, or there might be certain things that users need that you have to make sure are in the software. To do this, you need to do good quality research, from discussing user needs with a product manager to revisiting documentation on your current systems. This step is often the most challenging in the software development lifecycle, because you need to involve a wide range of stakeholders. Some of these might not be technical, and sometimes you might simply use a different vocabulary. It's essential that you have a shared language to describe everything from the user needs to the problems you might be trying to solve.

Step 2: Design the software

Once you have done a requirement analysis, you can begin designing the software. You do this by turning all the requirements and software specifications into a design document. This might feel like it slows down the development process, but if you don't do it, not only are you wasting the time taken to do your requirement analysis, you're also likely to build poor quality or even faulty software. While it's important not to design by committee or get slowed down by intensive navel-gazing, keeping stakeholders updated and requesting feedback and input where necessary can be incredibly important. Sometimes it's worth taking that extra bit of time, as it could solve a lot of problems later in the SDLC.
Step 3: Plan the project

Once you have captured requirements and feel you have properly understood exactly what needs to be delivered - as well as any potential constraints - you need to plan out how you're going to build that software. To do this, you'll need an overview of the resources at your disposal. These are the sorts of questions you'll need to consider at this stage:

- Who is available?
- Are there any risks? How can we mitigate them?
- What budget do we have for this project?
- Are there any other competing projects?

In truth, you'll probably do this during the design stage. The design document you create should, of course, be developed with context in mind. It's pointless creating a stunning design document outlining a detailed and extensive software development project if it's simply not realistic for your team to deliver it.

Step 4: Start building the software

Now you can finally get down to the business of actually writing code. With all the work you have done in the previous steps, this should be a little easier. However, it's important to remember that imperfection is part and parcel of software engineering. There will always be flaws in your software. That doesn't necessarily mean bugs or errors; it could be small compromises that need to be made in order to ensure something works. The best approach here is to deliver rapidly. The sooner you can get software 'out there', the faster you can make changes and improvements if (or more likely when) they're needed. It's worth involving stakeholders at this stage - transparency in the development process is a good way to build collaboration and ensure the end result delivers on what was initially in the requirements.

Step 5: Testing the software

Testing is, of course, an essential step in the software development lifecycle. This is where you identify any problems. That might be errors or performance issues, but you may also find you haven't quite been able to deliver what you said you would in the design document. A continuous integration server is important here, as it can help detect any problems with the software. The rise of automated software testing has been incredibly valuable; it means that instead of spending time manually running tests, engineers can dedicate more time to fixing problems and optimizing code.

Step 6: Deploy the software

The next step is to deploy the software to production. All the elements of the software should now be in place, and you want it simply to be used. It's important to remember that there will be problems here. Testing can never capture every issue, and feedback and insight from users are going to be much more valuable than automated tests run on a server. Continuous delivery pipelines allow you to deploy software very efficiently, making the build-test-deploy steps of the software development lifecycle relatively frictionless. Okay, maybe not frictionless - there's going to be plenty of friction when you're developing software. But it does allow you to push software into production very quickly.

Step 7: Maintaining software

Software maintenance is a core part of the day-to-day life of a software engineer, and it's a crucial step in the SDLC. There are two forms of software maintenance, both of equal importance: evolutive maintenance and corrective maintenance.

Evolutive maintenance

As the name suggests, evolutive maintenance is where you evolve software by adding in new functionality or making larger changes to the logic of the software.
These changes should be a response to feedback from stakeholders or, more importantly, users. There may be times when business needs dictate this type of maintenance - this is never ideal, but it is nevertheless an important part of a software engineer's work.

Corrective maintenance

Corrective maintenance isn't quite as interesting or creative as evolutive maintenance - it's about fixing bugs and errors in the code. This sort of maintenance can feel like a chore, and ideally you want to minimize the amount of time you spend doing it. However, if you're following the SDLC closely, you shouldn't find too many bugs in your software.

The benefits of SDLC are obvious

The benefits of SDLC are clear. It puts process at the center of software engineering. Without those processes, it becomes incredibly difficult to build the software that stakeholders and users want. And if you don't care about users then, really, why build software at all?

It's true that DevOps has done a lot to change SDLC. Arguably, it is an area that is more important and more hotly debated than ever before. It's not difficult to find someone with an opinion on the best way to build something. Equally, as software becomes more fragmented and mutable, thanks to the emergence of cloud and architectural trends like microservices and serverless, the question of how we design, build and deploy software has never felt more urgent.

Read next: DevOps Engineering and Full-Stack Development - 2 Sides of the Same Agile Coin
How to integrate Firebase with NativeScript for cross-platform app development

Savia Lobo
22 May 2018
16 min read
NativeScript is now considered one of the hottest platforms attracting developers. By using XML, JavaScript (also Angular), minor CSS for the visual aspects, and the magical touch of incorporating native SDKs, the platform has adopted the best of both worlds. Plus, this allows applications to be cross-platform, which means your application can run on both Android and iOS! In this tutorial, we're going to see how we can use Firebase to create some awesome native applications. This article is an excerpt taken from the book 'Firebase Cookbook', written by Houssem Yahiaoui.

Getting started with NativeScript project

In order to start a NativeScript project, we will need to make our development environment Node.js ready. So, in order to do so, let's download Node.js. Head directly to https://nodejs.org/en/download/ and download the most suitable version for your OS. After you have successfully installed Node.js, you will have two utilities. One is the Node.js executable and the other will be npm, or the Node package manager. This will help us download our dependencies and install NativeScript locally.

How to do it...

In your terminal/cmd of choice, type the following command:

~> npm install -g nativescript

After installing NativeScript, you will need to add some dependencies. To know what your system is missing, simply type the following command:

~> tns doctor

This command will test your system and make sure that everything is in place. If not, you'll get the missing parts. In order to create a new project, you will have many options to choose from. Those options could be creating a new project with vanilla JavaScript, with TypeScript, from a template, or even better, using Angular. Head directly to your working directory and type the following command:

~> tns create <application-name>

This command will initialize your project and install the basic needed dependencies. With that done, we're good to start working on our NativeScript project.

Adding the Firebase plugin to our application

One of NativeScript's most powerful features is the possibility of incorporating truly native SDKs. So, in this context, we can install the native Firebase SDK on Android using the normal Gradle installation command. You can also do it on iOS via a Podfile if you are running macOS and want to create an iOS application along the way. However, the NativeScript ecosystem is pluggable, which means the ecosystem has plugins that extend certain functionalities. Those plugins usually incorporate native SDKs and expose the functionalities using JavaScript, so we can exploit them directly within our application. In this recipe, we're going to use the wonderfully easy-to-use Eddy Verbruggen Firebase plugin, so let's see how we can add it to our project.

How to do it...

Head to your terminal/cmd of choice, type the following command, and hit Return/Enter respectively:

tns plugin add nativescript-plugin-firebase

This command will install the necessary plugin and do the required configuration. To find out the application id, open your package.json file, where you will find the nativescript value:

"nativescript": { "id": "org.nativescript.<your-app-name>" }

Copy the id that you found in the preceding step and head over to your Firebase project console. Create a new Android/iOS application and paste that id over your bundle name. Download the google-services.json/GoogleServices-Info.plist files and paste google-services.json in your app/App_Resources/Android folder if you created an Android project.
If you've created an iOS project, then paste the GoogleServices-Info.plist in the app/App_Resources/iOS folder.

Pushing/retrieving data from the Firebase Realtime Database

Firebase stores data in a link-based manner, which makes adding and querying information really simple. The NativeScript Firebase plugin makes these operations much simpler with an easy-to-use API. So, let's discover how we can perform such operations. In this recipe, we're going to see how we can add and retrieve data in NativeScript and Firebase. Before we begin, we need to make sure that our application is fully configured with Firebase. You will also need to initialize the Firebase plugin with the application. To do that, open your project, head to your app.js file, and add the following import code:

var firebase = require("nativescript-plugin-firebase");

This will import the Firebase NativeScript plugin. Next, add the following lines of code:

firebase.init({}).then((instance) => { console.log("[*] Firebase was successfully initialised"); }, (error) => { console.log("[*] Huston we've an initialization error: " + error); });

The preceding code will simply go and initialize Firebase within our application.

How to do it...

After initializing our application, let's see how we can push some data to our Firebase Realtime Database. Let's start first by adding our interface, which will look similar to this one (Figure 1):

Figure 1: Ideas adding page.

The code behind it is as follows, and you can use this to implement the addition of new data to your bucket:

<Page xmlns="http://schemas.nativescript.org/tns.xsd" navigatingTo="onNavigatingTo" class="page"> <Page.actionBar> <ActionBar title="Firebase CookBook" class="action-bar"> </ActionBar> </Page.actionBar> <StackLayout class="p-20"> <TextField text="{{ newIdea }}" hint="Add here your shiny idea"/> <Button text="Add new idea to my bucket" tap="{{ addToMyBucket }}" class="btn btn-primary btn-active"/> </StackLayout> </Page>

Now let's see the JavaScript related to this UI for our behavior. Head to your view-model and add the following snippets inside the createViewModel function:

viewModel.addToMyBucket = () => { firebase.push('/ideas', { idea: viewModel.newIdea }).then((result) => { console.log("[*] Info : Your data was pushed !"); }, (error) => { console.log("[*] Error : While pushing your data to Firebase, with error: " + error); }); }

If you check your Firebase database, you will find your new entry present there. Once your data is live, you will need to think of a way to showcase all your shiny, new ideas. Firebase gives us a lovely event that we can listen to whenever a new child element is created. The following code shows you how to create the event handler for the addition of new child elements:

var onChildEvent = function(result) { console.log("Idea: " + JSON.stringify(result.value)); };

firebase.addChildEventListener(onChildEvent, "/ideas").then((snapshot) => { console.log("[*] Info : We've some data !"); });

After getting the newly-added child, it's up to you to find the proper way to bind your ideas. They are mainly going to be either lists or cards, but they could be any of the previously mentioned ones. To run and experience your new feature, use the following command:

~> tns run android # for android
~> tns run ios # for ios

How it works...

So what just happened? We defined a basic user interface that will serve us by adding those new ideas to our Firebase console application.
Next, the aim was to save all that information inside our Firebase Realtime Database using the same schema that Firebase uses. This is done via specifying a URL where all your information will be stored and then specifying the data schema. This will finally hold and define the way our data will be stored. We then hooked a listener to our data URL using firebase.addChildEventListener. This takes a function in which the next item will be held, and the data URL that we want our listener hook to listen on. In case you're wondering how this module or service works in NativeScript, the answer is simple. It's due to the way NativeScript works; one of NativeScript's powerful features is the ability to use native SDKs. So in this case, we're using and implementing the Firebase Database Android/iOS SDKs for our needs, and the plugin APIs we're using are the JavaScript abstraction of how we want to exploit our native calls.
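Beyond push, the plugin exposes other write helpers. As a minimal sketch - assuming the plugin's setValue(path, value) helper behaves as described in the nativescript-plugin-firebase README (treat the exact API surface as an assumption and verify it there) - overwriting a fixed path instead of generating a push key looks like this:

// Minimal sketch: write to a fixed path instead of a generated push key.
// setValue(path, value) is assumed from the plugin's README - verify it.
var firebase = require("nativescript-plugin-firebase");

function saveFeaturedIdea(ideaText) {
  return firebase.setValue('/featuredIdea', { idea: ideaText }).then(() => {
    console.log("[*] Info : Featured idea saved");
  }, (error) => {
    console.log("[*] Error : While saving the featured idea: " + error);
  });
}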
Authenticating using anonymous or password authentication

As we all know, Firebase supports both anonymous and password-based authentication, each with its own suitable use cases. So in this recipe, we're going to see how we can perform both anonymous and password authentication. You will need to initialize the Firebase plugin with the application. To do that, open your project, head to your app.js file, and add the following import:

var firebase = require("nativescript-plugin-firebase");

This will import the Firebase NativeScript plugin. Next, add the following lines of code:

firebase.init({}).then((instance) => { console.log("[*] Firebase was successfully initialised"); }, (error) => { console.log("[*] Huston we've an initialization error: " + error); });

The preceding code will go and initialize Firebase within our application.

How to do it...

Before we start, we need to create some UI elements. Your page will look similar to this one after you finish (Figure 2):

Figure 2: Application login page.

Now open your login page and add the following code snippets there:

<Page xmlns="http://schemas.nativescript.org/tns.xsd" navigatingTo="onNavigatingTo" class="page"> <Page.actionBar> <ActionBar title="Firebase CookBook" icon="" class="action-bar"> </ActionBar> </Page.actionBar> <StackLayout> <Label text="Login Page" textWrap="true" style="font-weight: bold; font-size: 18px; text-align: center; padding:20"/> <TextField hint="Email" text="{{ user_email }}" /> <TextField hint="Password" text="{{ user_password }}" secure="true"/> <Button text="LOGIN" tap="{{ passLogin }}" class="btn btn-primary btn-active" /> <Button text="Anonymous login" tap="{{ anonLogin }}" class="btn btn-success"/> </StackLayout> </Page>

Save that. Let's now look at the variables and functions in our view-model file. For that, let's implement the passLogin and anonLogin functions. The first one will be our normal email and password authentication, and the second will be our anonymous login function. To make this implementation come alive, type the following code lines on your page:

viewModel.anonLogin = () => { firebase.login({ type: firebase.LoginType.ANONYMOUS }).then((result) => { console.log("[*] Anonymous Auth Response:" + JSON.stringify(result)); },(errorMessage) => { console.log("[*] Anonymous Auth Error: "+errorMessage); }); }

viewModel.passLogin = () => { let email = viewModel.user_email; let pass = viewModel.user_password; firebase.login({ type: firebase.LoginType.PASSWORD, passwordOptions: { email: email, password: pass } }).then((result) => { console.log("[*] Email/Pass Response : " + JSON.stringify(result)); }, (error) => { console.log("[*] Email/Pass Error : " + error); }); }

Now, simply save your file and run it using the following command:

~> tns run android # for android
~> tns run ios # for ios

How it works...

Let's quickly understand what we've just done in the recipe: We've built the UI we needed as per the authentication type. If we want the email and password one, we will need the respective fields, whereas, for anonymous authentication, all we need is a button. Then, for both functions, we call the Firebase login function, specifying the connection type for both cases. After finishing that part, it's up to you to define what comes next and to retrieve that metadata from the API for your own needs later on; the sketch below shows one way to do that.
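For instance, the result object that firebase.login resolves with carries the authenticated user's metadata. The following minimal sketch stashes it on the view-model for later use; the exact property names (uid, email) are an assumption based on the plugin's typical login response, so verify them against the plugin's documentation:

// Minimal sketch: keep the logged-in user's metadata on the view-model.
// Property names on 'result' are assumptions - check the plugin README.
// viewModel.set assumes an Observable-based view-model, as is typical
// in NativeScript.
viewModel.onLoginSuccess = (result) => {
  viewModel.set("currentUserId", result.uid);      // unique user identifier
  viewModel.set("currentUserEmail", result.email); // empty for anonymous logins
  console.log("[*] Logged in as: " + (result.email || "anonymous"));
};

You could then call viewModel.onLoginSuccess(result) from inside the then handlers of the login functions shown above.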
Authenticating using Google Sign-In

Google Sign-In is one of the most popular integrated services in Firebase. It does not require any extra hassle, has the most functionality, and is popular among many apps. In this recipe, we're going to see how we can integrate Firebase Google Sign-In with our NativeScript project. You will need to initialize the Firebase plugin within the application. To do that, open your project, head to your app.js file, and add the following line:

var firebase = require("nativescript-plugin-firebase");

This will import the Firebase NativeScript plugin. Next, add the following lines of code:

firebase.init({}).then((instance) => { console.log("[*] Firebase was successfully initialised"); }, (error) => { console.log("[*] Huston we've an initialization error: " + error); });

The preceding code will go and initialize Firebase within our application. We will also need to install some dependencies. For that, underneath the nativescript-plugin-firebase folder | platform | Android | include.gradle file, uncomment the following entry for Android:

compile "com.google.android.gms:play-services-auth:$googlePlayServicesVersion"

Now save and build your application using the following command:

~> tns build android

Or uncomment this entry if you're building an iOS application:

pod 'GoogleSignIn'

Then, build your project using the following command:

~> tns build ios

How to do it...

First, you will need to create your button. So for this to happen, please go to your login-page.xml file and add the following button declaration:

<Button text="Google Sign-in" tap="{{ googleLogin }}" class="btn" style="color:red"/>

Now let's implement the googleLogin() function by using the following code snippet:

viewModel.googleLogin = () => { firebase.login({ type: firebase.LoginType.GOOGLE, }).then((result) => { console.log("[*] Google Auth Response: " + JSON.stringify(result)); },(errorMessage) => { console.log("[*] Google Auth Error: " + errorMessage); }); }

To build and experience your new feature, use the following command:

~> tns run android # for android
~> tns run ios # for ios

Now, once you click on the Google authentication button, you should have the following (Figure 3):

Figure 3: Account picking after clicking on Google Login button.

Don't forget to add your SHA-1 fingerprint code or the authentication process won't finish.

How it works...

Let's explain what just happened in the preceding code: We added the new button for Google authentication. Within the tap event of this button, we gave it the googleLogin() function. Within googleLogin(), we used the firebase.login function, giving it firebase.LoginType.GOOGLE as the type. Notice that, similar to normal Google authentication on a web platform, we can also give the hd, or hostedDomain, option. We could also filter the connection host we want by adding the following option under the login type:

googleOptions: { hostedDomain: "<your-host-name>" }

The hd option, or hostedDomain, is simply what's after the @ sign in an email address. So, for example, in the email ID cookbook@packtpub.com the hosted domain is packtpub.com. For some apps, you might want to limit the email IDs users can connect to your application with to just that host. This can be done by providing the hostedDomain parameter when you call the login function. When you look at the actual way we're making these calls, you will see that it's due to the powerful NativeScript feature that lets us exploit native SDKs. If you remember the Getting ready section of this recipe, we uncommented a section where we installed the native SDK for both Android and iOS. Besides the NativeScript Firebase plugin, you can also exploit the Firebase Auth SDK, which will let you exploit all supported Firebase authentication methods.

Adding dynamic behavior using Firebase Remote Config

Remote Config is one of the hottest features of Firebase and lets us play with all the different application configurations without too much of a headache. By a headache, we mean the process of building, testing, and publishing, which usually takes a lot of our time, even if it's just to fix a small float number that might be wrong. So in this recipe, we're going to see how we can use Firebase Remote Config within our application. In case you didn't choose the functionality by default when you created your application, please head to your build.gradle and Podfile and uncomment the Firebase Remote Config line in both files, or in the environment you're using with your application.

How to do it...

Actually, the integration part is quite easy. The tricky part is when you want to toggle states or alter some configuration. So think upon that heavily, because it will affect how your application works and will also affect the way you change properties. Let's suppose that within this NativeScript application we want to have a mode called Ramadan mode.
We want to create this mode for a special month where we wish to offer discounts, help our users with new promos, or even change our user interface to suit the spirit of it. So, let's see how we can do that:

firebase.getRemoteConfig({ developerMode: true, cacheExpirationSeconds: 1, properties: [{ key: "ramadan_promo_enabled", default: false }] }).then(function (result) { console.log("Remote Config: " + JSON.stringify( result.properties.ramadan_promo_enabled)); //TODO : Use the value to make changes. });

In the preceding code, and because we are still in development mode, we set the developerMode to be activated. We also set the cacheExpirationSeconds to one second. This is important because we don't want our settings to take a long time to affect our application during the development phase. This effectively lifts the usual fetch throttling, which will make the application fetch new data from our Firebase remote configuration every second. We can set the default values of each and every item within our Firebase remote configuration. This value will be the starting point for fetching any new values that might be present over the Firebase project console.
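As a concrete example of what could replace that //TODO comment, the following minimal sketch flips an observable flag on the view-model; ramadanMode is an illustrative property name rather than anything defined by the plugin, and viewModel.set assumes an Observable-based view-model, as is typical in NativeScript:

// Minimal sketch: drive the UI from the fetched Remote Config value.
// 'ramadanMode' is an illustrative property name, not a plugin API.
firebase.getRemoteConfig({
  developerMode: true,
  cacheExpirationSeconds: 1,
  properties: [{ key: "ramadan_promo_enabled", default: false }]
}).then(function (result) {
  var enabled = result.properties.ramadan_promo_enabled;
  viewModel.set("ramadanMode", enabled); // bound UI elements update automatically
});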
Now, let's see how we can wire that value from the project console. To do this, head to your Firebase project Console | Remote Config section | ADD YOUR FIRST PARAMETER button (Figure 4):

Figure 4: Firebase Remote Config parameter adding section.

Next, you will get a modal where you will add your properties and their values. Make sure to add the exact same key that's in your code, otherwise it won't work. The following screenshot shows the PARAMETERS tab of the console where you will add the properties (Figure 5):

Figure 5: While adding the new parameter

After adding them, click on the PUBLISH CHANGES button (Figure 6):

Figure 6: Publishing the newly created parameter.

With that, you're done. Exit your application and open it back up again. Watch how your console and your application fetch the new values. Then, it's up to you and your application to make the needed changes once the values are changed.

How it works...

Let's explain what just happened: We added back our dependencies in the build.gradle and Podfile so we can support the functionality we want to use. We selected and added the code that is responsible for giving the default values and for fetching the new changes. We also activated developer mode, which will help out in our development and staging phases. This mode will be disabled once we're in production. We set the cache expiration time, which is essential while in development so we can retrieve those values quickly. This too will be changed in production, by giving the cache a longer expiration time, because we don't want to burden our application with high-frequency fetch operations every second. We added our config parameter in our Firebase Remote Config parameters, gave it the necessary value, and published it. This final step controls the way our application feels and looks with each new change. We learned how to integrate Firebase with NativeScript using various recipes. If you've enjoyed this article, do check out 'Firebase Cookbook' to change the way you develop and make your app a first class citizen of the cloud.
Build an ARCore app with Unity from scratch

Sugandha Lahoti
21 May 2018
11 min read
In this tutorial, we will learn to install, build, and deploy Unity ARCore apps for Android. Unity is a leading cross-platform game engine that is exceptionally easy to use for building game and graphic applications quickly. Unity has developed something of a bad reputation in recent years due to its overuse in poor-quality games. It isn't because Unity can't produce high-quality games; it most certainly can. However, the ability to create games quickly often gets abused by developers seeking to release cheap games for profit. This article is an excerpt from the book, Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. The following is a summary of the topics we will cover in this article: Installing Unity and ARCore; Building and deploying to Android; Remote debugging; Exploring the code.

Installing Unity and ARCore

Installing the Unity editor is relatively straightforward. However, the version of Unity we will be using may still be in beta. Therefore, it is important that you pay special attention to the following instructions when installing Unity: Navigate a web browser to https://unity3d.com/unity/beta. At the time of writing, we will use the most recent beta version of Unity since ARCore is also still in beta preview. Be sure to note the version you are downloading and installing. This will help in the event you have issues working with ARCore. Click on the Download installer button. This will download UnityDownloadAssistant. Launch UnityDownloadAssistant. Click on Next and then agree to the Terms of Service. Click on Next again. Select the components, as shown: Install Unity in a folder that identifies the version, as follows: Click on Next to download and install Unity. This can take a while, so get up, move around, and grab a beverage. Click on the Finish button and ensure that Unity is set to launch automatically. Let Unity launch and leave the window open. We will get back to it shortly.

Once Unity is installed, we want to download the ARCore SDK for Unity. This will be easy now that we have Git installed. Follow the given instructions to install the SDK: Open a shell or Command Prompt. Navigate to your Android folder. On Windows, use this:

cd C:\Android

Type and execute the following:

git clone https://github.com/google-ar/arcore-unity-sdk.git

After the git command completes, you will see a new folder called arcore-unity-sdk. If this is your first time using Unity, you will need to go online to https://unity3d.com/ and create a Unity user account. The Unity editor will require that you log in on first use and from time to time. Now that we have Unity and ARCore installed, it's time to open the sample project by implementing the following steps: If you closed the Unity window, launch the Unity editor. The path on Windows will be C:\Unity 2017.3.0b8\Editor\Unity.exe. Feel free to create a shortcut with the version number in order to make it easier to launch the specific Unity version later. Switch to the Unity project window and click on the Open button. Select the Android/arcore-unity-sdk folder. This is the folder we used the git command to install the SDK to earlier, as shown in the following dialog: Click on the Select Folder button. This will launch the editor and load the project. Open the Assets/GoogleARCore/HelloARExample/Scenes folder in the Project window, as shown in the following excerpt: Double-click on the HelloAR scene, as shown in the Project window and in the preceding screenshot. This will load our AR scene into Unity.
At any point, if you see red console or error messages in the bottom status bar, this likely means you have a version conflict. You will likely need to install a different version of Unity. Now that we have Unity and ARCore installed, we will build the project and deploy the app to an Android device in the next section.

Building and deploying to Android

With most Unity development, we could just run our scene in the editor for testing. Unfortunately, when developing ARCore applications, we need to deploy the app to a device for testing. Fortunately, the project we are opening should already be configured for the most part. So, let's get started by following the steps in the next exercise: Open up the Unity editor to the sample ARCore project and open the HelloAR scene. If you left Unity open from the last exercise, just ignore this step. Connect your device via USB. From the menu, select File | Build Settings. Confirm that the settings match the following dialog: Confirm that the HelloAR scene is added to the build. If the scene is missing, click on the Add Open Scenes button to add it. Click on Build and Run. Be patient, first-time builds can take a while. After the app gets pushed to the device, feel free to test it, as you did with the Android version. Great! Now we have a Unity version of the sample ARCore project running. In the next section, we will look at remotely debugging our app.

Remote debugging

Having to connect a USB all the time to push an app is inconvenient. Not to mention that, if we wanted to do any debugging, we would need to maintain a physical USB connection to our development machine at all times. Fortunately, there is a way to connect our Android device via Wi-Fi to our development machine. Use the following steps to establish a Wi-Fi connection: Ensure that a device is connected via USB. Open Command Prompt or shell. On Windows, we will add C:\Android\sdk\platform-tools to the path just for the prompt we are working on. It is recommended that you add this path to your environment variables. Google it if you are unsure of what this means. Enter the following commands:

//WINDOWS ONLY
path C:\Android\sdk\platform-tools

//FOR ALL
adb devices
adb tcpip 5555

If it worked, you will see restarting in TCP mode port: 5555. If you encounter an error, disconnect and reconnect the device. Disconnect your device. Locate the IP address of your device by doing as follows: Open your phone and go to Settings and then About phone. Tap on Status. Note down the IP address. Go back to your shell or Command Prompt and enter the following:

adb connect [IP Address]

Ensure that you use the IP address you wrote down from your device. You should see connected to [IP Address]:5555. If you encounter a problem, just run through the steps again.

Testing the connection

Now that we have a remote connection to our device, we should test it to ensure that it works. Let's test our connection by doing the following: Open up Unity to the sample AR project. Expand the Canvas object in the Hierarchy window until you see the SearchingText object and select it, just as shown in the following excerpt: Hierarchy window showing the selected SearchingText object. Direct your attention to the Inspector window, on the right-hand side by default. Scroll down in the window until you see the text "Searching for surfaces…". Modify the text to read "Searching for ARCore surfaces…", just as we did in the last chapter for Android. From the menu, select File | Build and Run. Open your device and test your app.
Remotely debugging a running app

Now, building and pushing an app to your device this way will take longer, but it is far more convenient. Next, let's look at how we can debug a running app remotely by performing the following steps: Go back to your shell or Command Prompt. Enter the following command:

adb logcat

You will see a stream of logs covering the screen, which is not something very useful. Enter Ctrl + C (command + C on Mac) to kill the process. Enter the following command:

//ON WINDOWS
C:\Android\sdk\tools\monitor.bat

//ON LINUX/MAC
cd android-sdk/tools/
monitor

This will open Android Device Monitor. You should see your device on the list to the left. Ensure that you select it. You will see the log output start streaming in the LogCat window. Drag the LogCat window so that it is a tab in the main window, as illustrated: Android Device Monitor showing the LogCat window. Leave the Android Device Monitor window open and running. We will come back to it later. Now we can build, deploy, and debug remotely. This will give us plenty of flexibility later when we want to become more mobile. Of course, the remote connection we put in place with adb will also work with Android Studio. Yet, we still are not actually tracking any log output. We will output some log messages in the next section.

Exploring the code

Unlike Android, we were able to easily modify our Unity app right in the editor without writing code. In fact, given the right Unity extensions, you can make a working game in Unity without any code. However, for us, we want to get into the nitty-gritty details of ARCore, and that will require writing some code. Jump back to the Unity editor, and let's look at how we can modify some code by implementing the following exercise: From the Hierarchy window, select the ExampleController object. This will pull up the object in the Inspector window. Select the Gear icon beside Hello AR Controller (Script) and from the context menu, select Edit Script, as in the following excerpt: This will open your script editor and load the script, by default, MonoDevelop. Unity supports a number of Integrated Development Environments (IDEs) for writing C# scripts. Some popular options are Visual Studio 2015-2017 (Windows), VS Code (All), JetBrains Rider (Mac), and even Notepad++ (All). Do yourself a favor and try one of the options listed for your OS. Scroll down in the script until you see the following block of code:

public void Update () { _QuitOnConnectionErrors();

After the _QuitOnConnectionErrors(); line of code, add the following code:

Debug.Log("Unity Update Method");

Save the file and then go back to Unity. Unity will automatically recompile the file. If you made any errors, you will see red error messages in the status bar or console. From the menu, select File | Build and Run. As long as your device is still connected via TCP/IP, this will work. If your connection broke, just go back to the previous section and reset it. Run the app on the device. Direct your attention to Android Device Monitor and see whether you can spot those log messages.

Unity Update method

The Unity Update method is a special method that runs before/during a frame update or render. For your typical game running at 60 frames per second, this means that the Update method will be called 60 times per second as well, so you should be seeing lots of messages tagged as Unity. You can filter these messages by doing the following: Jump to the Android Device Monitor window.
Click on the green plus button in the Saved Filters panel, as shown in the following excerpt:

Adding a new tag filter

Create a new filter by entering a Filter Name (use Unity) and a Log Tag (use Unity), as shown in the preceding screenshot. Click on OK to add the filter. Select the new Unity filter. You will now see a list of filtered messages specific to the Unity platform when the app is running on the device. If you are not seeing any messages, check your connection and try to rebuild. Ensure that you saved your edited code file in MonoDevelop as well. Good job. We now have a working Unity setup with remote build and debug support. In this post, we installed Unity and the ARCore SDK for Unity. We then took a slight diversion by setting up a remote build and debug connection to our device using TCP/IP over Wi-Fi. Next, we tested out our ability to modify the C# script in Unity by adding some debug log output. Finally, we tested our code changes using the Android Device Monitor tool to filter and track log messages from the Unity app deployed to the device. To know how to set up web development with JavaScript in ARCore and look through the various sample ARCore templates, check out the book Learn ARCore - Fundamentals of Google ARCore. Getting started with building an ARCore application for Android. Unity plugins for augmented reality application development. Types of Augmented Reality targets.
How to use M functions within Microsoft Power BI for querying data

Amarabha Banerjee
21 May 2018
10 min read
Microsoft Power BI Desktop contains a rich set of data source connectors and transformation capabilities that support the integration and enhancement of source data. These features are all driven by a powerful functional language and query engine, M, which leverages source system resources when possible and can greatly extend the scope and robustness of the data retrieval process beyond the possibilities of the standard query editor interface alone. As with almost all BI projects, the design and development of the data access and retrieval process has great implications for the analytical value, scalability, and sustainability of the overall Power BI solution. Our article is an excerpt from the book Microsoft Power BI Cookbook, written by Brett Powell. This book shows how to leverage Microsoft Power BI and the development tools to create better data-driven analytics and visualizations. In this article, we dive into Power BI Desktop's Get Data experience and go through the process of establishing and managing data source connections and queries. Examples are provided of using the Query Editor interface and the M language directly to construct and refine queries to meet common data transformation and cleansing needs. In practice, and as per the examples, a combination of both tools is recommended to aid the query development process.

Viewing and analyzing M functions

Every time you click on a button to connect to any of Power BI Desktop's supported data sources or apply any transformation to a data source object, such as changing a column's data type, one or multiple M expressions are created reflecting your choices. These M expressions are automatically written to dedicated M documents and, if saved, are stored within the Power BI Desktop file as queries. M is a functional programming language like F#, and it's important that Power BI developers become familiar with analyzing and later writing and enhancing the M code that supports their queries.

Getting ready

Build a query through the user interface that connects to the AdventureWorksDW2016CTP3 SQL Server database on the ATLAS server and retrieves the DimGeography table, filtered to United States rows using the EnglishCountryRegionName column. Click on Get Data from the Home tab of the ribbon, select SQL Server from the list of database sources, and provide the server and database names. For the Data Connectivity mode, select Import. A navigation window will appear, with the different objects and schemas of the database. Select the DimGeography table from the Navigation window and click on Edit. In the Query Editor window, select the EnglishCountryRegionName column and then filter on United States from its dropdown.

Figure 2: Filtering for United States only in the Query Editor

At this point, a preview of the filtered table is exposed in the Query Editor and the Query Settings pane displays the previous steps.

Figure 3: The Query Settings pane in the Query Editor

How to do it...

Formula Bar

With the Formula Bar visible in the Query Editor, click on the Source step under Applied Steps in the Query Settings pane.
You should see the following formula expression:

Figure 4: The SQL.Database() function created for the Source step

Click on the Navigation step to expose the following expression:

Figure 5: The metadata record created for the Navigation step

The navigation expression (2) references the source expression (1). The Formula Bar in the Query Editor displays individual query steps, which are technically individual M expressions. It's convenient and very often essential to view and edit all the expressions in a centralized window, and for this, there's the Advanced Editor. M is a functional language, and it can be useful to think of query evaluation in M as similar to Excel spreadsheet formulas, in which multiple formulas can reference each other. The M engine can determine which expressions are required by the final expression to return and evaluate only those expressions. When configuring Power BI development tools, the display settings for both the Query Settings pane and the Formula Bar should be enabled under GLOBAL | Query Editor options.

Figure 6: Global layout options for the Query Editor

Alternatively, on a per-file basis, you can control these settings and others from the View tab of the Query Editor toolbar.

Figure 7: Property settings of the View tab in the Query Editor

Advanced Editor window

Given its importance to the query development process, the Advanced Editor dialog is exposed on both the Home and View tabs of the Query Editor. It's recommended to use the Query Editor when getting started with a new query and when learning the M language. After several steps have been applied, use the Advanced Editor to review and optionally enhance or customize the M query. As a rich, functional programming language, there are many M functions and optional parameters not exposed via the Query Editor; going beyond the limits of the Query Editor enables more robust data retrieval and integration processes.

Figure 8: The Home tab of the Query Editor

Click on Advanced Editor from either the View or Home tabs (Figure 8 and Figure 9, respectively). All M function expressions and any comments are exposed.

Figure 9: The Advanced Editor view of the DimGeography query

When developing retrieval processes for Power BI models, consider these common ETL questions: How are our queries impacting the source systems? Can we make our retrieval queries more resilient to changes in source data such that they avoid failure? Is our retrieval process efficient and simple to follow and support, or are there unnecessary steps and queries? Are our retrieval queries delivering sufficient performance to the BI application? Is our process flexible such that we can quickly apply changes to data sources and logic? M queries are not intended as a substitute for the workloads typically handled by enterprise ETL tools such as SSIS or Informatica. However, just as BI professionals would carefully review the logic and test the performance of SQL stored procedures and ETL packages supporting their various cubes and reporting environments, they should also review the M queries created to support Power BI models and reports.

How it works...

Two of the top performance and scalability features of M's engine are query folding and lazy evaluation. If possible, the M queries developed in Power BI Desktop are converted (folded) into SQL statements and passed to source systems for processing. M can also reduce the required resources for a given query by ignoring any unnecessary or redundant steps (variables). M is a case-sensitive language.
This includes referencing variables in M expressions (RenameColumns versus Renamecolumns) as well as the values in M queries. For example, the values "Apple" and "apple" are considered unique values in an M query; the Table.Distinct() function will not remove rows for one of the values. Variable names in M expressions cannot have spaces without a hash sign and double quotes. Per Figure 10, when the Query Editor graphical interface is used to create M queries this syntax is applied automatically, along with a name describing the M transformation applied. Applying short, descriptive variable names (with no spaces) improves the readability of M queries.  Query folding The query from this recipe was "folded" into the following SQL statement and sent to the ATLAS server for processing. Figure 10: The SQL statement generated from the DimGeography M query Right-click on the Filtered Rows step and select View Native Query to access the Native Query window from Figure 11: Figure 11: View Native Query in Query Settings Finding and revising queries that are not being folded to source systems is a top technique for enhancing large Power BI datasets. See the Pushing Query Processing Back to Source Systems recipe of Chapter 11, Enhancing and Optimizing Existing Power BI Solutions for an example of this process. M query structure The great majority of queries created for Power BI will follow the let...in structure as per this recipe, as they contain multiple steps with dependencies among them. Individual expressions are separated by commas. The expression referred to following the in keyword is the expression returned by the query. The individual step expressions are technically "variables", and if the identifiers for these variables (the names of the query steps) contain spaces then the step is placed in quotes, and prefixed with a # sign as per the Filtered Rows step in Figure 10. Lazy evaluation The M engine also has powerful "lazy evaluation" logic for ignoring any redundant or unnecessary variables, as well as short-circuiting evaluation (computation) once a result is determinate, such as when one side (operand) of an OR logical operator is computed as True. The order of evaluation of the expressions is determined at runtime; it doesn't have to be sequential from top to bottom. In the following example, a step for retrieving Canada was added and the step for the United States was ignored. Since the CanadaOnly variable satisfies the overall let expression of the query, only the Canada query is issued to the server as if the United States row were commented out or didn't exist. Figure 12: Revised query that ignores Filtered Rows step to evaluate Canada only View Native Query (Figure 12) is not available given this revision, but a SQL Profiler trace against the source database server (and a refresh of the M query) confirms that CanadaOnly was the only SQL query passed to the source database. 
Figure 13: Capturing the SQL statement passed to the server via SQL Server Profiler trace

There's more...

Partial query folding

A query can be "partially folded", in which a SQL statement is created resolving only part of an overall query. The results of this SQL statement would be returned to Power BI Desktop (or the on-premises data gateway) and the remaining logic would be computed using M's in-memory engine with local resources. M queries can be designed to maximize the use of the source system resources, by using standard expressions supported by query folding early in the query process. Minimizing the use of local or on-premises data gateway resources is a top consideration.

Limitations of query folding

No folding will take place once a native SQL query has been passed to the source system, for example, by passing a SQL query directly through the Get Data dialog. The following query, specified in the Get Data dialog, is included in the Source step:

Figure 14: Providing a user defined native SQL query

Any transformations applied after this native query will use local system resources. Therefore, the general implication for query development with native or user-defined SQL queries is that, if they're used, try to include all required transformations (that is, joins and derived columns), or use them to utilize an important feature of the source database not being utilized by the folded query, such as an index. Not all data sources support query folding, such as text and Excel files. Not all transformations available in the Query Editor or via M functions directly are supported by some data sources. The privacy levels defined for the data sources will also impact whether folding is used or not. SQL statements are not parsed before they're sent to the source system. The Table.Buffer() function can be used to avoid query folding. The table output of this function is loaded into local memory and transformations against it will remain local.

We have discussed effective techniques for accessing and retrieving data using Microsoft Power BI. Do check out this book Microsoft Power BI Cookbook for more information on using Microsoft Power BI for data analysis and visualization. Expert Interview: Unlocking the secrets of Microsoft Power BI. Tutorial: Building a Microsoft Power BI Data Model. Expert Insights: Ride the third wave of BI with Microsoft Power BI
This week on Packt Hub – 18 May 2018

Aarthi Kumaraswamy
18 May 2018
3 min read
We've just completely revamped our weekly newsletter. We'd love to hear from you on hub@packtpub.com. If you like what you see, make sure to subscribe and tell others too. Here's what you may have missed in the last 7 days – Tech news, insights and tutorials…

Featured interview

"The wider use of generator functions in Python 3 has made functional Python programming much more common." — Steven F. Lott

We recently interviewed Steven F. Lott, a programmer since the 70s, a true Python professional and best-selling author of a number of Python books. In this interview, Steven talks a bit about modern Python and how the language adapts well to the Functional paradigm, offering developers a range of solutions to build modern, cloud-based applications.

Tech news

Conferences covered this week
PyCon US 2018 Highlights: Quantum computing, blockchains, and serverless rule!
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Data news in depth
Shiny 1.1.0 releasing soon
pandas 0.23 released

Development & programming news in depth
What can you expect from the upcoming Java 11 JDK?
Android P new features: artificial intelligence, digital wellbeing, and simplicity
Firefox 60 arrives with exciting updates for web developers: Quantum CSS engine, new Web APIs and more
Unity announces a new automotive division and two-day Unity AutoTech Summit
SteamVR introduces new controllers for game developers, the SteamVR Input system
Amazon open sources Amazon Sumerian, its popular AR/VR app toolkit
Introducing Fuse Open, Fuse App Engine and apps-as-a-service
Barracuda announces Cloud-Delivered Web Application Firewall service

Cloud & networking news in depth
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Introducing Intel's OpenVINO computer vision toolkit for edge computing
Rackspace now supports Kubernetes-as-a-Service
Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider

Other News
Google employees quit over company's continued AI ties with the Pentagon
Twitter's disdain for third-party clients gets real

Tutorials

Data tutorials
Building a Microsoft Power BI Data Model
How to Build TensorFlow Models for Mobile and Embedded devices
What does the structure of a data mining architecture look like?
Getting started with Google Data Studio: An intuitive tool for visualizing BigQuery Data

Development & programming tutorials

Development tutorials
Building a chat app with Kotlin using Node.js
Is your web design responsive?
How to use arrays, lists, and dictionaries in Unity for 3D game development
How to create a generic reusable section for a single page based website
How to use Bootstrap grid system for responsive website design?

Programming tutorials
How to work with classes in Typescript
That '70s language: AWK programming
Regular expressions in AWK programming: What, Why, and How

Other tutorials
How to secure an Azure Virtual Network
Tips and tricks for troubleshooting and flying drones safely

This week's opinions, analysis, and insights

Data Insights
What can Google Duplex do for businesses?
30 common data science terms explained

Development & Programming Insights
Top frameworks for building your Progressive Web Apps (PWA)
What is a multi-layered software architecture?

Other Insights
5 pen testing rules of engagement: What to consider while performing Penetration testing
Cognitive IoT: How Artificial Intelligence is remolding Industrial and Consumer IoT
What are lightweight Architecture Decision Records?
BeyondCorp is transforming enterprise security
Polycloud: a better alternative to cloud agnosticism
Regular expressions in AWK programming: What, Why, and How

Pavan Ramchandani
18 May 2018
8 min read
AWK is a pattern-matching language. It searches for a pattern in a file and, upon finding the corresponding match, it performs the corresponding action on the input line. This pattern could consist of fixed strings or a pattern of text. This variable content or pattern is generally searched for with the help of regular expressions. Hence, regular expressions form an important part of the AWK programming language. Today we will introduce you to regular expressions in AWK programming and get started with string-matching patterns and the basic constructs to use with AWK. This article is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming.

What is a regular expression?

A regular expression, or regexpr, is a set of characters used to describe a pattern. A regular expression is generally used to match lines in a file that contain a particular pattern. Many Unix utilities operate on plain text files line by line, such as grep, sed, and awk. Regular expressions search for a pattern on a single line in a file. A regular expression doesn't search for a pattern that begins on one line and ends on another. Other programming languages may support this, notably Perl.

Why use regular expressions?

Generally, all editors have the ability to perform search-and-replace operations. Some editors can only search for patterns, others can also replace them, and others can also print the line containing that pattern. A regular expression goes many steps beyond this simple search, replace, and printing functionality, and hence it is more powerful and flexible. We can search for a word of a certain size, such as a word that has four characters or numbers. We can search for a word that ends with a particular character, let's say e. You can search for phone numbers, email IDs, and so on, and can also perform validation using regular expressions. They simplify complex pattern-matching tasks and hence form an important part of AWK programming. Other regular expression variations also exist, notably those for Perl.

Using regular expressions with AWK

There are mainly two types of regular expressions in Linux: basic regular expressions, which are used by vi, sed, grep, and so on, and extended regular expressions, which are used by awk, nawk, gawk, and egrep. Here, we will refer to extended regular expressions as regular expressions in the context of AWK. In AWK, regular expressions are enclosed in forward slashes, '/', (forming the AWK pattern) and match every input record whose text belongs to that set. The simplest regular expression is a string of letters, numbers, or both that matches itself. For example, here we use the ly regular expression string to print all lines that contain the ly pattern in them. We just need to enclose the regular expression in forward slashes in AWK:

$ awk '/ly/' emp.dat

The output on execution of this code is as follows:

Billy Chabra 9911664321 bily@yahoo.com M lgs 1900
Emily Kaur 8826175812 emily@gmail.com F Ops 2100

In this example, the /ly/ pattern matches when the current input line contains the ly sub-string, either as ly itself or as some part of a bigger word, such as Billy or Emily, and prints the corresponding line.

Regular expressions as string-matching patterns with AWK

Regular expressions are used as string-matching patterns with AWK in the following three ways. We use the '~' and '!~' match operators to perform regular expression comparisons:

/regexpr/: This matches when the current input line contains a sub-string matched by regexpr.
It is the most basic regular expression, which matches itself as a string or sub-string. For example, /mail/ matches only when the current input line contains the mail string as a string, a sub-string, or both. So, we will get lines with Gmail as well as Hotmail in the email ID field of the employee database as follows:

$ awk '/mail/' emp.dat

The output on execution of this code is as follows:

Jack Singh 9857532312 jack@gmail.com M hr 2000
Jane Kaur 9837432312 jane@gmail.com F hr 1800
Eva Chabra 8827232115 eva@gmail.com F lgs 2100
Ana Khanna 9856422312 anak@hotmail.com F Ops 2700
Victor Sharma 8826567898 vics@hotmail.com M Ops 2500
John Kapur 9911556789 john@gmail.com M hr 2200
Sam khanna 8856345512 sam@hotmail.com F lgs 2300
Emily Kaur 8826175812 emily@gmail.com F Ops 2100
Amy Sharma 9857536898 amys@hotmail.com F Ops 2500

If we do not specify any expression, AWK automatically matches against the whole line, so the preceding command is equivalent to the following:

$ awk '$0 ~ /mail/' emp.dat

The output on execution of this code is as follows:

Jack Singh 9857532312 jack@gmail.com M hr 2000
Jane Kaur 9837432312 jane@gmail.com F hr 1800
Eva Chabra 8827232115 eva@gmail.com F lgs 2100
Ana Khanna 9856422312 anak@hotmail.com F Ops 2700
Victor Sharma 8826567898 vics@hotmail.com M Ops 2500
John Kapur 9911556789 john@gmail.com M hr 2200
Sam khanna 8856345512 sam@hotmail.com F lgs 2300
Emily Kaur 8826175812 emily@gmail.com F Ops 2100
Amy Sharma 9857536898 amys@hotmail.com F Ops 2500

expression ~ /regexpr/: This matches if the string value of the expression contains a sub-string matched by regexpr. Generally, this left-hand operand of the matching operator is a field. For example, in the following command, we print all the lines in which the value in the second field contains a /Singh/ string:

$ awk '$2 ~ /Singh/{ print }' emp.dat

We can also use the expression as follows:

$ awk '{ if($2 ~ /Singh/) print}' emp.dat

The output on execution of the preceding code is as follows:

Jack Singh 9857532312 jack@gmail.com M hr 2000
Hari Singh 8827255666 hari@yahoo.com M Ops 2350
Ginny Singh 9857123466 ginny@yahoo.com F hr 2250
Vina Singh 8811776612 vina@yahoo.com F lgs 2300

expression !~ /regexpr/: This matches if the string value of the expression does not contain a sub-string matched by regexpr. Generally, this expression is also a field variable. For example, in the following example, we print all the lines that don't contain the Singh sub-string in the second field, as follows:

$ awk '$2 !~ /Singh/{ print }' emp.dat

The output on execution of the preceding code is as follows:

Jane Kaur 9837432312 jane@gmail.com F hr 1800
Eva Chabra 8827232115 eva@gmail.com F lgs 2100
Amit Sharma 9911887766 amit@yahoo.com M lgs 2350
Julie Kapur 8826234556 julie@yahoo.com F Ops 2500
Ana Khanna 9856422312 anak@hotmail.com F Ops 2700
Victor Sharma 8826567898 vics@hotmail.com M Ops 2500
John Kapur 9911556789 john@gmail.com M hr 2200
Billy Chabra 9911664321 bily@yahoo.com M lgs 1900
Sam khanna 8856345512 sam@hotmail.com F lgs 2300
Emily Kaur 8826175812 emily@gmail.com F Ops 2100
Amy Sharma 9857536898 amys@hotmail.com F Ops 2500

Any expression may be used in place of /regexpr/ in the context of ~ and !~. The expression here could also be an if, while, for, or do statement.

Basic regular expression construct

Regular expressions are made up of two types of characters: normal text characters, called literals, and special characters (such as *, +, ?, and .), called metacharacters. There are times when you want to match a metacharacter as a literal character.
In such cases, we prefix that metacharacter with a backslash (\), which is called an escape sequence. The basic regular expression construct can be summarized as follows. Here is the list of metacharacters, also known as special characters, that are used in building regular expressions:

^    $    .    [    ]    |    (    )    *    +    ?

The following list covers the remaining elements that are used in building a basic regular expression, apart from the metacharacters mentioned before:

Literal: A literal character (non-metacharacter), such as A, that matches itself.
Escape sequence: An escape sequence that matches a special symbol: for example, \t matches a tab.
Quoted metacharacter (\): A metacharacter prefixed with a backslash, such as \$, which matches the metacharacter literally.
Anchor (^): Matches the beginning of a string.
Anchor ($): Matches the end of a string.
Dot (.): Matches any single character.
Character classes ([...]): A character class [ABC] matches any one of the A, B, or C characters. Character classes may include abbreviations, such as [A-Za-z], which matches any single letter.
Complemented character classes ([^...]): A complemented character class such as [^0-9] matches any character except a digit.

These operators combine regular expressions into larger ones:

Alternation (|): A|B matches A or B.
Concatenation: AB matches A immediately followed by B.
Closure (*): A* matches zero or more As.
Positive closure (+): A+ matches one or more As.
Zero or one (?): A? matches the null string or A.
Parentheses (()): Used for grouping regular expressions and back-referencing; a grouped regular expression (r) can be referenced later using \n, where n is a digit.

Do check out the book Learning AWK Programming to learn more about the intricacies of the AWK programming language for text processing.

Read More: What is the difference between functional and object-oriented programming? What makes a programming language simple or complex?

How to use Bootstrap grid system for responsive website design?

Savia Lobo
18 May 2018
12 min read
Bootstrap Origins

In 2011, Bootstrap was created by two Twitter employees (Mark Otto and Jacob Thornton) to address the fragmentation of internal tools/platforms. Bootstrap aimed to provide consistency among the different web applications that were internally developed at Twitter, to reduce redundancy and increase adaptability and reusability. As digital creators, we should always aim to make our applications adaptable and reusable. This will help keep coherency between applications and speed up processes, as we won't need to create basic foundations over and over again. In today's tutorial, you will learn what Bootstrap is, how it relates to Responsive Web Design, and its importance to the web industry. When Twitter Blueprint was born, it provided a way to document and share common design patterns/assets within Twitter. This alone is an amazing feature that would make Bootstrap an extremely useful framework. With this, more internal developers began contributing towards the Bootstrap project as part of Hackathon week, and the project just exploded. Not long after, it was renamed Bootstrap, as we know and love it today, and was released as an open source project to the community. A core team led by Mark and Jacob, along with a passionate and growing community of developers, helped to accelerate the growth of Bootstrap. In early 2012, after a lot of contributions from the core team and the community, Bootstrap 2 was born. It had come a long way from being a framework for providing internal consistency among Twitter tools. It was now a responsive framework using a 12-column grid system. It also provided inbuilt support for Glyphicons and a plethora of other new components. In 2013, Bootstrap 3 was released with a mobile-first approach to design and a fully redesigned set of components using the immensely popular flat design. This is the version many websites use today, and it is very suitable for most developers. Bootstrap 4 is the latest stable release. This article is an excerpt taken from the book, 'Responsive Web Design by Example', written by Frahaan Hussain.

Why use Bootstrap?

You probably have a reasonable idea of why you would use Bootstrap for developing websites after reading its history, but there is more to it. Simply put, it provides the following:

A responsive grid, following the mobile-first design philosophy mentioned above.
Cross-browser compatibility, using Normalize.css to ensure elements render consistently across all browsers (which isn't a very easy task). You might be wondering why it's difficult. Simply put, there are several different browsers, each with a plethora of versions, which all render content differently. I've seen some browsers put a border around an image by default, whereas some browsers don't. This type of inconsistency proves very bad for user experience.
A plethora of UI components: by providing polished UI components, we developers can bring our creativity to life much more easily. These components usually allow a team to increase their development velocity, since they start from a solid, tried-and-tested foundation. They not only provide good design, but they are usually implemented using best practices in terms of performance and accessibility.
A very compact size with only a small footprint.
Really fast development; it doesn't get in the way like many other frameworks, but allows your creativity to shine through.
An extremely easy start to using Bootstrap in your website.
Bundled common JavaScript plugins, such as jQuery.
Excellent documentation.
Customizable, allowing you to remove any unnecessary features.
An amazing community that is always ready, 24/7, to help.

It's pretty clear now that Bootstrap is an amazing framework and that it will help provide consistency among our projects and aid cross-browser responsive design. But why use Bootstrap over other frameworks? There are countless responsive frameworks like Bootstrap out there, such as Foundation, W3.CSS, and Skeleton, to mention a few. Bootstrap, however, was one of the first responsive frameworks and is by far the most developed, with an ever-growing community. It has official and unofficial documentation online that other frameworks can't match, and it is constantly being updated, which makes it the right choice for any website developer. Also, most JavaScript frameworks, such as Angular and React, have bindings to Bootstrap components that will reduce the amount of code and time spent binding it with another framework. It can also be used with tools such as SASS to further customize the provided components.

Bootstrap's grid system

First, let's cover what a grid system is in general, regardless of the framework you choose to develop your amazing website on top of. Without a framework, CSS would be used to implement the grid. However, a framework like Bootstrap handles all of the CSS side and provides us with easy-to-use classes. A responsive grid system is composed of two main elements:

Columns: These are the horizontal containers for storing content on a single row
Rows: These are top-level containers for storing columns

Your website will have at least one row, but it can have more. Each row can contain containers that span a set number of columns. For example, if the grid system had 100 columns, then a container that spans 50 would be half the width of the browser and/or parent element.

Basics of Bootstrap

Bootstrap's grid system consists of 12 columns that can be used to display content. Bootstrap also uses containers (methods for storing the website's content), rows, and columns to aid in the layout and alignment of the web page's content. All of these employ HTML classes for usage and will be explained very shortly. Their purposes are as follows:

Containers are used to group snippets of the website's content, and they, in turn, allow manipulation without disrupting the internal content's flow. There are two different types of containers:
.container: Used for a fixed width, which is set by Bootstrap
.container-fluid: Used for full width, to span the entire browser

Rows are used to horizontally group columns, which aids in lining up the site's content properly:
.row: There is only one type of row

Columns, mentioned previously, are a way of setting how wide content should be.
The following are the classes used for columns:

.col-xs: Designed to display the content only on extra-small screens
Max container width: none
Triggered when the browser width is below 576px

.col-sm: Designed to display the content only on small screens
Max container width: 540px
Triggered when the browser width is at or above 576px and below 768px

.col-md: Designed to display the content only on medium screens
Max container width: 720px
Triggered when the browser width is at or above 768px and below 992px

.col-lg: Designed to display the content only on large screens
Max container width: 960px
Triggered when the browser width is at or above 992px and below 1200px

.col-xl: Designed to display the content only on extra-large screens
Max container width: 1140px
Triggered when the browser width is at or above 1200px

.col: Designed to be triggered on all screen sizes

To set a column's width, we simply append an integer ranging from 1 to 12 to the end of the class, like so:

.col-6: Spans six columns on all screen sizes
.col-md-6: Spans six columns only on medium screen sizes and up

Later in this chapter, we will run through some examples of how to use these features and how they work together.

Usage and examples

To use the aforementioned features, the structure is as follows:

div with container class
    div with row class
        div with column class
            Content
        div with column class
            Content
        div with column class
            Content
        div with column class
            Content
    div with row class
        div with column class
            Content
        div with column class
            Content
        div with column class
            Content
        div with column class
            Content
        div with column class
            Content
        div with column class
            Content

The following examples may have some CSS styling applied; this does not affect their usage.

Equal-width columns example

We will start off with a simple example that consists of one row and three equal columns on all screen sizes. The following code produces the aforementioned result:

You may be scratching your head in regards to the column classes, as they have no numbers appended. This is an amazing feature that will come in useful very often. It allows us, as web developers, to add columns easily, without having to update the numbers, if the width of the columns is equal. In this example, there are three columns, which means the three divs equally span their thirds of the row.

Multi-row, equal-width columns example

Now let's extend the previous example to multiple rows. The following code produces the aforementioned result:

As you can see, by adding a new row, the columns automatically go to the next row. This is extremely useful for grouping content together.

Multi-row, equal-width columns without multiple rows example

The title of this example may seem confusing, but read it carefully: we will now cover creating multiple rows using only a single row class. This can be achieved with the help of a display utility class called w-100. The following code produces the aforementioned result:

The example shows that multiple row divs are not required for multiple rows. But the result isn't exactly identical, as there is no gap between the rows. This is useful for separating content that is still similar. For example, on a social network, it is common to have posts, and each post will contain information such as its date, title, description, and so on. Each post could be its own row, but within the post, the individual pieces of information could be separated using this class.
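The code listings for these examples appear as screenshots in the book, so here is a minimal, hedged reconstruction of the equal-width and w-100 examples, using the standard Bootstrap 4 classes described above (the placeholder content and exact markup are assumptions; the book's listings may differ):

<div class="container">
  <!-- one row, three equal-width columns on all screen sizes -->
  <div class="row">
    <div class="col">Column 1</div>
    <div class="col">Column 2</div>
    <div class="col">Column 3</div>
  </div>
  <!-- a second visual "row" without a second row class: w-100 forces a line break -->
  <div class="row">
    <div class="col">Column 1</div>
    <div class="col">Column 2</div>
    <div class="w-100"></div>
    <div class="col">Column 3</div>
    <div class="col">Column 4</div>
  </div>
</div>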
Differently sized columns

Up until now, we have only created rows with equal-width columns. These are useful, but not as useful as being able to set individual sizes. As mentioned in the Bootstrap grid system section, we can easily change the column width by appending a number ranging from 1 to 12 to the end of the col class. The following code produces the aforementioned result:

As you can see, setting the explicit width of a column is very easy, but this applies the width to all screen sizes. You may want it only to be applied on certain screen sizes. The next section will cover this.

Differently sized columns with screen size restrictions

Let's take the previous example and extend it to change size responsively on differently sized screens. On extra-large screens, the grid will look like the following; on all other screen sizes, it will appear with equal-width columns. The following code produces the aforementioned result:

Now we are beginning to use breakpoints, which provide a way of creating multiple layouts with minimal extra code, to make full use of the available real estate.

Mixing and matching

We aren't restricted to choosing only one breakpoint; we are able to set breakpoints for all the available screen sizes. The figures (one per screen size: extra-small, small, medium, large, and extra-large) illustrate the different layouts. The following code produces the aforementioned results:

It isn't necessary for all divs to have the same breakpoints, or to have breakpoints at all.

Vertical alignment

The previous examples provide functionality for most use cases, but sometimes the need may arise to align objects vertically. This could technically be done with empty divs, but that wouldn't be a very elegant solution. Instead, there are alignment classes to help with this, as can be seen here: rows can be vertically aligned in one of three positions. The following code produces the aforementioned result:

We aren't restricted to only aligning rows; we can easily align columns relative to each other, as demonstrated here. The following code produces the aforementioned result:

Horizontal alignment

As we vertically aligned content in the previous section, we will now cover how easy it is to align content horizontally. The following figures show the results of horizontal alignment. The following code produces the aforementioned result:

Column offsetting

The need may arise to position content with a slight offset. If the content isn't centered, or at the start or end, this can become problematic, but using column offsetting we can overcome this issue. Simply add an offset class, with the screen size to target, and how many columns (1-12) the content should be offset by, as can be seen in the following example. The following code produces the aforementioned result:

Grid wrap-up

The examples covered so far will suffice for most websites. There are more techniques for manipulating the grid, which can be found on Bootstrap's website. If you tried any of the examples, you may have noticed cascading from smaller screen-size classes to larger screen-size classes. This occurs when there are no explicit classes set for a certain screen size.

Bootstrap components

There is a plethora of amazing components provided with Bootstrap, saving us the time of creating them from scratch. There are components for dropdowns, buttons, images, and so much more. Before moving on to component usage, the short sketch below ties together the sizing, breakpoint, and offset classes covered in this section.
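This is a minimal, hedged sketch with placeholder content (the book's own listings are screenshots and may differ), combining explicit widths, per-breakpoint widths, and offsets:

<div class="container">
  <!-- full width on extra-small/small, half width on medium/large, quarter width on extra-large -->
  <div class="row">
    <div class="col-12 col-md-6 col-xl-3">A</div>
    <div class="col-12 col-md-6 col-xl-3">B</div>
    <div class="col-12 col-md-6 col-xl-3">C</div>
    <div class="col-12 col-md-6 col-xl-3">D</div>
  </div>
  <!-- a 4-column-wide div, pushed right by 4 columns on medium screens and up -->
  <div class="row">
    <div class="col-4 offset-md-4">Offset content</div>
  </div>
</div>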
The usage of these components is very similar to that of the grid system: the same HTML elements we know and love are used, with CSS classes to modify and display Bootstrap constructs. I won't go over every component that Bootstrap offers, as that would require an encyclopedia in itself, and many of the commonly used ones will be covered in future chapters through example projects. I would, however, recommend taking a look at some of the components on Bootstrap's website. If you have found this post useful, do check out this book, 'Responsive Web Design by Example', to build engaging responsive websites using frameworks like Bootstrap and upgrade your skills as a web designer.

Get ready for Bootstrap v4.1; Web developers to strap up their boots
Web Development with React and Bootstrap
Bootstrap 4 Objects, Components, Flexbox, and Layout

How to secure an Azure Virtual Network

Gebin George
18 May 2018
7 min read
The most common question that anyone asks when they buy a service is: can it be secured? The answer to that question, in this case, is absolutely yes. In this tutorial, we will learn to secure the connection between virtual machines. On top of the security Microsoft provides for Azure as a vendor, there are some configurations that you can do at your end to increase the level of security of your virtual network. For a higher level of security, you can use the following:

NSG: It is like a firewall that controls the inbound and outbound traffic by specifying which traffic is allowed to flow to/from the NIC/subnet
Distributed denial of service (DDoS) protection: It is used to prevent DDoS attacks and, at the time of writing, is in preview

This tutorial is an excerpt from the book, Hands-On Networking with Azure, written by Mohamed Waly.

Network Security Groups (NSG)

An NSG controls the flow of traffic by specifying which traffic is allowed to enter or exit the network.

Creating an NSG

Creating an NSG is a pretty straightforward process. To do it, you need to follow these steps:

Navigate to the Azure portal, and search for network security groups, as shown in the following screenshot:

Figure 2.11: Searching for network security groups

Once you have clicked on it, a new blade will be opened wherein all the created NSGs are located, as shown in the following screenshot:

Figure 2.12: Network security groups blade

Click on Add and a new blade will pop up, where you have to specify the following:

Name: The name of the NSG
Subscription: The subscription, which will be charged for NSG usage
Resource group: The resource group within which the NSG will be located as a resource
Location: The region where this resource will be created

Figure 2.13: Creating an NSG

Once you have clicked on Create, the NSG will be created within seconds.

Inbound security rules

By default, all the subnets and NICs that are not associated with an NSG have all inbound traffic allowed, and once they are associated with an NSG, the following inbound security rules are assigned to them as the default part of any NSG:

AllowVnetInBound: Allows all the inbound traffic that comes from a virtual network
AllowAzureLoadBalancerInBound: Allows all the inbound traffic that comes from the Azure Load Balancer
DenyAllInbound: Denies all the inbound traffic that comes from any source

Figure 2.14: Default inbound security rules

As shown in the previous screenshot, a rule consists of some properties, such as PRIORITY, NAME, PORT, and so on. It is important to understand what these properties mean for a better understanding of security rules. So, let's go ahead and explain them:

PRIORITY: A number assigned to each rule to specify which rule has a higher priority than another. The lower the number, the higher the priority. You can specify a priority with any number between 100 and 4096.
NAME: The name of the rule. The same name cannot be reused within the same network security group.
PORT: The allowed port through which the traffic will flow to the network.
PROTOCOL: Specifies whether the protocol you are using is TCP or UDP.
SOURCE and DESTINATION: The source and destination can each be Any, an IP address range, or a service tag.

The default rules cannot be deleted, but they can be overridden by rules with higher priorities, and you can display them by clicking on Default rules. You can customize your own inbound rules by following these steps:

On the Inbound security rules blade, click on Add. A new blade will pop up, where you have to specify the following:

Source: The source can be Any, an IP address range, or a service tag.
It specifies the incoming traffic from a specific source IP address range that will be allowed or denied by this rule.
Source port ranges: You can provide a single port, such as 80, a port range, such as 1024-65535, or a comma-separated list of single ports and/or port ranges, such as 80, 1024-65535. This specifies on which ports traffic will be allowed or denied by this rule. Provide an asterisk (*) to allow traffic on any port.
Destination: The destination can be Any, an IP address range, or a virtual network. It specifies the outgoing traffic to a specific destination IP address range that will be allowed or denied by this rule.
Destination port ranges: What applies to the source port ranges also applies to the destination port ranges.
Protocol: It can be Any, TCP, or UDP.
Action: Whether to Allow the traffic or to Deny it.
Priority: As mentioned earlier, the lower the number, the higher the priority. The priority number must be between 100 and 4096.
Name: The name of the rule.
Description: The description of the rule, which will help you to differentiate between rules.

In our scenario, I want to allow all the incoming connections to access a website published on a web server located in a virtual network, as shown in the following screenshot:

Figure 2.15: Creating an inbound security rule

Once you click on OK, the rule will be created.

Outbound security rules

Outbound security rules are no different than inbound security rules, except that inbound rules apply to inbound traffic and outbound rules apply to outbound traffic. Otherwise, everything else is similar.

Associating the NSG

Once you have the NSG created, you can associate it with either an NIC or a subnet.

Associating the NSG to an NIC

To associate the NSG to an NIC, you need to follow these steps:

Navigate to the network security group that you have created and then select Network interfaces, as shown in the following screenshot:

Figure 2.16: Associated NICs to an NSG

Click on Associate. A new blade will pop up, from which you need to select the NIC that you want to associate with the NSG, as shown in the following screenshot:

Figure 2.17: NICs to be associated to the NSG

Voila! You are done.

Associating the NSG to a subnet

To associate the NSG to a subnet, you need to follow these steps:

Navigate to the network security group that you have created and then select Subnets, as shown in the following screenshot:

Figure 2.18: Associated subnets to an NSG

Click on Associate. A new blade will pop up, where you have to specify the virtual network within which the subnet exists, as shown in the following screenshot:

Figure 2.19: Choosing the VNet within which the subnet exists

Then, you need to specify which subnet of the VNet you want to associate the NSG with, as shown in the following screenshot:

Figure 2.20: Selecting the subnet to which the NSG will be associated

Once the subnet is selected, click on OK, and it will take some seconds for it to be associated with the NSG.

Azure DDoS protection

DDoS attacks have become widespread lately; they exhaust an application's resources, making it unavailable for use, and you can expect an attack of that type at any time. Recently, Microsoft announced support for Azure DDoS protection as a service for protecting Azure resources, such as Azure VMs, Load Balancers, and Application Gateways. Azure DDoS protection comes in two flavors:

Basic: This type has been around for a while, as it is already enabled for Azure services to mitigate DDoS attacks. It incurs no charges.
Standard: This flavor comes with more enhancements for mitigating attacks, especially for Azure VNets. At the time of writing this book, Azure DDoS protection standard is in preview and is not available in the portal; you need to request it by filling out a form that is available here.

If you found this post useful, do check out the book Hands-On Networking with Azure, to design and implement Azure networking for Azure VMs.

Read More

The Microsoft Azure Stack Architecture
What is Azure API Management?
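To recap the NSG workflow from this tutorial in script form, here is a minimal, hedged Azure CLI sketch; every resource name below is a placeholder, and the commands assume the resource group, VNet, and subnet already exist:

# Create the NSG (equivalent to the portal's Add blade)
$ az network nsg create --resource-group MyRG --name MyNSG --location westeurope

# Add an inbound rule allowing web traffic on port 80, like the scenario above
$ az network nsg rule create --resource-group MyRG --nsg-name MyNSG --name AllowHTTP --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 80

# Associate the NSG with a subnet
$ az network vnet subnet update --resource-group MyRG --vnet-name MyVNet --name MySubnet --network-security-group MyNSG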

What is a multi layered software architecture?

Packt Editorial Staff
17 May 2018
7 min read
Multi layered software architecture is one of the most popular architectural patterns today. It moderates the increasing complexity of modern applications. It also makes it easier to work in a more agile manner. That's important when you consider the dominance of DevOps and other similar methodologies today. Sometimes called tiered architecture, or n-tier architecture, a multi layered software architecture consists of various layers, each of which corresponds to a different service or integration. Because each layer is separate, making changes to each layer is easier than having to tackle the entire architecture. Let's take a look at how a multi layered software architecture works, and what its advantages and disadvantages are. This has been taken from the book Architectural Patterns. Find it here.

What does a layered software architecture consist of?

Before we get into multi layered architecture, let's start with the simplest form of layered architecture: three-tiered architecture. This is a good place to start because all layered software architecture contains these three elements. These are the foundations:

Presentation layer: This is the first and topmost layer present in the application. It provides presentation services, that is, presentation of content to the end user through a GUI. This tier can be accessed through any type of client device, like a desktop, laptop, tablet, mobile, thin client, and so on. For the content to be displayed to the user, the relevant web pages should be fetched by the web browser or other presentation component running in the client device. To present the content, it is essential for this tier to interact with the tiers preceding it.

Application layer: This is the middle tier of this architecture. This is the tier in which the business logic of the application runs. Business logic is the set of rules that are required for running the application as per the guidelines laid down by the organization. The components of this tier typically run on one or more application servers.

Data layer: This is the lowest tier of this architecture and is mainly concerned with the storage and retrieval of application data. The application data is typically stored in a database server, file server, or any other device or media that supports data access logic and provides the necessary steps to ensure that only the data is exposed, without providing any access to the data storage and retrieval mechanisms. The data tier does this by providing an API to the application tier. The provision of this API ensures complete transparency for the data operations done in this tier, without affecting the application tier. For example, updates or upgrades to the systems in this tier do not affect the application tier of this architecture.

The diagram below shows how a simple layered architecture with 3 tiers works:

These three layers are essential. But other layers can be built on top of them. That's when we get into multi layered architecture. It's sometimes called n-tiered architecture because the number of tiers or layers (n) could be anything! It depends on what you need and how much complexity you're able to handle.

Multi layered software architecture

A multi layered software architecture still has the presentation layer and data layer. It simply splits up and expands the application layer. These additional aspects within the application layer are essentially different services.
This means your software should now be more scalable and have extra dimensions of functionality. Of course, the distribution of application code and functions among the various tiers will vary from one architectural design to another, but the concept remains the same. The diagram below illustrates what a multi layered software architecture looks like. As you can see, it's a little more complex than a three-tiered architecture, but it does increase scalability quite significantly:

What are the benefits of a layered software architecture?

A layered software architecture has a number of benefits; that's why it has become such a popular architectural pattern in recent years. Most importantly, tiered segregation allows you to manage and maintain each layer accordingly. In theory, it should greatly simplify the way you manage your software infrastructure. The multi layered approach is particularly good for developing web-scale, production-grade, and cloud-hosted applications very quickly and relatively risk-free. It also makes it easier to update any legacy systems: when your architecture is broken up into multiple layers, the changes that need to be made should be simpler and less extensive than they might otherwise have to be.

When should you use a multi layered software architecture?

Clearly, the argument for a multi layered software architecture is pretty compelling. However, there are some instances when it is particularly appropriate:

If you are building a system in which it is possible to split the application logic into smaller components that could be spread across several servers. This could lead to the design of multiple tiers in the application tier.
If the system under consideration requires faster network communications, high reliability, and great performance, then n-tier has the capability to provide that, as this architectural pattern is designed to reduce the overhead caused by network traffic.

An example of a multi layered software architecture

We can illustrate the workings of a multi layered architecture with the example of a shopping cart web application, which is present on all e-commerce sites. The shopping cart web application is used by the e-commerce site user to complete the purchase of items through the e-commerce site. You'd expect the application to have several features that allow the user to:

Add selected items to the cart
Change the quantity of items in their cart
Make payments

The client tier, which is present in the shopping cart application, interacts with the end user through a GUI. The client tier also interacts with the application that runs in the application servers present in multiple tiers. Since the shopping cart is a web application, the client tier contains the web browser. The presentation tier present in the shopping cart application displays information related to services like browsing merchandise, buying them, adding them to the shopping cart, and so on. The presentation tier communicates with other tiers by sending results to the client tier and all other tiers present in the network. The presentation tier also makes calls to database stored procedures and web services. All these activities are done with the objective of providing a quick response time to the end user.
The presentation tier plays a vital role by acting as the glue that binds the entire shopping cart application together, allowing the functions present in different tiers to communicate with each other and display the outputs to the end user through the web browser. In this multi layered architecture, the business logic required for processing activities, such as the calculation of shipping costs, is pulled from the application tier to the presentation tier. The application tier also acts as the integration layer and allows the applications to communicate seamlessly with both the data tier and the presentation tier. The last tier, the data tier, is used to maintain data. This layer typically contains database servers. It maintains data independently from the application server and the business logic. This approach provides enhanced scalability and performance to the data tier.

Read next

Microservices and Service Oriented Architecture
What is serverless architecture and why should I be interested?
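To make the tier separation concrete, here is a minimal, illustrative Python sketch of the shopping cart example; it is not from the book, and all names and the flat shipping rate are invented for illustration. Each class stands in for a tier; in a real deployment, each tier would run on its own server(s) and communicate over the network:

# Data tier: hides storage behind a small API; the dict stands in for a database server.
class CartRepository:
    def __init__(self):
        self._items = {}

    def add_item(self, name, qty):
        self._items[name] = self._items.get(name, 0) + qty

    def all_items(self):
        return dict(self._items)

# Application tier: business rules only; reaches data solely through the repository API.
class CartService:
    SHIPPING_RATE = 2.50  # hypothetical per-item business rule

    def __init__(self, repo):
        self._repo = repo

    def add_to_cart(self, name, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._repo.add_item(name, qty)

    def list_items(self):
        return self._repo.all_items()

    def shipping_cost(self):
        return self.SHIPPING_RATE * sum(self.list_items().values())

# Presentation tier: formats output for the user; contains no business logic.
def render_cart(service):
    lines = [f"{name} x {qty}" for name, qty in service.list_items().items()]
    lines.append(f"shipping: {service.shipping_cost():.2f}")
    return "\n".join(lines)

repo = CartRepository()
service = CartService(repo)
service.add_to_cart("book", 2)
print(render_cart(service))  # book x 2, shipping: 5.00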

How to create a generic reusable section for a single page based website

Savia Lobo
17 May 2018
14 min read
There are countless variations when it comes to the different sections that can be incorporated into the design of a single page website. In this tutorial, we will cover how to create a generic section that can be extended to multiple sections. This section provides the ability to display any information your website needs. Single page sections are commonly used to display the following data to the user:

Contact form (will be implemented in the next chapter).
About us: This can be as simple as a couple of paragraphs talking about the company/individual, or more complex with images, even showing the team and their roles.
Projects/work: Any work you or the company has done and would like to showcase. They are usually linked to external pages or pop-up boxes containing more information about the project.
Useful company info, such as opening times.

These are just some of the many uses for sections in a single page website. A good rule of thumb is that if it can be a page on another website, it can most likely be adapted into sections on a single page website. Also, depending on the amount of information a single section has, it could potentially be split into multiple sections. This article is an excerpt taken from the book, 'Responsive Web Design by Example', written by Frahaan Hussain.

Single page section examples

Let's go through some examples of the sections mentioned above.

Example 1: Contact form

As can be seen in the contact form from Richman, the elements used are very similar to those of a contact page. A form is used, with inputs for the various pieces of information required from the user, along with a button for submission. Not all contact forms will have the same fields. Put what you need; it may be more or less, there is no right or wrong answer. Also, at the bottom of the section is the company's logo, along with some written contact information, which is also very common. Some websites also display a map, usually using the Google Maps API; these mainly have a physical presence, such as a store.

Website link—http://richman-kcm.com/

Example 2: About us

This is an excellent example of an about us section that uses the following elements to convey the information:

Images: Display the individual's face. Creates a very personal touch to the otherwise digital website.
Title: Used to display the individual's name. This can also be an image if you want a fancier title.
Simple text: Talks about who the person is and what they do.
Icons: Linking to the individual's social media accounts.

Website link—http://designedbyfew.com/

Example 3: Projects/work

This website shows its work off very elegantly and cleanly, using images and little text. It also provides a carousel-like slider to display the work, which is extremely useful for displaying the content bigger without showing all of it at once, and it allows a lot of content to fit in a small section.

Website link—http://peeltheorange.com/#recent-work

Example 4: Opening times

This website uses a background image, similar to the introduction section created in the previous chapter, and an additional image on top to display the opening times. This can also be achieved using a mixture of text and CSS styling for various facets, such as the border.

Website link—http://www.mumbaigate.co.uk/

Implementing a generic reusable single page section

We will now create a generic section, which can easily be modified and reused, for our single page portfolio website.
But we still need some sort of layout/design in mind before we implement the section, so let's go with an Our Team style section.

What will the Our Team section contain?

The Our Team section will be a bit simpler, but it can easily be modified to accommodate the animations and styles displayed on the previously mentioned websites. It will be similar to the following example:

Website link—http://demo.themeum.com/html/oxygen/

The preceding example consists of the following elements:

Heading
Intro text (Lorem Ipsum in this case)
Images displaying each member of the team
Team member's name
Their role
Text informing the viewer a little bit about them
Social links

We will also create our section using a similar layout. We are now finally going to use the column system to its full potential, to provide a responsive experience using breakpoints.

Creating the Our Team section container

First, let's implement a simple container, with the title and section introduction text, without any extra elements such as an image. We will then use this to link to our navigation bar. Add the following code to the jumbotron div:

Let's go over what the preceding code is doing:

Line 9 creates a container that is fluid, allowing it to span the browser's width fully. This can be changed to a regular container if you like. The id will be used very soon to link to the navigation bar.
Line 10 creates a row in which our text elements will be stored.
Line 11 creates a div that spans all 12 columns on all screen sizes and centers the text inside of it.
Line 12 creates a simple header for the Team section.
Line 14 to Line 16 add introduction text. I have put the first two sentences of "Lorem Ipsum..." inside of it, but you can put anything you like.

All of this produces the following result:

Anchoring the Team section to the navigation bar

We will now link the navigation bar to the Team section. This will allow the user to navigate to the Team section without having to scroll up or down. At the moment, there is no need to scroll up, but when more content is added this can become a problem, as a single page website can become quite long. Fortunately, we have already done the heavy lifting with the navigation bar through HTML and JavaScript, phew! First, let's change the name of the second button in the navigation bar to Team. Update the navigation bar like so:

The navigation bar will now look as follows:

Fantastic, our navigation bar is looking more like what you would see on a real website. Now let's change the href to the same ID as the Team section, which is #TeamSection, like so:

Now, when we click on any of the navigation buttons, we get no JavaScript errors like we would have in the previous chapter. Also, it automatically scrolls to each section without any extra JavaScript code.

Adding team pictures

Now let's use images to showcase the team members. I will use the image from the following link for all our team members, but on a real website you would obviously use different images:

http://res.cloudinary.com/dmliyxggm/image/upload/v1511699813/John_vepwoz.png

I have modified the image so all the background is removed and the image is trimmed, so it looks as follows:

Up until now, all the images that we have used have been stored on other websites, such as CDNs. This is great, but the need may arise for a custom image like the previous one. We can either store it on a CDN, which is a very good approach, and I would recommend Cloudinary (http://cloudinary.com/), or we can store it locally, which we will do now.
A CDN is a Content Delivery Network, which has the sole purpose of delivering content, such as images, to other websites using the best and fastest servers available to a specific user. I would definitely recommend using one.

Create a folder called Images and place the image using the following folder structure:

Root
    CSS
    Images
        Team
            Thumbnails
                Thumbnails.png
    Index.php
    JS
    SNIPPETS

This may seem like overkill, considering we only have one image, but as your website gets more complex you will store more images, and having an intelligent folder structure/hierarchy will save an immense amount of time.

Add the following code to the first row, like so:

The code we have added doesn't actually provide any visual changes, as it is nothing but empty div classes. But these div classes will serve as structures for each team member and their respective content, such as their name and social links. We created a new row to group our new div classes. Inside each div we will represent each team member. The classes have been set up to be displayed like so:

Extra-small screens will only show a single team member on a single row
Small and medium screens will show two team members on a single row
Large and extra-large screens will show four team members on a single row

The rows here are rows in the literal sense, not the row class; another way to look at them is as lines. The sizes/breakpoints can easily be changed using the information regarding the grid from this article: What Is Bootstrap, Why Do We Use It?

Now let's add the team's images. Update the previous code like so:

The preceding code produces the following result:

As you can see, this is not the desired effect we were looking for. As there are no size restrictions on the image, it is displayed at its original size. Which, on some screens, will produce a result similar to the monstrosity you saw before; worry not, this can easily be fixed. Add the classes img-fluid and img-thumbnail to each one of the images, like so:

The classes we added are designed to provide the following styling:

img-fluid: Provides a responsive image that is automatically restricted based on the number of columns and browser size.
img-thumbnail: Is more of an optional class, but it is still very useful. It provides a light border around the images to make them pop.

This produces the following result:

As can be seen, this is significantly better than our previous result. Depending on the browser/screen size, the positioning will slightly change, based on the column breakpoints we specified. As usual, I recommend that you resize the browser to see the different layouts. These images are almost complete; they look fine on most screen sizes, but they aren't actually centered within their respective divs. This is evident on larger screen sizes, as can be seen here:

It isn't very noticeable, but the problem is there; it can be seen to the right of the last image. You probably could get away without fixing this, but when creating anything, from a website to a game, or even a table, the smallest details are what separate the good websites from the amazing websites. This is a simple idea called the aggregation of marginal gains. Fortunately for us, like many times before, Bootstrap offers functionality to resolve our little problem. Simply add the text-center class to the row within the div of the images, like so:

This now produces the following result:

There is one more slight problem, which is only noticeable on smaller screens, when the images/member containers are stacked on top of each other.
The following result is produced:

The problem might not jump out at first glance, but look closely at the gaps between the images that are stacked, or I should say, at the lack of a gap. This isn't the end of the world, but again, the small details make an immense difference to the look of a website. This can be easily fixed by adding padding to each team member div. First, add a class of teamMemberContainer to each team member div, like so:

Add the following CSS code to the index.css file to provide a more visible gap through the use of padding:

This simple solution now produces the following result:

If you want the gap to be bigger, simply increase the value; lower it to reduce the gap.

Team member info text

The previous section covered quite a lot; if you're not 100% sure about what we did, just go back and take a second look. This section will thankfully be very simple, as it will incorporate techniques and features we have already covered to add the following information to each team member:

Name
Job title
Member info text
Plus anything else you need

Update each team member container with the following code:

Let's go over the new code line by line:

Line 24 adds a simple header that is intended to display the team member's name. I have chosen an h4 tag, but you can use something bigger or smaller if you like.
Line 26 adds the team member's job title. I have used a paragraph element with the Bootstrap class text-muted, which lightens the text color. If you would like more information regarding text styling within Bootstrap, feel free to check out the link below.
Line 28 adds a simple paragraph, with no extra styling, to display some information about the team member.

Bootstrap text styling link—https://v4-alpha.getbootstrap.com/utilities/colors/

The code that we just added will produce the following result:

As usual, resize your browser to simulate different screen sizes. I use Chrome as my main browser, but Safari has an awesome feature baked right in that allows you to see how your website will run on different browsers/devices; this link will help you use this feature—https://www.tekrevue.com/tip/safari-responsive-design-mode/

Most browsers have a plethora of plugins to aid in this process, but not only does Safari have it built in, it works really well. It all looks fantastic, but again I will nitpick at the gaps. The image is right on top of the team member name text; a small gap would really help improve the visual fidelity. Add a class of teamMemberImage to each image tag, as demonstrated here:

Now add the following code to the index.css file, which will apply a margin of 10px below the image, hence moving all the content down:

Change the margin to suit your needs. This very simple code will produce the following similar, yet subtly different and more visually appealing, result:

Team member social links

We have almost completed the Team section; only the social links remain for each team member. I will be using simple images for the social buttons, from the following link:

https://simplesharebuttons.com/html-share-buttons/

I will also only be adding three social icons, but feel free to add as many or as few as you need.
Add the following code to the bottom of each team member container:

Let's go over each new line of code:

Line 30 creates a div to store all the social buttons for each team member
Line 31 creates a link to Facebook (add your social link in the href)
Line 32 adds an image to show the Facebook social link
Line 35 creates a link to Google+ (add your social link in the href)
Line 36 adds an image to show the Google+ social link
Line 39 creates a link to Twitter (add your social link in the href)
Line 40 adds an image to show the Twitter social link

We have added a class that still needs to be implemented, but let's first run our website to see the result without any styling:

It looks OK, but the social icons are a bit big, especially if we were to have more icons. Add the following CSS styling to the index.css file:

This piece of code simply restricts the social icons' size to 50px. Setting only the width causes the height to be calculated automatically; this ensures that any changes to the image that involve a ratio change won't mess up the look of the icons. This now produces the following result:

Feel free to change the width to suit your desires. With the social buttons implemented, we are done. If you've enjoyed this tutorial, check out Responsive Web Design by Example, to create a cool blog page, beautiful portfolio site, or professional business site and make them all totally responsive.

5 things to consider when developing an eCommerce website
What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability
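The code listings in this excerpt were screenshots, so as a reference, here is a hedged reconstruction of a single team-member card based on the classes described above. The exact markup, names, and image paths are assumptions, and the socialIcon class name is invented for illustration, since the original icon class name isn't visible in the text:

<div class="row text-center">
  <div class="col-12 col-sm-6 col-lg-3 teamMemberContainer">
    <img src="Images/Team/Thumbnails/Thumbnails.png" class="img-fluid img-thumbnail teamMemberImage" alt="Team member">
    <h4>Jane Doe</h4>
    <p class="text-muted">Lead Developer</p>
    <p>A little bit about this team member.</p>
    <div>
      <a href="#"><img class="socialIcon" src="Images/facebook.png" alt="Facebook"></a>
      <a href="#"><img class="socialIcon" src="Images/googleplus.png" alt="Google+"></a>
      <a href="#"><img class="socialIcon" src="Images/twitter.png" alt="Twitter"></a>
    </div>
  </div>
  <!-- repeat the inner div for the other team members -->
</div>

And the matching index.css rules, as described in the tutorial:

.teamMemberContainer { padding: 10px; }
.teamMemberImage { margin-bottom: 10px; }
.socialIcon { width: 50px; }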

That '70s language: AWK programming

Pavan Ramchandani
17 May 2018
9 min read
AWK is an interpreted programming language designed for text processing and report generation. It is typically used for data manipulation, such as searching for items within data, performing arithmetic operations, and restructuring raw data for generating reports on most Unix-like operating systems. Today, we will explore the AWK philosophy and the different types of AWK that exist, starting from its original implementation in 1977 at AT&T's Bell Laboratories. We will also look at the various implementation areas of AWK in data science today. Using AWK programs, one can handle repetitive text-editing problems with very simple and short programs. It is a pattern-action language; it searches for patterns in a given input and, when a match is found, it performs the corresponding action. The pattern can be made of strings, regular expressions, comparison operations on numbers, fields, variables, and so on. It reads the input files and automatically splits each input line of the file into fields. AWK has most of the well-designed features that every programming language should contain. Its syntax particularly resembles that of the C programming language. It is named after its original three authors:

Alfred V. Aho
Peter J. Weinberger
Brian W. Kernighan

AWK is a very powerful, elegant, and simple language that every person dealing with text processing should be familiar with. This article is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming. This book will introduce you to the AWK programming language and get you hands-on with practical implementations of AWK.

Types of AWK

The AWK language was originally implemented as an AWK utility on Unix. Today, most Linux distributions provide the GNU implementation of AWK (GAWK), and a symlink for awk is created to the GAWK binary. The AWK utility can be categorized into the following three types, depending upon the type of interpreter it uses for executing AWK programs:

AWK: This is the original AWK interpreter, available from AT&T Laboratories. However, it is not used much nowadays, and hence it might not be well-maintained. Its limitation is that it splits a line into a maximum of 99 fields. It was updated and replaced in the mid-1980s with an enhanced version called New AWK (NAWK).
NAWK: This is AT&T's latest development of the AWK interpreter. It is well-maintained by one of the original authors of AWK, Dr. Brian W. Kernighan.
GAWK: This is the GNU project's implementation of the AWK programming language. All GNU/Linux distributions are shipped with GAWK by default, and hence it is the most popular version of AWK. The GAWK interpreter is fully compatible with AWK and NAWK.

Beyond these, we also have other, less popular, AWK interpreters and translators, mentioned as follows. These variants are useful in operations when you want to translate your AWK program to C, C++, or Perl:

MAWK: Michael Brennan's interpreter for AWK.
TAWK: Thompson Automation's interpreter/compiler/Microsoft Windows DLL for AWK.
MKSAWK: Mortice Kern Systems' interpreter/compiler for AWK.
AWKCC: An AWK translator to C (might not be well-maintained).
AWKC++: Brian Kernighan's AWK translator to C++ (experimental). It can be downloaded from: https://9p.io/cm/cs/who/bwk/awkc++.ps.
AWK2C: An AWK translator to C. It uses GNU AWK libraries extensively.
A2P: An AWK translator to Perl. It comes with Perl.
AWKA: Yet another AWK translator to C (comes with the library), based on MAWK. It can be downloaded from: http://awka.sourceforge.net/download.html.
When and where to use AWK

AWK is simpler than any other utility for text processing and is available as the default on Unix-like operating systems. Some people might say Perl is a superior choice for text processing, as AWK is functionally a subset of Perl, but the learning curve for Perl is steeper than that of AWK: AWK is simpler, and AWK programs are smaller and hence quicker to write and execute. Anybody who knows the Linux command line can start writing AWK programs in no time. Here are a few use cases of AWK:

Text processing
Producing formatted text reports/labels
Performing arithmetic operations on the fields of a file
Performing string operations on different fields of a file

Programs written in AWK are smaller than they would be in other higher-level languages for similar text processing operations. AWK programs are interpreted on a GNU/Linux Terminal and thus avoid the compiling and debugging phases of software development in other languages.

Getting started with installation

This section describes how to set up the AWK environment on your GNU/Linux system, and we'll also discuss the workflow of AWK. Then, we'll look at different methods for executing AWK programs.

Installation on Linux

Generally, AWK is installed by default on most GNU/Linux distributions. Using the which command, you can check whether it is installed on your system or not. In case AWK is not installed on your system, you can do so in one of two ways:

Using the package manager of the corresponding GNU/Linux system
Compiling from the source code

Let's take a look at each method in detail in the following sections.

Using the package manager

Different flavors of GNU/Linux distributions have different package-management utilities. If you are using a Debian-based GNU/Linux distribution, such as Ubuntu, Mint, or Debian, then you can install it using the Advanced Package Tool (APT) package manager, as follows:

[ shiwang@linux ~ ] $ sudo apt-get update -y
[ shiwang@linux ~ ] $ sudo apt-get install gawk -y

Similarly, to install AWK on an RPM-based GNU/Linux distribution, such as Fedora, CentOS, or RHEL, you can use the Yellowdog Updater, Modified (YUM) package manager, as follows:

[ root@linux ~ ] # yum update -y
[ root@linux ~ ] # yum install gawk -y

For installation of AWK on openSUSE, you can use the zypper command-line package-management utility, as follows:

[ root@linux ~ ] # zypper update -y
[ root@linux ~ ] # zypper install gawk -y

Once the installation is finished, make sure AWK is accessible through the command line. We can check that using the which command, which will return the absolute path of AWK on our system:

[ root@linux ~ ] # which awk
/usr/bin/awk

You can also use awk --version to find the AWK version on our system:

[ root@linux ~ ] # awk --version

Compiling from the source code

Like every other open source utility, the GNU AWK source code is freely available for download as part of the GNU project. Previously, you saw how to install AWK using the package manager; now, you will see how to install AWK by compiling from its source code on a GNU/Linux distribution. The following steps are applicable to most GNU/Linux software installations:

Download the source code from a GNU project FTP site.
Here, we will use the wget command-line utility to download it; however, you are free to choose any other program, such as curl, that you feel comfortable with:

[ shiwang@linux ~ ] $ wget http://ftp.gnu.org/gnu/gawk/gawk-4.1.3.tar.xz

Extract the downloaded source code:

[ shiwang@linux ~ ] $ tar xvf gawk-4.1.3.tar.xz

Change your working directory and execute the configure file to configure GAWK as per the working environment of your system:

[ shiwang@linux ~ ] $ cd gawk-4.1.3 && ./configure

Once the configure command completes its execution successfully, it will generate the make file. Now, compile the source code by executing the make command:

[ shiwang@linux ~ ] $ make

Type make install to install the programs and any data files and documentation. When installing into a prefix owned by root, it is recommended that the package be configured and built as a regular user, and only the make install phase executed with root privileges:

[ shiwang@linux ~ ] $ sudo make install

Upon successful execution of these five steps, you have compiled and installed AWK on your GNU/Linux distribution. You can verify this by executing the which awk command in the Terminal or awk --version:

[ root@linux ~ ] # which awk
/usr/bin/awk

Now you have a working AWK/GAWK installation and we are ready to begin AWK programming, but before that, the next section describes the workflow of the AWK interpreter. If you are running macOS X, AWK, and not GAWK, would be installed on it as the default. For GAWK installation on macOS X, please refer to MacPorts for GAWK.

Workflow of AWK

Having a basic knowledge of the AWK interpreter's workflow will help you to better understand AWK and will result in more efficient AWK program development. Hence, before getting your hands dirty with AWK programming, you need to understand its internals. The AWK workflow can be summarized as shown in the following figure:

Let's take a look at each operation:

READ OPERATION: AWK reads a line from the input stream (file, pipe, or stdin) and stores it in memory. It works on text input, which can be a file, the standard input stream, or a pipe, and it further splits the input into records and fields:

Records: An AWK record is a single, continuous data input that AWK works on. Records are bounded by a record separator, whose value is stored in the RS variable. The default value of RS is set to a newline character. So, the lines of input are considered records for the AWK interpreter. Records are read continuously until the end of the input is reached. Figure 1.2 shows how input data is broken into records and then goes further into how it is split into fields.

Fields: Each record can further be broken down into individual chunks called fields. Like records, fields are bounded. The default field separator is any amount of whitespace, including tab and space characters. So, by default, lines of input are further broken down into individual words separated by whitespace. You can refer to the fields of a record by field number, beginning with 1. The last field in each record can be accessed with $NF, where the NF special variable contains the number of fields in the current record, as shown in Figure 1.3.

EXECUTE OPERATION: All AWK commands are applied sequentially to the input (records and fields). By default, AWK executes commands on each record/line. This behavior of AWK can be restricted by the use of patterns.

REPEAT OPERATION: The process of read and execute is repeated until the end of the file is reached.
The following flowchart depicts the workflow:

We introduced you to the AWK programming language and gave ourselves a quick primer to get started with application development. If you found this post useful, do check out the book Learning AWK Programming to learn more about the intricacies of the AWK programming language for text processing.

The oldest programming languages in use today
What is the difference between functional and object oriented programming?
Systems programming with Go in UNIX and Linux
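To see records and fields in action before moving on, here is a quick, self-contained one-liner you can run anywhere; it is not tied to any sample file from the book. NF holds the field count, $1 is the first field, and $NF is the last:

$ echo "one two three" | awk '{ print NF, $1, $NF }'
3 one three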