
How-To Tutorials

Tips and tricks to optimize your responsive web design

Sugandha Lahoti
31 May 2018
11 min read
Loosely put, website optimization refers to the activities and processes that improve your website's user experience and visibility while reducing the costs associated with hosting it. In this article, we will cover tips and techniques for client-side optimization. This article is an excerpt from Mastering Bootstrap 4 - Second Edition by Benjamin Jakobus and Jason Marah. In this book, you will learn to build a customized Bootstrap website from scratch, optimize it, and integrate it with third-party frameworks.

CSS optimization

Before we even consider compression, minification, and file concatenation, we should think about the ways in which we can simplify and optimize our existing style sheet without using third-party tools. Of course, we should have striven for an optimal style sheet to begin with, and in many respects we did. However, our style sheet still leaves room for improvement.

Inline styles are bad

If you only remember one thing after reading this article, let it be that inline styles are bad. Period. Avoid them whenever possible. Why? Not only do they make your website impossible to maintain as it grows, they also take up precious bytes, because they force you to repeat the same rules over and over. Consider the following code:

```html
<div class="carousel-inner" role="listbox">
  <div style="height: 400px" class="carousel-item active">
    <img class="d-block img-fluid" src="images/brazil.png" data-modal-picture="#carousel-modal">
    <div class="carousel-caption">Brazil</div>
  </div>
  <div style="height: 400px" class="carousel-item">
    <img class="d-block img-fluid" src="images/datsun.png" data-modal-picture="#carousel-modal">
    <div class="carousel-caption">Datsun 260Z</div>
  </div>
  <div style="height: 400px" class="carousel-item">
    <img class="d-block img-fluid" src="images/skydive.png" data-modal-picture="#carousel-modal">
    <div class="carousel-caption">Skydive</div>
  </div>
</div>
```

Note how the rule for defining an item's height, style="height: 400px", is repeated three times, once for each of the three items. That is an additional 21 characters (or 21 bytes, assuming that our document is UTF-8) for each image. Multiplying 3 * 21 gives us 63 bytes, plus 21 more bytes for every new image that you add. Not to mention that if you ever want to update the height of the images, you will need to update the style attribute of every single image by hand. The solution is, of course, to replace the inline styles with an appropriate rule. Let's go ahead and define one that applies to every carousel item:

```css
.carousel-item {
  height: 400px;
}
```

Now let's go ahead and remove the inline style rules:

```html
<div class="carousel-inner" role="listbox">
  <div class="carousel-item active">
    <img class="d-block img-fluid" src="images/brazil.png" data-modal-picture="#carousel-modal">
    <div class="carousel-caption">Brazil</div>
  </div>
  <div class="carousel-item">
    <img class="d-block img-fluid" src="images/datsun.png" data-modal-picture="#carousel-modal">
    <div class="carousel-caption">Datsun 260Z</div>
  </div>
  <div class="carousel-item">
    <img class="d-block img-fluid" src="images/skydive.png" data-modal-picture="#carousel-modal">
    <div class="carousel-caption">Skydive</div>
  </div>
</div>
```

That's great!
Not only is our CSS now easier to maintain, but we also shaved 29 bytes off our website (the original inline styles required 63 bytes; our new class definition requires only 34). Yes, this does not seem like much, especially in the world of high-speed broadband, but remember that your website will grow, and every byte adds up.

Avoid long identifiers and class names

The longer your strings, the larger your files. It's a no-brainer. As such, long identifier and class names naturally increase the size of your web page. Of course, extremely short class or identifier names tend to lack meaning and will therefore make it more difficult (if not impossible) to maintain your page. As such, you should strive for a balance between length and expressiveness. Even better than shortening identifiers is removing them altogether. One handy technique for doing so is hierarchical selection. Have a look at the events pagination code. We are using the services-events-content identifier within our pagination logic, as follows:

```js
$('#services-events-pagination').bootpag({
  total: 10
}).on("page", function(event, num){
  $('#services-events-content div').hide();
  var current_page = '#page-' + num;
  $(current_page).show();
});
```

To denote the services content, we broke the name of our identifier into three parts, namely, services, events, and content. Our markup is as follows:

```html
<div id="services-events-content">
  <div id="page-1">
    <h3>My Sample Event #1</h3>
    ...
  </div>
</div>
```

Let's try to get rid of this identifier altogether by observing two characteristics of our Events section. First, the services-events-content element is an indirect descendant of a div with the id services-events; we cannot remove that id, as it is required for the menu to work. Second, the element with the id services-events-content is itself a div, so if we remove its id, we can remove the entire div as well. As such, we do not need a second identifier to select the pages that we wish to hide. Instead, all we need to do is select the div within the div that is within the div that is assigned the id services-events. How do we express this as a CSS selector? It's easy: use #services-events div div div. Our pagination logic is therefore updated as follows:

```js
$('#services-events-pagination').bootpag({
  total: 10
}).on("page", function(event, num){
  $('#services-events div div div').hide();
  var current_page = '#page-' + num;
  $(current_page).show();
});
```

Now, save and refresh. What's that? As soon as you click on a page, the pagination control disappears. That's because we are now hiding every div that has at least two div ancestors below the element with the id services-events, and the pagination control is one of them. Move the pagination control div outside its parent element. Our markup should now look as follows:

```html
<div role="tabpanel" class="tab-pane active" id="services-events">
  <div class="container">
    <div class="row">
      <div id="page-1">
        <h3>My Sample Event #1</h3>
        <h3>My Sample Event #2</h3>
      </div>
      <div id="page-2">
        <h3>My Sample Event #3</h3>
      </div>
    </div>
    <div id="services-events-pagination"></div>
  </div>
</div>
```

Now save and refresh. That's better! Last but not least, let's update the CSS.
Take the following code into consideration:

```css
#services-events-content div {
  display: none;
}
#services-events-content div img {
  margin-top: 0.5em;
  margin-right: 1em;
}
#services-events-content {
  height: 15em;
  overflow-y: scroll;
}
```

Replace this code with the following:

```css
#services-events div div div {
  display: none;
}
#services-events div div div img {
  margin-top: 0.5em;
  margin-right: 1em;
}
#services-events div div div {
  height: 15em;
  overflow-y: scroll;
}
```

That's it: we have simplified our style sheet and saved some bytes in the process! However, we have not really improved the performance of our selector. jQuery executes selectors from right to left, so the last (right-most) selector is evaluated first. In this example, jQuery will first scan the complete DOM to discover all div elements, then filter them to keep only those divs that have a div parent, which in turn has a div ancestor, and finally keep only the ones with the id services-events as an ancestor. While we can't really improve the performance of the selector in this case, we can still simplify our code further by adding a class to each page:

```html
<div id="page-1" class="page">...</div>
<div id="page-2" class="page">...</div>
<div id="page-3" class="page">...</div>
```

Then, all we need to do is select by the given class: $('#services-events div.page').hide();. Alternatively, knowing that, within the .on callback, this refers to the DOM element on which the event was bound (the pagination control), we can avoid iterating through the whole DOM:

$(this).parents('#services-events').find('.page').hide();

The final code looks as follows:

```js
$('#services-events-pagination').bootpag({
  total: 10
}).on("page", function(event, num) {
  $(this).parents('#services-events').find('.page').hide();
  $('#page-' + num).show();
});
```

Note a micro-optimization in the preceding code: there was no need for us to create the current_page variable in memory, so the last line becomes $('#page-' + num).show();.

Use shorthand rules when possible

According to the Mozilla Developer Network (Shorthand properties, https://developer.mozilla.org/en-US/docs/Web/CSS/Shorthand_properties, accessed November 2015), shorthand properties are:

"CSS properties that let you set the values of several other CSS properties simultaneously. Using a shorthand property, a Web developer can write more concise and often more readable style sheets, saving time and energy."
– Mozilla Developer Network, 2015

Unless strictly necessary, we should never use longhand rules; when possible, shorthand rules are the preferred option. Besides the obvious advantage of saving precious bytes, shorthand rules also help increase your style sheet's maintainability. For example, border: 20px dotted #FFF is equivalent to three separate rules:

```css
border-style: dotted;
border-width: 20px;
border-color: #FFF;
```
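As another quick illustration (not taken from the book's MyPhoto style sheet, and with an arbitrary selector and values), the margin property is itself a shorthand that collapses four longhand declarations into one:

```css
/* Shorthand: 10px top and bottom margins, 20px left and right margins */
.carousel-caption {
  margin: 10px 20px;
}

/* Equivalent longhand rules */
.carousel-caption {
  margin-top: 10px;
  margin-right: 20px;
  margin-bottom: 10px;
  margin-left: 20px;
}
```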
Group selectors

Organizing selectors into groups will arguably also save some bytes. Consider the following:

```css
.navbar-myphoto .dropdown-menu > a:hover {
  color: gray;
  background-color: #504747;
}
.navbar-myphoto .dropdown-menu > a:focus {
  color: gray;
  background-color: #504747;
}
.navbar-myphoto .dropdown-menu > .active > a:focus {
  color: gray;
  background-color: #504747;
}
```

Note how each of the three selectors contains the same declarations, that is, the color and background-color properties are set to the exact same values. To avoid repeating these declarations, we can simply group the selectors, reducing the code from 274 characters to 181 characters:

```css
.navbar-myphoto .dropdown-menu > a:hover,
.navbar-myphoto .dropdown-menu > a:focus,
.navbar-myphoto .dropdown-menu > .active > a:focus {
  color: gray;
  background-color: #504747;
}
```

Voilà! We just saved 93 bytes (assuming UTF-8 encoding).

Rendering times

When optimizing your style rules, the number of bytes should not be your only concern. In fact, it comes second to the rendering time of your web page. CSS rules affect the amount of work the browser must do to render your page, and some rules are more expensive than others. For example, changing the color of an element is cheaper than changing its margin. The reason is that a change in color only requires your browser to draw the new pixels. While drawing itself is by no means a cheap operation, changing the margin of an element requires much more effort: the browser needs to recalculate the page layout and then draw the changes. Optimizing your page's rendering times is a complex topic, and as such is beyond the scope of this post. However, we recommend that you take a look at http://csstriggers.com/. This site provides a concise overview of the costs involved in updating a given CSS property.

Minifying CSS and JavaScript

Now it is time to look into minification. Minification is the process of removing redundant characters from a file without altering the actual information contained within it. In other words, minifying the CSS file reduces its overall size while leaving the actual CSS style rules intact. This is achieved by stripping out the whitespace characters within the file. Stripping out whitespace has the obvious result that our CSS is now practically unreadable and impossible to maintain. As such, minified style sheets should only be used when serving a page (that is, in production), and not during development. Clearly, minifying your style sheet manually would be an incredibly time-consuming (and hence pointless) task, so there are many tools that will do the job for us. One such tool is the npm minifier; visit https://www.npmjs.com/package/minifier for more. Let's go ahead and install it:

```
sudo npm install -g minifier
```

Once installed, we can minify our style sheet by typing the following command:

```
minify path-to-myphoto.css
```

Here, path-to-myphoto.css is the path to the MyPhoto style sheet. Go ahead and execute the command. Once minification is complete, you should see the Minification complete message. A new CSS file (myphoto.min.css) will have been created inside the directory containing the myphoto.css file. The new file is 2,465 bytes, while our original myphoto.css file is 3,073 bytes. Minifying our style sheet just reduced the number of bytes to send by roughly 20%!

We touched upon the basics of website optimization and testing. In the follow-up article, we will see how to use the build tool Grunt to automate the more common and mundane optimization tasks. To build responsive, dynamic, and mobile-first applications on the web with Bootstrap 4, check out the book Mastering Bootstrap 4 - Second Edition.

Get ready for Bootstrap v4.1; Web developers to strap up their boots
How to use Bootstrap grid system for responsive website design?
Bootstrap 4 Objects, Components, Flexbox, and Layout

We must change how we think about AI, urge AI founding fathers

Neil Aitken
31 May 2018
9 min read
In Manhattan, nearly 15,000 taxis make around 30 journeys each per day. That's nearly half a million paid trips. The yellow cabs are part of the never-ending, slow progression of vehicles which churn through the streets of New York. The good news is that, after a century of worsening traffic, congestion is about to be ameliorated, at least to a degree. Researchers at MIT announced this week that they have developed an algorithm to optimise the way taxis find their customers. Their product is allegedly so efficient that it can reduce the required number of cabs (for now, the ones with human drivers) in Manhattan by a third. That's a non-trivial improvement. The trick, apparently, is to use the cabs the way a hustler might position the cue ball in pool: lining up the next pickup to start where the last drop-off ended.

The technology behind the improvement offered by the MIT research team is the same one that is behind most of the incredible technology news stories of the last three years: Artificial Intelligence. AI is now a part of most of the digital interactions we have. It fuels the recommendation engines in YouTube, Spotify, and Netflix. It shows you products you might like in Google's search results and on Amazon's homepage. Undoubtedly, AI is the hot topic of the time, as you cannot possibly have failed to notice.

How AI was created – and nearly died

AI was, until recently, a long-forgotten scientific curiosity, employed seriously only in sci-fi movies. The technology fell into a 'winter' (a time when AI-related projects couldn't get funding and decision makers had given up on the technology) in the late 1980s. It was at that time that much of the fundamental work which underpins today's AI, concepts like neural networks and backpropagation, was codified.

Artificial Intelligence is now enjoying a rebirth. Almost every new idea funded by venture capitalists has AI baked in. The potential excites business owners, especially those involved in the technology sphere, and scares governments in equal measure. It offers better profits and the potential for mass unemployment as if they are two sides of the same coin. It is a once-in-a-generation technology improvement, similar to air conditioning, the mass-produced motor car, and the smartphone, in that it can be applied to all aspects of the economy at the same time. Just as the iPhone has propelled telecommunications technology forward, and created billions of dollars of sales for phone companies selling mobile data plans, AI is fueling totally new businesses and making existing operations significantly more efficient.

Behind the fanfare associated with AI, however, lies a simple truth. Today's AI algorithms use what's called 'narrow' or 'domain specific' intelligence. In simple terms, each current AI implementation is specific to the job it is given. IBM trained their AI system Watson to beat human contestants at Jeopardy!. When Google wanted to build an AI that could beat a living counterpart at the Chinese board game Go, they created a new AI system. And so on. A new task requires a new AI system.

Judea Pearl, inventor of Bayesian networks and Turing Award winner

On AI systems that can move from predicting what will happen to understanding what will cause it

Now, one of the people behind those original concepts from the 1980s which underpin today's AI solutions is back with an even bigger idea, one which might push AI forward.
Judea Pearl, Chancellor's professor of computer science and statistics at UCLA and a distinguished visiting professor at the Technion, Israel Institute of Technology, was awarded the Turing Award for the Bayesian mathematical models he developed some 30 years ago, models which gave modern AI its strength. Pearl's fundamental contribution to computer science was in providing the logic and decision-making framework for computers to operate under uncertainty. Some say it was he who provided the spark which thawed that AI winter. Today, he laments the current state of AI, concerned that the field has evolved very little in the three decades since his important theory was presented.

Pearl likens current AI implementations to simple tools which can tell you what's likely to come next, based on the recognition of a familiar pattern. For example, a medical AI algorithm might be able to look at X-rays of a human chest and 'discern' that the patient has, or does not have, lung cancer based on patterns it has learnt from its training datasets. The AI in this scenario doesn't 'know' what lung cancer is or what a tumor is. Importantly, it is a very long way from understanding that smoking can cause the affliction.

What's needed in AI next, says Pearl, is a critical difference: AIs which are evolved to the point where they can determine not just what will happen next, but what will cause it. It's a fundamental improvement, of the same magnitude as his earlier contributions. Causality, which is what Pearl is proposing, is one of the most basic units of scientific thought and progress. The ability to conduct a repeatable experiment showing that A caused B, in multiple locations, and have independent peers review the results is one of the fundamentals of establishing truth. In his most recent publication, The Book of Why, Pearl outlines how we can get AI from where it is now to where it can develop an understanding of these causal relationships. He believes the first step is to cement the building blocks of reality ('what is a lung', 'what is smoke'), and that we'll be able to do this in the next 10 years.

Geoff Hinton, inventor of backprop and capsule nets

On AI which more closely mimics the human brain

Geoff Hinton was the mind behind backpropagation, another of the fundamental technologies which has brought AI to the point it is at today. To progress AI, however, he says we might have to start all over again. Hinton has developed (and produced two papers for the University of Toronto to articulate) a new way of training AI systems, involving something he calls capsule networks, a concept he has been working on for 30 years in an effort to improve on the capabilities of the backpropagation algorithms he developed.

Capsule networks operate in a manner similar to the human brain. When we see an image, our brain breaks it down into its components and processes them in parallel. Some brain neurons recognise edges through contrast differences. Others look for corners by examining the points at which edges intersect. Capsule networks are similar, with several capsules acting on a picture at one time, identifying, for example, an ear or a nose on an animal, irrespective of the angle from which it is being viewed. This is a big deal: until now, CNNs (convolutional neural networks), the set of AI algorithms most often used in image and video recognition systems, could recognize images as well as humans do. CNNs, however, find it hard to recognize images if their angle is changed.
It's too early to judge whether capsule networks are the key to the next step in the AI revolution, but in many tasks, capsule networks are already identifying images faster and more accurately than current capabilities allow.

Andrew Ng, Chief Scientist at Baidu

On AI that can learn without humans

Andrew Ng is the co-founder of Google Brain, the team and project that Google put together in 2011 to explore Artificial Intelligence. He now works for Baidu, China's most successful search engine, analogous in size and scope to Google in the rest of the world. At the moment, he heads up Baidu's Silicon Valley AI research facility. Beyond concerns over potential job displacement caused by AI, an issue so significant he says it is perhaps all we should be thinking about when it comes to Artificial Intelligence, he suggests that, in the future, the most progress will be made when AI systems can teach themselves without human involvement.

At the moment, training an AI, even on something that to us is simple, such as what a cat looks like, is a complicated process. The procedure involves 'supervised learning': the system is shown a lot of pictures (when they did this at Google, they used 10 million images), some of which are cats, labelled appropriately by humans. Once a sufficient level of 'education' has been undertaken, the AI can then accurately label cats, most of the time.

Ng thinks supervision is problematic; he describes it as having an Achilles heel in the form of the quantity of data that is required. To go beyond current capabilities, says Ng, will require a completely new type of technology, one which can learn through 'unsupervised learning': machines learning from data that has not been classified by humans. Progress on unsupervised learning is slow. At both Baidu and Google, engineers are focusing on constrained versions of unsupervised learning, such as training AI systems to learn about a human face and then using them to create a face themselves. The activity requires that the AI develops what we would call an 'internal representation' of a face, something which is required in any unsupervised learning. Other avenues to train without supervision include, ingeniously, pitting an AI system against a computer game, an environment in which it receives feedback (through points awarded in the game) for 'constructive' activities, but within which it is not taught directly by a human.

Next generation AI depends on 'scrubbing away' existing assumptions

Artificial Intelligence, as it stands, will deliver economy-wide efficiency improvements, the likes of which we have not seen in decades. It seems incredible to think that the field is still in its infancy when it can deliver such substantial benefits, like reduced traffic congestion, lower carbon emissions, and time saved in New York taxis. But it is. Isaac Asimov, who developed his own concepts of how Artificial Intelligence might be governed by simple rules, said, "Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won't come in." The author should rest assured. Between them, Pearl, Hinton and Ng are each taking revolutionary approaches to elevate AI beyond even the incredible heights it has reached, starting without reference to the concepts which have brought us this far.
5 polarizing Quotes from Professor Stephen Hawking on artificial intelligence
Toward Safe AI – Maximizing your control over Artificial Intelligence
Decoding the Human Brain for Artificial Intelligence to make smarter decisions

Java Multithreading: How to synchronize threads to implement critical sections and avoid race conditions

Fatema Patrawala
30 May 2018
13 min read
One of the most common situations in concurrent programming occurs when more than one execution thread shares a resource. In a concurrent application, it is normal for multiple threads to read or write the same data structure, or to have access to the same file or database connection. These shared resources can provoke error situations or data inconsistency, and we have to implement some mechanism to avoid these errors. These situations are called race conditions, and they occur when different threads have access to the same shared resource at the same time. The final result then depends on the order of execution of the threads, and most of the time it is incorrect. You can also have problems with change visibility: if a thread changes the value of a shared variable, the change may only be written to the local cache of that thread, so other threads will not see it (they will only be able to see the old value). We present to you a Java multithreading tutorial taken from the book Java 9 Concurrency Cookbook - Second Edition, written by Javier Fernández González.

The solution to these problems lies in the concept of a critical section. A critical section is a block of code that accesses a shared resource and can't be executed by more than one thread at the same time. To help programmers implement critical sections, Java (and almost all programming languages) offers synchronization mechanisms. When a thread wants access to a critical section, it uses one of these synchronization mechanisms to find out whether any other thread is executing the critical section. If not, the thread enters the critical section. If yes, the thread is suspended by the synchronization mechanism until the thread that is currently executing the critical section finishes it. When more than one thread is waiting for a thread to finish the execution of a critical section, the JVM chooses one of them and the rest wait for their turn. The Java language offers two basic synchronization mechanisms:

The synchronized keyword
The Lock interface and its implementations

In this article, we explore the use of the synchronized keyword to implement synchronization in Java. So let's get started.

Synchronizing a method

In this recipe, you will learn how to use one of the most basic methods of synchronization in Java, that is, the use of the synchronized keyword to control concurrent access to a method or a block of code. All synchronized sentences (used on methods or blocks of code) use an object reference, and only one thread can execute a method or block of code protected by the same object reference. When you use the synchronized keyword with a method, the object reference is implicit. When you use the synchronized keyword in one or more methods of an object, only one execution thread will have access to all of these methods. If another thread tries to access any method declared with the synchronized keyword on the same object, it will be suspended until the first thread finishes the execution of the method. In other words, every method declared with the synchronized keyword is a critical section, and Java only allows the execution of one of the critical sections of an object at a time. In this case, the object reference used is the object itself, represented by the this keyword. Static methods behave differently, as the next paragraph explains; the short sketch below (an illustration of standard Java lock semantics, not code from the book's recipe) contrasts the two locks involved.
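The Counters class here is hypothetical: incrementInstance() locks on the specific Counters instance, while incrementStatic() locks on the Counters.class object, so the two methods can run concurrently because they use different locks.

```java
public class Counters {

    private int instanceCount;
    private static int staticCount;

    // Locks on the Counters instance (this): only one thread at a time
    // can run any synchronized instance method of the *same* object.
    public synchronized void incrementInstance() {
        instanceCount++;
    }

    // Locks on the Counters.class object: this lock is independent of the
    // instance lock, so a thread inside incrementStatic() does not block a
    // thread inside incrementInstance() on some object, and vice versa.
    public static synchronized void incrementStatic() {
        staticCount++;
    }
}
```

Because the two locks are different objects, protecting related data with a mix of static and instance synchronized methods is unsafe, which is exactly the caution given next.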
Only one execution thread will have access to any of the static methods declared with the synchronized keyword, but a different thread can access other, non-static methods of an object of that class. You have to be very careful with this point, because two threads can access two different synchronized methods if one is static and the other is not. If both methods change the same data, you can have data inconsistency errors. In this case, the object reference used is the class object.

When you use the synchronized keyword to protect a block of code, you must pass an object reference as a parameter. Normally, you will use the this keyword to reference the object that executes the method, but you can use other object references as well. Normally, these objects will be created exclusively for this purpose, and you should keep the objects used for synchronization private. For example, if you have two independent attributes in a class shared by multiple threads, you must synchronize access to each variable; however, it wouldn't be a problem if one thread is accessing one of the attributes and another thread is accessing a different attribute at the same time. Take into account that if you use the object itself (represented by the this keyword), you might interfere with other synchronized code (as mentioned before, the this object is used to synchronize the methods marked with the synchronized keyword).

In this recipe, you will learn how to use the synchronized keyword to implement an application simulating a parking area, with sensors that detect when a car or a motorcycle enters or leaves the parking area, an object to store the statistics of the vehicles being parked, and a mechanism to control cash flow. We will implement two versions: one without any synchronization mechanism, where we will see how we obtain incorrect results, and one that works correctly because it uses the two variants of the synchronized keyword. The example of this recipe has been implemented using the Eclipse IDE. If you use Eclipse or a different IDE, such as NetBeans, open it and create a new Java project.

How to do it...

Follow these steps to implement the example. First, create the application without using any synchronization mechanism.

Create a class named ParkingCash with an internal constant and an attribute to store the total amount of money earned by providing this parking service:

```java
public class ParkingCash {

    private static final int cost = 2;
    private long cash;

    public ParkingCash() {
        cash = 0;
    }
```

Implement a method named vehiclePay() that will be called when a vehicle (a car or motorcycle) leaves the parking area. It will increase the cash attribute:

```java
    public void vehiclePay() {
        cash += cost;
    }
```

Finally, implement a method named close() that will write the value of the cash attribute to the console and reinitialize it to zero:

```java
    public void close() {
        System.out.printf("Closing accounting");
        long totalAmmount;
        totalAmmount = cash;
        cash = 0;
        System.out.printf("The total amount is : %d", totalAmmount);
    }
}
```

Create a class named ParkingStats with three private attributes and the constructor that will initialize them:

```java
public class ParkingStats {

    private long numberCars;
    private long numberMotorcycles;
    private ParkingCash cash;

    public ParkingStats(ParkingCash cash) {
        numberCars = 0;
        numberMotorcycles = 0;
        this.cash = cash;
    }
```

Then, implement the methods that will be executed when a car or motorcycle enters or leaves the parking area.
When a vehicle leaves the parking area, cash should be incremented:

```java
    public void carComeIn() {
        numberCars++;
    }

    public void carGoOut() {
        numberCars--;
        cash.vehiclePay();
    }

    public void motoComeIn() {
        numberMotorcycles++;
    }

    public void motoGoOut() {
        numberMotorcycles--;
        cash.vehiclePay();
    }
```

Finally, implement two methods to obtain the number of cars and motorcycles in the parking area, respectively.

Create a class named Sensor that will simulate the movement of vehicles in the parking area. It implements the Runnable interface and has a ParkingStats attribute, which will be initialized in the constructor:

```java
public class Sensor implements Runnable {

    private ParkingStats stats;

    public Sensor(ParkingStats stats) {
        this.stats = stats;
    }
```

Implement the run() method. In this method, simulate two cars and a motorcycle arriving in and then leaving the parking area. Every sensor will perform this action 10 times:

```java
    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            stats.carComeIn();
            stats.carComeIn();
            try {
                TimeUnit.MILLISECONDS.sleep(50);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            stats.motoComeIn();
            try {
                TimeUnit.MILLISECONDS.sleep(50);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            stats.motoGoOut();
            stats.carGoOut();
            stats.carGoOut();
        }
    }
```

Finally, implement the main method. Create a class named Main with the main() method. It needs ParkingCash and ParkingStats objects to manage parking:

```java
public class Main {

    public static void main(String[] args) {
        ParkingCash cash = new ParkingCash();
        ParkingStats stats = new ParkingStats(cash);
        System.out.printf("Parking Simulator\n");
```

Then, create the Sensor tasks. Use the availableProcessors() method (which returns the number of processors available to the JVM, normally equal to the number of cores in the processor) to calculate the number of sensors our parking area will have. Create the corresponding Thread objects and store them in an array:

```java
        int numberSensors = 2 * Runtime.getRuntime().availableProcessors();
        Thread threads[] = new Thread[numberSensors];
        for (int i = 0; i < numberSensors; i++) {
            Sensor sensor = new Sensor(stats);
            Thread thread = new Thread(sensor);
            thread.start();
            threads[i] = thread;
        }
```

Then wait for the finalization of the threads using the join() method:

```java
        for (int i = 0; i < numberSensors; i++) {
            try {
                threads[i].join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
```

Finally, write the statistics of the parking area:

```java
        System.out.printf("Number of cars: %d\n", stats.getNumberCars());
        System.out.printf("Number of motorcycles: %d\n", stats.getNumberMotorcycles());
        cash.close();
    }
}
```

In our case, we executed the example on a four-core processor, so we will have eight Sensor tasks. Each task performs 10 iterations, and in each iteration three vehicles enter the parking area and the same three vehicles go out. Therefore, each Sensor task will simulate 30 vehicles. If everything goes well, the final stats will show the following:

There are no cars in the parking area, which means that all the vehicles that came into the parking area have left.
Eight Sensor tasks were executed, where each task simulated 30 vehicles and each vehicle was charged 2 dollars; therefore, the total amount of cash earned was 480 dollars.

When you execute this example, however, you will obtain different results each time, and most of them will be incorrect. We had race conditions, and the different shared variables accessed by all the threads gave incorrect results.
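To see the race condition in isolation, outside of the parking example, here is a minimal standalone sketch (not from the book) in which several threads increment an unsynchronized counter; the final total is usually lower than expected because increments are lost.

```java
public class RaceDemo {

    // Shared, unsynchronized counter: count++ is not atomic
    // (it is a read, an add, and a write), so updates can be lost.
    private static long count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    count++;
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Expected 800000, but the printed value is usually smaller.
        System.out.printf("Final count: %d%n", count);
    }
}
```

Wrapping the increment in a synchronized method or block makes the result deterministic, which is exactly what we do next for the parking classes.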
Let's modify the previous code using the synchronized keyword to solve these problems.

First, add the synchronized keyword to the vehiclePay() method of the ParkingCash class:

```java
    public synchronized void vehiclePay() {
        cash += cost;
    }
```

Then, add a synchronized block of code using the this keyword to the close() method:

```java
    public void close() {
        System.out.printf("Closing accounting");
        long totalAmmount;
        synchronized (this) {
            totalAmmount = cash;
            cash = 0;
        }
        System.out.printf("The total amount is : %d", totalAmmount);
    }
```

Now add two new attributes to the ParkingStats class and initialize them in the constructor of the class:

```java
    private final Object controlCars, controlMotorcycles;

    public ParkingStats(ParkingCash cash) {
        numberCars = 0;
        numberMotorcycles = 0;
        controlCars = new Object();
        controlMotorcycles = new Object();
        this.cash = cash;
    }
```

Finally, modify the methods that increment and decrement the number of cars and motorcycles, adding the synchronized keyword. The numberCars attribute will be protected by the controlCars object, and the numberMotorcycles attribute will be protected by the controlMotorcycles object. You must also synchronize the getNumberCars() and getNumberMotorcycles() methods with the associated reference object:

```java
    public void carComeIn() {
        synchronized (controlCars) {
            numberCars++;
        }
    }

    public void carGoOut() {
        synchronized (controlCars) {
            numberCars--;
        }
        cash.vehiclePay();
    }

    public void motoComeIn() {
        synchronized (controlMotorcycles) {
            numberMotorcycles++;
        }
    }

    public void motoGoOut() {
        synchronized (controlMotorcycles) {
            numberMotorcycles--;
        }
        cash.vehiclePay();
    }
```

Execute the example now and see the difference when compared to the previous version.

How it works...

No matter how many times you execute the new version of the example, you will always obtain the correct result. Let's see the different uses of the synchronized keyword in the example:

First, we protected the vehiclePay() method. If two or more Sensor tasks call this method at the same time, only one will execute it and the rest will wait for their turn; therefore, the final amount will always be correct.
We used two different objects to control access to the car and motorcycle counters. This way, one Sensor task can modify the numberCars attribute and another Sensor task can modify the numberMotorcycles attribute at the same time; however, no two Sensor tasks will be able to modify the same attribute at the same time, so the final value of the counters will always be correct.
Finally, we also synchronized the getNumberCars() and getNumberMotorcycles() methods.

Using the synchronized keyword, we can guarantee correct access to shared data in concurrent applications. As mentioned in the introduction of this recipe, only one thread can access the methods of an object that use the synchronized keyword in their declaration. If thread A is executing a synchronized method and thread B wants to execute another synchronized method of the same object, it will be blocked until thread A is finished. But if thread B has access to different objects of the same class, none of them will be blocked. When you use the synchronized keyword to protect a block of code, you use an object as a parameter, and the JVM guarantees that only one thread can have access to all the blocks of code protected with this object (note that we always talk about objects, not classes).
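The block form is also the natural way to keep a critical section as small as possible. The sketch below is an illustration rather than part of the recipe: it synchronizes only the update of the shared field and leaves the slow I/O call outside the block.

```java
public class Register {

    private long total;

    public void add(long amount) {
        long snapshot;
        // Critical section: only the shared state is touched while holding the lock.
        synchronized (this) {
            total += amount;
            snapshot = total;
        }
        // Slow, blocking work (I/O) happens outside the critical section.
        System.out.println("Running total: " + snapshot);
    }
}
```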
We used the TimeUnit class as well. TimeUnit is an enumeration with the following constants: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, and SECONDS. These indicate the unit of time we pass to the sleep method. In our case, we let the thread sleep for 50 milliseconds.

There's more...

The synchronized keyword penalizes the performance of the application, so you must only use it on methods that modify shared data in a concurrent environment. If you have multiple threads calling a synchronized method, only one will execute it at a time while the others wait. If the operation doesn't use the synchronized keyword, all the threads can execute it at the same time, reducing the total execution time. If you know that a method will not be called by more than one thread, don't use the synchronized keyword. In any case, if the class is designed for multithreaded access, it should always be correct; you must promote correctness over performance. Also, you should include documentation in methods and classes in relation to their thread safety.

You can use recursive calls with synchronized methods. As the thread already has access to the synchronized methods of an object, it can call other synchronized methods of that object, including the method that is being executed; it won't have to acquire the lock again.

We can use the synchronized keyword to protect access to a block of code instead of an entire method. We should use the synchronized keyword in this way to protect access to shared data, leaving the rest of the operations out of the block and obtaining better performance for the application. The objective is to keep the critical section (the block of code that can be accessed by only one thread at a time) as short as possible. Also, avoid calling blocking operations (for example, I/O operations) inside a critical section. For example, you might use the synchronized keyword to protect only the instruction that updates the number of people in a building, leaving the longer operations that don't use shared data outside the block. When you use the synchronized keyword in this way, you must pass an object reference as a parameter. Only one thread can access the synchronized code (blocks or methods) of this object. Normally, we will use the this keyword to reference the object that is executing the method:

```java
    synchronized (this) {
        // Java code
    }
```

To summarize, we learnt to use the synchronized keyword for multithreading in Java to implement a synchronization mechanism. You read an excerpt from the book Java 9 Concurrency Cookbook - Second Edition. This book will help you master the art of fast, effective Java development with the power of concurrent and parallel programming.

Concurrency programming 101: Why do programmers hang by a thread?
How to create multithreaded applications in Qt
Getting Inside a C++ Multithreaded Application

Don't call us ninjas or rockstars, say developers

Richard Gall
30 May 2018
5 min read
Words like 'ninja' and 'rockstar' have been flying around the tech world for some time now. Data revealed by the recruitment website Indeed at the end of 2017 showed that use of the term 'rockstar' in job postings has increased 19% since 2015. We seem to live in a world where 'sexing up' job roles has become the norm, and when top talent is hard to come by, it makes sense. The words offer some degree of status to candidates and imply that the organizations behind them are forward-thinking. But it's starting to get boring. In this year's Skill Up survey, 57% of respondents said they didn't like creative terms like 'rockstar', 'ninja', and 'wizard'; only 26% said they actually liked them. While words like these might be boring, they can be harmful too. In an age of spin and fake news, using language to dress up and redefine things can have an impact we might not expect.

Sign up to the Packt Hub weekly newsletter and receive a free PDF of this year's Skill Up report.

Using words like rockstar and ninja pits developers against each other

The industry's insistence on using these words in everything from recruitment to conferences cultivates a bizarre class system within tech. When we start calling people rockstars, it suggests something about the role they play within a company or engineering team. It says 'these people are doing something really exciting' while everyone else, presumably, isn't. While it's true that hierarchies are part and parcel of any modern organization, this superficial labeling isn't helpful. 'Collaboration' and 'agile' are buzzwords that are as overused as ninja and rockstar, but at least they offer something practical and positive. And let's be honest: collaborating is important if we're to build better software and have better working lives.

The unforeseen impact on developer mental health

An unforeseen effect of these words could be a negative impact on mental health in the tech industry. We already know that burnout is becoming a common occurrence, as tech professionals are overworked and pushed to breaking point. This is particularly true of startup culture, where engineers are driven to develop products on incredibly tight schedules as owners seek investment and investors look for signs of growth, but the trend is spreading. Arguably, the gap between management and engineers is playing into this. The results of innovation look shiny and exciting, but the actual work, which can, as we know, be as repetitive, boring, and hard as it is enjoyable, isn't properly understood.

Ninjas, rockstars and the commodification of technical skill

It has become a truism that communication is important when it comes to tech. Words like ninja and rockstar make communication hard; they undermine our ability to communicate. They conceal the work that engineers actually do. It's great that they shine a spotlight on technical professionals and skills, but they do so in a way that is actually quite euphemistic. It hints at the value of the skills, but fails to engage with why these skills are important, how you develop them, and how they can be properly leveraged. More specifically, words like 'ninja' and 'rockstar' turn technical expertise into a product. They make skills marketable; they turn knowledge into a commodity. Good for you if that helps you earn a little more money in your next job; even better if it lands you a book deal or a spot speaking at a conference. But all you're really doing is taking advantage of the technical skills bubble.
It won't last forever, and in the long run it probably won't be good news for the industry. Some people will be overvalued, while others will be undervalued.

Ninjas, rockstars and open source culture

These irritating euphemisms come from a number of different sources. Tech recruitment has played a big part, as companies try to attract top tech talent. So has modern corporate culture, which has been trying to loosen its proverbial tie for the last decade. But it's also worth noting that rockstars and ninjas come out of open source culture. This is a culture that is celebrated for its lack of authority; however, that very lack of authority opens up a space for 'experts' to take the lead. In the past, we might have quaintly referred to these people as 'community figures'. As open source has moved mainstream, the commodification of technical expertise has found form in the tech ninja and rockstar. But while rockstars and ninjas appear to be the shining lights of open source culture, they might be damaging to it as well. If open source culture is 'led' by a number of people the world begins referring to as rockstars, its very foundations begin to move. It's no longer quite as open as it used to be. True, perhaps we need rockstars and ninjas in open source. These are people that evangelize about certain projects. These are people who can offer unique and useful perspectives on important debates and issues in their respective fields, right? Well, sort of. It is important for people to discuss new ideas and pioneer new ways of doing things. But this doesn't mean we need to sex things up. After all, certain people aren't more entitled to an opinion just because they have a book deal. Yes, they have experience, but it's important that communities don't get left behind as the tech industry chases the dream of relentless innovation. Of course it's great to talk about, but we've all got to do the work too.

How to ace managing the Endpoint Operations Management Agent with vROps

Vijin Boricha
30 May 2018
10 min read
In this tutorial, you will learn the functionality of the Endpoint Operations Management Agent and how you can effectively manage it. You can install the Endpoint Operations Management Agent from a tar.gz or .zip archive, or from an operating-system-specific installer for Windows or for Linux-like systems that support RPM. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater. The agent can come bundled with JRE or without JRE.

Installing the Agent

Before you start, make sure you have vRealize Operations user credentials with enough permissions to deploy the agent. Let's see how we can install the agent on both Linux and Windows machines.

Manually installing the Agent on a Linux Endpoint

Perform the following steps to install the Agent on a Linux machine:

Download the appropriate distributable package from https://my.vmware.com and copy it over to the machine. The Linux .rpm file should be installed with the root credentials.
Log in to the machine and run the following command to install the agent:

```
[root]# rpm -Uvh <DistributableFileName>
```

In this example, we are installing on CentOS. Only when we start the service will it generate a token and ask for the server's details.

Start the Endpoint Operations Management Agent service by running:

```
service epops-agent start
```

The previous command will prompt you to enter the following information about the vRealize Operations instance:

vRealize Operations server hostname or IP: If the server is on the same machine as the agent, you can enter localhost. If a firewall is blocking traffic from the agent to the server, allow the traffic to that address through the firewall.
Default port: Specify the SSL port on the vRealize Operations server to which the agent must connect. The default port is 443.
Certificate to trust: You can configure Endpoint Operations Management agents to trust the root or intermediate certificate, to avoid having to reconfigure all agents if the certificate on the analytics nodes and remote collectors is modified.
vRealize Operations credentials: Enter the name of a vRealize Operations user with agent manager permissions.

The agent token is generated by the Endpoint Operations Management Agent. It is only created the first time the endpoint is registered to the server. The name of the token is random and unique, and is based on the name and IP of the machine. If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. Provide the necessary information at each prompt.

Go into the vRealize Operations UI, navigate to Administration | Configuration | Endpoint Operations, and select the Agents tab. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can also verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page. If you encounter issues during the installation and configuration, you can check the /opt/vmware/epops-agent/log/agent.log log file for more information. We will examine this log file in more detail later in this book.
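Instead of answering the interactive prompts, you can pre-populate the agent configuration file before starting the service; as noted in the Windows section below, values specified in the agent properties file are used automatically. The snippet that follows is an illustrative sketch: the property keys match the agent.properties keys used in the automated installation section later in this article, but the values shown (server address, user name, password, thumbprint) are placeholders you would replace with your own.

```properties
# /opt/vmware/epops-agent/conf/agent.properties (illustrative placeholder values)
agent.setup.serverIP=vrops.example.local
agent.setup.serverSSLPort=443
agent.setup.serverLogin=epops-admin
agent.setup.serverPword=VMware1!
agent.setup.serverCertificateThumbprint=AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD
```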
Manually installing the agent on a Windows Endpoint

On a Windows machine, the Endpoint Operations Management Agent is installed as a service. The following steps outline how to install the Agent via the .zip archive bundle, but you can also install it via an executable file. Perform the following steps to install the Agent on a Windows machine:

Download the agent .zip file. Make sure the file version matches your Microsoft Windows OS.
Open Command Prompt and navigate to the bin folder within the extracted folder.
Run the following command to install the agent:

```
ep-agent.bat install
```

After the install is complete, start the agent by executing:

```
ep-agent.bat start
```

If this is the first time you are installing the agent, a setup process will be initiated. If you have specified configuration values in the agent properties file, those will be used. Upon starting the service, it will prompt you for the same values as it did when we installed it on a Linux machine. Enter the following information about the vRealize Operations instance:

vRealize Operations server hostname or IP
Default port
Certificate to trust
vRealize Operations credentials

If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page; because the agent is collecting data, its collection status is shown as green. Alternatively, you can also verify the collection status of the Endpoint Operations Management Agent by navigating to Administration | Configuration | Endpoint Operations and selecting the Agents tab.

Automated agent installation using vRealize Automation

In a typical environment, the Endpoint Operations Management Agent will not be installed on all virtual machines or physical servers. Usually, only those servers holding critical applications or services will be monitored using the agent. When only a modest number of servers need the agent, manually installing and updating it can be done with reasonable administrative effort. If we have an environment where we need to install the agent on many VMs, and we are using some kind of provisioning and automation engine with application-provisioning capabilities, such as Microsoft System Center Configuration Manager or VMware vRealize Automation, it is not recommended to install the agent in the VM template that will be cloned. If for whatever reason you have to do so, do not start the Endpoint Operations Management Agent, or remove the Endpoint Operations Management token and data before the cloning takes place. If the Agent is started before the cloning, a client token is created and all clones will appear as the same object in vRealize Operations.

In the following example, we will show how to install the Agent via vRealize Automation as part of the provisioning process. I will not go into too much detail on how to create blueprints in vRealize Automation, but I will give you the basic steps to create a software component that will install the agent and add it to a blueprint. Perform the following steps to create a software component that will install the Endpoint Operations Management Agent and add it to a vRealize Automation blueprint:

Open the vRealize Automation portal page, navigate to Design and the Software Components tab, and click New to create a new software component.
On the General tab, fill in the information.
Click Properties and click New. Add the following six properties. All six are of type String and are marked Overridable; none are Encrypted, Required, or Computed:

RPMNAME: The name of the Agent RPM package
SERVERIP: The IP of the vRealize Operations server/cluster
SERVERLOGIN: A username with enough permissions in vRealize Operations
SERVERPWORD: The password for that user
SRVCRTTHUMB: The thumbprint of the vRealize Operations certificate
SSLPORT: The port on which to communicate with vRealize Operations

These values will be used to configure the Agent once it is installed.

Click Actions and configure the Install, Configure, and Start actions, all with a Bash script type and with Reboot left unchecked:

Install:

```bash
rpm -i http://<FQDNOfTheServerWhereTheRPMFileIsLocated>/$RPMNAME
```

Configure:

```bash
sed -i "s/#agent.setup.serverIP=localhost/agent.setup.serverIP=$SERVERIP/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverSSLPort=443/agent.setup.serverSSLPort=$SSLPORT/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverLogin=username/agent.setup.serverLogin=$SERVERLOGIN/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverPword=password/agent.setup.serverPword=$SERVERPWORD/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverCertificateThumbprint=/agent.setup.serverCertificateThumbprint=$SRVCRTTHUMB/" /opt/vmware/epops-agent/conf/agent.properties
```

Start:

```bash
/sbin/service epops-agent start
/sbin/chkconfig epops-agent on
```

Click Ready to Complete and click Finish.
Edit the blueprint into which you want the agent to be installed. From Categories, select Software Components.
Select the Endpoint Operations Management software component and drag and drop it onto the machine type (virtual machine) where you want the agent to be installed.
Click Save to save the blueprint.

This completes the necessary steps to automate the Endpoint Operations Management Agent installation as a software component in vRealize Automation. With every deployment of the blueprint, the Agent will be installed after cloning and uniquely identified to vRealize Operations.

Reinstalling the agent

When reinstalling the agent, various elements are affected, such as:

Already-collected metrics
Identification tokens that enable a reinstalled agent to report on the previously discovered objects on the server

The following locations contain agent-related data which remains when the agent is uninstalled:

The data folder, containing the keystore
The epops-token platform token file, which is created before agent registration

Data continuity will not be affected when the data folder is deleted. This is not the case with the epops-token file. If you delete the token file, the agent will not be synchronized with the previously discovered objects upon a new installation.
Reinstalling the agent on a Linux endpoint Perform the following steps to reinstall the agent on a Linux machine: If you are removing the agent completely and are not going to reinstall it, delete the data directory by running: $ rm -rf /opt/epops-agent/data If you are completely removing the client and are not going to reinstall or upgrade it, delete the epops-token file: $ rm /etc/vmware/epops-token Run the following command to uninstall the Agent: $ yum remove <AgentFullName> If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations. You can now install the client again, the same way as we did earlier in this book. Reinstalling the agent on a Windows endpoint Perform the following steps to reinstall the Agent on a Windows machine: From the CLI, change the directory to the agent bin directory: cd C:\epops-agent\bin Stop the agent by running: C:\epops-agent\bin> ep-agent.bat stop Remove the agent service by running: C:\epops-agent\bin> ep-agent.bat remove If you are completely removing the client and are not going to reinstall it, delete the data directory by running: C:\epops-agent> rd /s data If you are completely removing the client and are not going to reinstall it, delete the epops-token file using the CLI or Windows Explorer: C:\epops-agent> del "C:\ProgramData\VMware\EP Ops agent\epops-token" Uninstall the agent from the Control Panel. If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations. You can now install the client again, the same way as we did earlier in this book. You've become familiar with the Endpoint Operations Management Agent and what it can do. We also discussed how to install and reinstall it on both Linux and Windows operating systems. To know more about vSphere and vRealize automation workload placement, check out our book Mastering vRealize Operations Manager - Second Edition. VMware vSphere storage, datastores, snapshots What to expect from vSphere 6.7 KVM Networking with libvirt
How to assemble a DIY selfie drone with Arduino and ESP8266

Vijin Boricha
29 May 2018
10 min read
Have you ever thought of something that can take a photo from the air, or perhaps take a selfie from it? How about we build a drone for taking selfies and recording videos from the air? Taking photos from the sky is one of the most exciting things in photography right now. You can shoot from helicopters, planes, or even from satellites. Unless you own a personal air vehicle, or someone you know does, this is a costly affair that is sure to burn through your pockets. Drones come in handy here. Have you ever googled drone photography? If you have, I am sure you'll want to build or buy a drone for photography, because of the amazing views of everyday subjects taken from the sky. Today, we will learn to build a drone for aerial photography and videography. This tutorial is an excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha. We assume you know how to build your customized frame; if not, you can refer to our book, or you may buy the HobbyKing X930 glass fiber frame and connect the parts together as directed in the manual. However, I have a few suggestions to help you carry out a better assembly of the frame: Firstly, attach the motors, with their mounts, to the legs, wings, or arms of the frame. Tighten them firmly, as they will carry and hold the most important equipment of the drone. Then, connect them to the base and, later, the other parts, with firm connections. Now, we will calibrate our ESCs. We will take the signal cable from an ESC (the motor is plugged into the ESC; careful, don't connect the propeller) and connect it to the throttle pins on the radio. Make sure the transmitter is turned on and the throttle is in the lowest position. Now, plug the battery into the ESC and you will hear a beep. Now, gradually increase the throttle from the transmitter. Your motor will start spinning at some arbitrary throttle position. This is because the ESC is not calibrated. So, you need to tell the ESC where the high point and the low point of the throttle are. Disconnect the battery first. Increase the throttle of the transmitter to the highest position and power the ESC. Your ESC will now beep once and then beep three times every four seconds. Now, move the throttle to the bottommost position and you will hear the ESC beep, indicating it is ready and calibrated. Now, you can increase the throttle on the transmitter and see that it works across the whole range, from low to high. Now, mount the motors, connect them to the ESCs, and then connect them to the ArduPilot, working through the output pins one by one. Now, connect your GPS to the ArduPilot and calibrate it. Now, our drone is ready to fly. I would suggest you fly the drone for about 10-15 minutes before connecting the camera. Connecting the camera For a photography drone, connecting the camera and controlling it is one of the most important things. Your pictures and videos will be spoiled if you cannot adjust the camera and stabilize it properly. In our case, we will use a camera gimbal to hold the camera and move it from the ground. Choosing a gimbal The camera gimbal holds the camera for you and can change the camera's direction according to your commands. There are a number of camera gimbals out there. You can choose any type, depending on your needs and your camera's size and specification. If you want to use a DSLR camera, you should use a bigger gimbal and, if you use a point and shoot camera or an action camera, you may use a small- or medium-sized gimbal. There are two types of gimbals: a brushless gimbal and a standard gimbal. 
The standard gimbal has servo motors and gears. If you use an FPV camera, then a standard gimbal with a 2-axis manual mount is the best option. The standard gimbal is not heavy; it is lightweight and not expensive. The best thing is you will not need an external controller board for your standard camera gimbal. The brushless gimbal is for professional aero photographers. It is smooth and can shoot videos or photos with better quality. The brushless gimbal will need an external controller board for your drone and the brushless gimbal is heavier than the standard gimbal. Choosing the best gimbal is one of the hard things for a photographer, as the stabilization of the image is a must for photoshoots. If you cannot control the camera from the ground, then using a gimbal is worthless. The following picture shows a number of gimbals: After choosing your camera and the gimbal, the first thing is to mount the gimbal and the camera to the drone. Make sure the mount is firm, but not too hard, because it will make the camera shake while flying the drone. You may use the Styrofoam or rubber pieces that came with the gimbal to reduce the vibration and make the image stable. Configuring the camera with the ArduPilot Configuring the camera with the ArduPilot is easy. Before going any further, let us learn a few things about the camera gimbal's Euler angels: Tilt: This moves the camera sloping position (range -90 degrees to +90 degrees), it is the motion (clockwise-anticlockwise) with the vertical axis Roll: This is a motion ranging from 0 degrees to 360 degrees parallel to the horizontal axis Pan: This is the same type motion of roll ranging from 0 degrees to 360 degrees but in the vertical axis Shutter: This is a switch that triggers a click or sends a signal Firstly, we are going to use the standard gimbal. Basically, there are two servos in a standard gimbal. One is for pitch or tilt and another is for the roll. So, a standard gimbal gives you a two-dimensional motion with the camera viewpoint. Connection Follow these steps to connect the camera to the ArduPilot: Take the pitch servo's signal pin and connect it to the 11th pin of the ArduPilot (A11) and the roll signal to the 10th pin (A10). Make sure you connect only the signal (S pin) cable of the servos to the pin, not the other two pins (ground and the VCC). The signal cables must be connected to the innermost pins of the A11 and A10 pins (two pins make a raw; see the following picture for clarification): My suggestion is adding an extra battery for your gimbal's servos. If you want to connect your servo directly to the ArduPilot, your ArduPilot will not perform well, as the servos will draw power. Now, connect your ArduPilot to your PC using wire or telemetry. Go to the Initial Setup menu and, under Optional Hardware, you will find another option called Camera Gimbal. Click on this and you will see the following screen: For the Tilt, change the pin to RC11; for the Roll, change the pin to RC10; and for Shutter, change it to CH7. If you want to change the Tilt during the flight from the transmitter, you need to change the Input Ch of the Tilt. See the following screenshot: Now, you need to change an option in the Configuration | Extended Tuning page. Set Ch6 Opt to None, as in the following screenshot, and hit the Write Params button: We need to align the minimum and maximum PWM values for the servos of the gimbal. 
To do that, we can tilt the frame of the gimbal to the leftmost position and from the transmitter, move the knob to the minimum position and start increasing, your servo will start to move at any time, then stop moving the knob. For the maximum calibration, move the Tilt to the rightmost position and do the same thing for the knob with the maximum position. Do the same thing for the pitch with the forward and backward motion. We also need to level the gimbal for better performance. To do that, you need to keep the gimbal frame level to the ground and set the Camera Gimbal option, the Servo Limits, and the Angle Limits. Change them as per the level of the frame. Controlling the camera Controlling the camera to take selfies or record video is easy. You can use the shutter pin we used before or the camera's mobile app for controlling the camera. My suggestion is to use the camera's app to take shots because you will get a live preview of what you are shooting and it will be easy to control the camera shots. However, if you want to use the Shutter button manually from the transmitter then you can do this too. We have connected the RC7 pin for controlling a servo. You can use a servo or a receiver switch for your camera to manually trigger the shutter. To do that, you can buy a receiver controller on/off switch. You can use this switch for various purposes. Clicking the shutter of your camera is one of them. Manually triggering the camera is easy. It is usually done for point and shoot cameras. To do that, you need to update the firmware of your cameras. You can do this in many ways, but the easiest one will be discussed here. Your RECEIVER CONTROLLED ON/OFF SWITCH may look like the following: You can see five wires in the picture. The three wires together are, as usual, pins of the servo motor. Take out the signal cable (in this case, this is the yellow cable) and connect it to the RC7 pin of the ArduPilot. Then, connect the positive to one of the thick red wires. Take the camera's data cable and connect the other tick wire to the positive of the USB cable and the negative wire will be connected to the negative of the three connected wires. Then, an output of the positive and negative wire will go to the battery (an external battery is suggested for the camera). To upgrade the camera firmware, you need to go to the camera's website and upgrade the firmware for the remote shutter option. In my case, the website is http://chdk.wikia.com/wiki/CHDK . I have downloaded it for a Canon point and shoot camera. You can also use action cameras for your drones. They are cheap and can be controlled remotely via mobile applications. Flying and taking shots Flying the photography drone is not that difficult. My suggestion is to lock the altitude and fly parallel to the ground. If you use a camera remote controller or an app, then it is really easy to take the photo or record a video. However, if you use the switch, as we discussed, then you need to open and connect your drone to the mission planner via telemetry. Go to the flight data, right click on the map, and then click the Trigger Camera Now option. It will trigger the Camera Shutter button and start recording or take a photo. You can do this when your drone is in a locked position and, using the timer, take a shot from above, which can be a selfie too. Let's try it. Let me know what happens and whether you like it or not. 
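The receiver-controlled switch described above does its job in hardware, with no extra code on the flight controller. Purely as an illustration of the same idea, here is a minimal Arduino sketch (not part of the ArduPilot setup in this excerpt) that reads the PWM pulse from a spare receiver channel and nudges a small servo to press a camera shutter. The pin numbers, servo angles, and pulse threshold are assumptions you would adapt to your own wiring.

```cpp
#include <Servo.h>

const int RC_PIN = 7;           // spare receiver channel wired to digital pin 7 (assumed)
const int SERVO_PIN = 9;        // shutter servo signal pin (assumed)
const unsigned long TRIGGER_US = 1700;  // pulse width above which the switch counts as "on"

Servo shutterServo;

void setup() {
  pinMode(RC_PIN, INPUT);
  shutterServo.attach(SERVO_PIN);
  shutterServo.write(0);        // rest position, shutter not pressed
}

void loop() {
  // pulseIn returns the high-pulse width of the RC PWM signal in microseconds
  unsigned long pulse = pulseIn(RC_PIN, HIGH, 25000);

  if (pulse > TRIGGER_US) {
    shutterServo.write(60);     // press the shutter
    delay(300);
    shutterServo.write(0);      // release
    delay(700);                 // simple debounce so one flick gives one shot
  }
}
```

With something like this, flicking the assigned switch on the transmitter presses the shutter once per flick, which is the same behavior the receiver-controlled switch gives you without any code.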
Next, learn to build other drones like a mission control drone or gliding drones from our book Building Smart Drones with ESP8266 and Arduino. Drones: Everything you ever wanted to know! How to build an Arduino based ‘follow me’ drone Tips and tricks for troubleshooting and flying drones safely
How to build Deep convolutional GAN using TensorFlow and Keras

Savia Lobo
29 May 2018
13 min read
In this tutorial, we will learn to build both simple and deep convolutional GAN models with the help of TensorFlow and Keras deep learning frameworks. [box type="note" align="" class="" width=""]This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango.[/box] Simple GAN with TensorFlow For building the GAN with TensorFlow, we build three networks, two discriminator models, and one generator model with the following steps: Start by adding the hyper-parameters for defining the network: # graph hyperparameters g_learning_rate = 0.00001 d_learning_rate = 0.01 n_x = 784 # number of pixels in the MNIST image # number of hidden layers for generator and discriminator g_n_layers = 3 d_n_layers = 1 # neurons in each hidden layer g_n_neurons = [256, 512, 1024] d_n_neurons = [256] # define parameter ditionary d_params = {} g_params = {} activation = tf.nn.leaky_relu w_initializer = tf.glorot_uniform_initializer b_initializer = tf.zeros_initializer Next, define the generator network: z_p = tf.placeholder(dtype=tf.float32, name='z_p', shape=[None, n_z]) layer = z_p # add generator network weights, biases and layers with tf.variable_scope('g'): for i in range(0, g_n_layers): w_name = 'w_{0:04d}'.format(i) g_params[w_name] = tf.get_variable( name=w_name, shape=[n_z if i == 0 else g_n_neurons[i - 1], g_n_neurons[i]], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) g_params[b_name] = tf.get_variable( name=b_name, shape=[g_n_neurons[i]], initializer=b_initializer()) layer = activation( tf.matmul(layer, g_params[w_name]) + g_params[b_name]) # output (logit) layer i = g_n_layers w_name = 'w_{0:04d}'.format(i) g_params[w_name] = tf.get_variable( name=w_name, shape=[g_n_neurons[i - 1], n_x], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) g_params[b_name] = tf.get_variable( name=b_name, shape=[n_x], initializer=b_initializer()) g_logit = tf.matmul(layer, g_params[w_name]) + g_params[b_name] g_model = tf.nn.tanh(g_logit) Next, define the weights and biases for the two discriminator networks that we shall build: with tf.variable_scope('d'): for i in range(0, d_n_layers): w_name = 'w_{0:04d}'.format(i) d_params[w_name] = tf.get_variable( name=w_name, shape=[n_x if i == 0 else d_n_neurons[i - 1], d_n_neurons[i]], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) d_params[b_name] = tf.get_variable( name=b_name, shape=[d_n_neurons[i]], initializer=b_initializer()) #output (logit) layer i = d_n_layers w_name = 'w_{0:04d}'.format(i) d_params[w_name] = tf.get_variable( name=w_name, shape=[d_n_neurons[i - 1], 1], initializer=w_initializer()) b_name = 'b_{0:04d}'.format(i) d_params[b_name] = tf.get_variable( name=b_name, shape=[1], initializer=b_initializer()) Now using these parameters, build the discriminator that takes the real images as input and outputs the classification: # define discriminator_real # input real images x_p = tf.placeholder(dtype=tf.float32, name='x_p', shape=[None, n_x]) layer = x_p with tf.variable_scope('d'): for i in range(0, d_n_layers): w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) layer = activation( tf.matmul(layer, d_params[w_name]) + d_params[b_name]) layer = tf.nn.dropout(layer,0.7) #output (logit) layer i = d_n_layers w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) d_logit_real = tf.matmul(layer, d_params[w_name]) + d_params[b_name] d_model_real = tf.nn.sigmoid(d_logit_real)  Next, build another discriminator network, with the same parameters, but providing the output of generator as 
input: # define discriminator_fake # input generated fake images z = g_model layer = z with tf.variable_scope('d'): for i in range(0, d_n_layers): w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) layer = activation( tf.matmul(layer, d_params[w_name]) + d_params[b_name]) layer = tf.nn.dropout(layer,0.7) #output (logit) layer i = d_n_layers w_name = 'w_{0:04d}'.format(i) b_name = 'b_{0:04d}'.format(i) d_logit_fake = tf.matmul(layer, d_params[w_name]) + d_params[b_name] d_model_fake = tf.nn.sigmoid(d_logit_fake) Now that we have the three networks built, the connection between them is made using the loss, optimizer and training functions. While training the generator, we only train the generator's parameters and while training the discriminator, we only train the discriminator's parameters. We specify this using the var_list parameter to the optimizer's minimize() function. Here is the complete code for defining the loss, optimizer and training function for both kinds of network: g_loss = -tf.reduce_mean(tf.log(d_model_fake)) d_loss = -tf.reduce_mean(tf.log(d_model_real) + tf.log(1 - d_model_fake)) g_optimizer = tf.train.AdamOptimizer(g_learning_rate) d_optimizer = tf.train.GradientDescentOptimizer(d_learning_rate) g_train_op = g_optimizer.minimize(g_loss, var_list=list(g_params.values())) d_train_op = d_optimizer.minimize(d_loss, var_list=list(d_params.values()))  Now that we have defined the models, we have to train the models. The training is done as per the following algorithm: For each epoch: For each batch: get real images x_batch generate noise z_batch train discriminator using z_batch and x_batch generate noise z_batch train generator using z_batch The complete code for training from the notebook is as follows: n_epochs = 400 batch_size = 100 n_batches = int(mnist.train.num_examples / batch_size) n_epochs_print = 50 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(n_epochs): epoch_d_loss = 0.0 epoch_g_loss = 0.0 for batch in range(n_batches): x_batch, _ = mnist.train.next_batch(batch_size) x_batch = norm(x_batch) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) feed_dict = {x_p: x_batch,z_p: z_batch} _,batch_d_loss = tfs.run([d_train_op,d_loss], feed_dict=feed_dict) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) feed_dict={z_p: z_batch} _,batch_g_loss = tfs.run([g_train_op,g_loss], feed_dict=feed_dict) epoch_d_loss += batch_d_loss epoch_g_loss += batch_g_loss if epoch%n_epochs_print == 0: average_d_loss = epoch_d_loss / n_batches average_g_loss = epoch_g_loss / n_batches print('epoch: {0:04d} d_loss = {1:0.6f} g_loss = {2:0.6f}' .format(epoch,average_d_loss,average_g_loss)) # predict images using generator model trained x_pred = tfs.run(g_model,feed_dict={z_p:z_test}) display_images(x_pred.reshape(-1,pixel_size,pixel_size)) We printed the generated images every 50 epochs: As we can see the generator was producing just noise in epoch 0, but by epoch 350, it got trained to produce much better shapes of handwritten digits. You can try experimenting with epochs, regularization, network architecture and other hyper-parameters to see if you can produce even faster and better results. 
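The training loop above relies on a few helpers (norm, display_images) and constants (n_z, z_test, pixel_size) that are defined elsewhere in the book's notebook. If you are reproducing the example, minimal stand-ins along the following lines should work; the exact definitions in the notebook may differ, and n_z = 256 is taken from the z_in shape printed in the Keras section below.

```python
import numpy as np
import matplotlib.pyplot as plt

n_z = 256          # size of the noise vector (matches the z_in shape shown later)
pixel_size = 28    # MNIST images are 28 x 28, i.e. n_x = 784

def norm(x):
    # scale pixel values from [0, 1] to [-1, 1] to match the generator's tanh output
    return (x - 0.5) / 0.5

# fixed noise vectors reused at every checkpoint so progress is comparable
z_test = np.random.uniform(-1.0, 1.0, size=[8, n_z])

def display_images(images, n_cols=8):
    # plot a single row of generated digits
    fig, axes = plt.subplots(1, n_cols, figsize=(n_cols, 1.5))
    for img, ax in zip(images, axes):
        ax.imshow(img, cmap='gray')
        ax.axis('off')
    plt.show()
```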
Simple GAN with Keras Now let us implement the same model in Keras:  The hyper-parameter definitions remain the same as the last section: # graph hyperparameters g_learning_rate = 0.00001 d_learning_rate = 0.01 n_x = 784 # number of pixels in the MNIST image # number of hidden layers for generator and discriminator g_n_layers = 3 d_n_layers = 1 # neurons in each hidden layer g_n_neurons = [256, 512, 1024] d_n_neurons = [256]  Next, define the generator network: # define generator g_model = Sequential() g_model.add(Dense(units=g_n_neurons[0], input_shape=(n_z,), name='g_0')) g_model.add(LeakyReLU()) for i in range(1,g_n_layers): g_model.add(Dense(units=g_n_neurons[i], name='g_{}'.format(i) )) g_model.add(LeakyReLU()) g_model.add(Dense(units=n_x, activation='tanh',name='g_out')) print('Generator:') g_model.summary() g_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=g_learning_rate) ) This is what the generator model looks like: In the Keras example, we do not define two discriminator networks as we defined in the TensorFlow example. Instead, we define one discriminator network and then stitch the generator and discriminator network into the GAN network. The GAN network is then used to train the generator parameters only, and the discriminator network is used to train the discriminator parameters: # define discriminator d_model = Sequential() d_model.add(Dense(units=d_n_neurons[0], input_shape=(n_x,), name='d_0' )) d_model.add(LeakyReLU()) d_model.add(Dropout(0.3)) for i in range(1,d_n_layers): d_model.add(Dense(units=d_n_neurons[i], name='d_{}'.format(i) )) d_model.add(LeakyReLU()) d_model.add(Dropout(0.3)) d_model.add(Dense(units=1, activation='sigmoid',name='d_out')) print('Discriminator:') d_model.summary() d_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.SGD(lr=d_learning_rate) ) This is what the discriminator models look: Discriminator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= d_0 (Dense) (None, 256) 200960 _________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, 256) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 256) 0 _________________________________________________________________ d_out (Dense) (None, 1) 257 ================================================================= Total params: 201,217 Trainable params: 201,217 Non-trainable params: 0 _________________________________________________________________ Next, define the GAN Network, and turn the trainable property of the discriminator model to false, since GAN would only be used to train the generator: # define GAN network d_model.trainable=False z_in = Input(shape=(n_z,),name='z_in') x_in = g_model(z_in) gan_out = d_model(x_in) gan_model = Model(inputs=z_in,outputs=gan_out,name='gan') print('GAN:') gan_model.summary() gan_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=g_learning_rate) ) This is what the GAN model looks: GAN: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= z_in (InputLayer) (None, 256) 0 _________________________________________________________________ sequential_1 (Sequential) (None, 784) 1526288 _________________________________________________________________ sequential_2 (Sequential) (None, 1) 201217 
================================================================= Total params: 1,727,505 Trainable params: 1,526,288 Non-trainable params: 201,217 _________________________________________________________________  Great, now that we have defined the three models, we have to train the models. The training is as per the following algorithm: For each epoch: For each batch: get real images x_batch generate noise z_batch generate images g_batch using generator model combine g_batch and x_batch into x_in and create labels y_out set discriminator model as trainable train discriminator using x_in and y_out generate noise z_batch set x_in = z_batch and labels y_out = 1 set discriminator model as non-trainable train gan model using x_in and y_out, (effectively training generator model) For setting the labels, we apply the labels as 0.9 and 0.1 for real and fake images respectively. Generally, it is suggested that you use label smoothing by picking a random value from 0.0 to 0.3 for fake data and 0.8 to 1.0 for real data. Here is the complete code for training from the notebook: n_epochs = 400 batch_size = 100 n_batches = int(mnist.train.num_examples / batch_size) n_epochs_print = 50 for epoch in range(n_epochs+1): epoch_d_loss = 0.0 epoch_g_loss = 0.0 for batch in range(n_batches): x_batch, _ = mnist.train.next_batch(batch_size) x_batch = norm(x_batch) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) g_batch = g_model.predict(z_batch) x_in = np.concatenate([x_batch,g_batch]) y_out = np.ones(batch_size*2) y_out[:batch_size]=0.9 y_out[batch_size:]=0.1 d_model.trainable=True batch_d_loss = d_model.train_on_batch(x_in,y_out) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) x_in=z_batch y_out = np.ones(batch_size) d_model.trainable=False batch_g_loss = gan_model.train_on_batch(x_in,y_out) epoch_d_loss += batch_d_loss epoch_g_loss += batch_g_loss if epoch%n_epochs_print == 0: average_d_loss = epoch_d_loss / n_batches average_g_loss = epoch_g_loss / n_batches print('epoch: {0:04d} d_loss = {1:0.6f} g_loss = {2:0.6f}' .format(epoch,average_d_loss,average_g_loss)) # predict images using generator model trained x_pred = g_model.predict(z_test) display_images(x_pred.reshape(-1,pixel_size,pixel_size)) We printed the results every 50 epochs, up to 350 epochs: The model slowly learns to generate good quality images of handwritten digits from the random noise. There are so many variations of the GANs that it will take another book to cover all the different kinds of GANs. However, the implementation techniques are almost similar to what we have shown here. Deep Convolutional GAN with TensorFlow and Keras In DCGAN, both the discriminator and generator are implemented using a Deep Convolutional Network: 1.  
In this example, we decided to implement the generator as the following network: Generator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= g_in (Dense) (None, 3200) 822400 _________________________________________________________________ g_in_act (Activation) (None, 3200) 0 _________________________________________________________________ g_in_reshape (Reshape) (None, 5, 5, 128) 0 _________________________________________________________________ g_0_up2d (UpSampling2D) (None, 10, 10, 128) 0 _________________________________________________________________ g_0_conv2d (Conv2D) (None, 10, 10, 64) 204864 _________________________________________________________________ g_0_act (Activation) (None, 10, 10, 64) 0 _________________________________________________________________ g_1_up2d (UpSampling2D) (None, 20, 20, 64) 0 _________________________________________________________________ g_1_conv2d (Conv2D) (None, 20, 20, 32) 51232 _________________________________________________________________ g_1_act (Activation) (None, 20, 20, 32) 0 _________________________________________________________________ g_2_up2d (UpSampling2D) (None, 40, 40, 32) 0 _________________________________________________________________ g_2_conv2d (Conv2D) (None, 40, 40, 16) 12816 _________________________________________________________________ g_2_act (Activation) (None, 40, 40, 16) 0 _________________________________________________________________ g_out_flatten (Flatten) (None, 25600) 0 _________________________________________________________________ g_out (Dense) (None, 784) 20071184 ================================================================= Total params: 21,162,496 Trainable params: 21,162,496 Non-trainable params: 0 The generator is a stronger network having three convolutional layers followed by tanh activation. 
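Only the model summaries are reproduced in this excerpt. As a rough reconstruction, a Keras Sequential definition that yields the generator summary above could look like the following; the layer names, output shapes, and parameter counts are taken from the summary, while the hidden activation choice is an assumption.

```python
from keras.models import Sequential
from keras.layers import Dense, Activation, Reshape, UpSampling2D, Conv2D, Flatten

n_z = 256  # noise vector size, as in the GAN summary

g_model = Sequential()
g_model.add(Dense(3200, input_shape=(n_z,), name='g_in'))
g_model.add(Activation('relu', name='g_in_act'))           # activation choice assumed
g_model.add(Reshape((5, 5, 128), name='g_in_reshape'))     # 5 * 5 * 128 = 3200
for i, filters in enumerate([64, 32, 16]):
    g_model.add(UpSampling2D(size=(2, 2), name='g_{}_up2d'.format(i)))
    g_model.add(Conv2D(filters, kernel_size=(5, 5), padding='same',
                       name='g_{}_conv2d'.format(i)))
    g_model.add(Activation('relu', name='g_{}_act'.format(i)))  # assumed
g_model.add(Flatten(name='g_out_flatten'))                 # 40 * 40 * 16 = 25600
g_model.add(Dense(784, activation='tanh', name='g_out'))
g_model.summary()
```

The same pattern in reverse, a Conv2D and MaxPooling2D block followed by a Dense sigmoid output, produces the much smaller discriminator summarized next.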
We define the discriminator network as follows: Discriminator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= d_0_reshape (Reshape) (None, 28, 28, 1) 0 _________________________________________________________________ d_0_conv2d (Conv2D) (None, 28, 28, 64) 1664 _________________________________________________________________ d_0_act (Activation) (None, 28, 28, 64) 0 _________________________________________________________________ d_0_maxpool (MaxPooling2D) (None, 14, 14, 64) 0 _________________________________________________________________ d_out_flatten (Flatten) (None, 12544) 0 _________________________________________________________________ d_out (Dense) (None, 1) 12545 ================================================================= Total params: 14,209 Trainable params: 14,209 Non-trainable params: 0 _________________________________________________________________  The GAN network is composed of the discriminator and generator as demonstrated previously: GAN: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= z_in (InputLayer) (None, 256) 0 _________________________________________________________________ g (Sequential) (None, 784) 21162496 _________________________________________________________________ d (Sequential) (None, 1) 14209 ================================================================= Total params: 21,176,705 Trainable params: 21,162,496 Non-trainable params: 14,209 _________________________________________________________________ When we run this model for 400 epochs, we get the following output: As you can see, the DCGAN is able to generate high-quality digits starting from epoch 100 itself. The DGCAN has been used for style transfer, generation of images and titles and for image algebra, namely taking parts of one image and adding that to parts of another image. We built a simple GAN in TensorFlow and Keras and applied it to generate images from the MNIST dataset. We also built a DCGAN where the generator and discriminator consisted of convolutional networks. Do check out the book Mastering TensorFlow 1.x  to explore advanced features of TensorFlow 1.x and obtain in-depth knowledge of TensorFlow for solving artificial intelligence problems. 5 reasons to learn Generative Adversarial Networks (GANs) in 2018 Implementing a simple Generative Adversarial Network (GANs) Getting to know Generative Models and their types
How to integrate Firebase on Android/iOS applications natively

Savia Lobo
28 May 2018
26 min read
In this tutorial, you'll see Firebase integration within a native context, basically over an iOS and Android application. You will also implement some of the basic, as well as advanced features, that are found in any modern mobile application in both, Android and iOS ecosystems. So let's get busy! This article is an excerpt taken from the book,' Firebase Cookbook', written by Houssem Yahiaoui. Implement the pushing and retrieving of data from Firebase Real-time Database We're going to start first with Android and see how we can manage this feature: First, head to your Android Studio project. Now that you have opened your project, let's move on to integrating the Real-time Database. In your project, head to the Menu bar, navigate to Tools | Firebase, and then select Realtime Database. Now click Save and retrieve data. Since we've already connected our Android application to Firebase, let's now add the Firebase Real-time Database dependencies locally by clicking on the Add the Realtime Database to your app button. This will give you a screen that looks like the following screenshot:  Figure 1: Android Studio  Firebase integration section Click on the Accept Changes button and the gradle will add these new dependencies to your gradle file and download and build the project. Now we've created this simple wish list application. It might not be the most visually pleasing but will serve us well in this experiment with TextEdit, a Button, and a ListView. So, in our experiment we want to do the following: Add a new wish to our wish list Firebase Database See the wishes underneath our ListView Let's start with adding that list of data to our Firebase. Now head to your MainActivity.java file of any other activity related to your project and add the following code: //[*] UI reference. EditText wishListText; Button addToWishList; ListView wishListview; // [*] Getting a reference to the Database Root. DatabaseReference fRootRef = FirebaseDatabase.getInstance().getReference(); //[*] Getting a reference to the wishes list. DatabaseReference wishesRef = fRootRef.child("wishes"); protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //[*] UI elements wishListText = (EditText) findViewById(R.id.wishListText); addToWishList = (Button) findViewById(R.id.addWishBtn); wishListview = (ListView) findViewById(R.id.wishsList); } @Override protected void onStart() { super.onStart(); //[*] Listening on Button click event addToWishList.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { //[*] Getting the text from our EditText UI Element. String wish = wishListText.getText().toString(); //[*] Pushing the Data to our Database. 
wishesRef.push().setValue(wish); AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create(); alertDialog.setTitle("Success"); alertDialog.setMessage("wish was added to Firebase"); alertDialog.show(); } }); } In the preceding code, we're doing the following: Getting a reference to our UI elements Since everything in Firebase starts with a reference, we're grabbing ourselves a reference to the root element in our database We are getting another reference to the wishes child method from the root reference Over the OnCreate() method, we are binding all the UI-based references to the actual UI widgets Over the OnStart() method, we're doing the following: Listening to the button click event and grabbing the EditText content Using the wishesRef.push().setValue() method to push the content of the EditText automatically to Firebase, then we are displaying a simple AlertDialog as the UI preferences However, the preceding code is not going to work. This is strange since everything is well configured, but the problem here is that the Firebase Database is secured out of the box with authorization rules. So, head to Database | RULES and change the rules there, and then publish. After that is done, the result will look similar to the following screenshot:  Figure 2: Firebase Real-time Database authorization section After saving and launching the application, the pushed data result will look like this: Figure 3: Firebase Real-time Database after adding a new wish to the wishes collection Firebase creates the child element in case you didn't create it yourself. This is great because we can create and implement any data structure we want, however, we want. Next, let's see how we can retrieve the data we sent. Move back to your onStart() method and add the following code lines: wishesRef.addChildEventListener(new ChildEventListener() { @Override public void onChildAdded(DataSnapshot dataSnapshot, String s) { //[*] Grabbing the data Snapshot String newWish = dataSnapshot.getValue(String.class); wishes.add(newWish); adapter.notifyDataSetChanged(); } @Override public void onChildChanged(DataSnapshot dataSnapshot, String s) {} @Override public void onChildRemoved(DataSnapshot dataSnapshot) {} @Override public void onChildMoved(DataSnapshot dataSnapshot, String s) {} @Override public void onCancelled(DatabaseError databaseError) {} }); Before you implement the preceding code, go to the onCreate() method and add the following line underneath the UI widget reference: //[*] Adding an adapter. adapter = new ArrayAdapter<String>(this, R.layout.support_simple_spinner_dropdown_item, wishes); //[*] Wiring the Adapter wishListview.setAdapter(adapter);  Preceding that, in the variable declaration, simply add the following declaration: ArrayList<String> wishes = new ArrayList<String>(); ArrayAdapter<String> adapter; So, in the preceding code, we're doing the following: Adding a new ArrayList and an adapter for ListView changes. We're wiring everything in the onCreate() method. Wiring an addChildEventListener() in the wishes Firebase reference. Grabbing the data snapshot from the Firebase Real-time Database that is going to be fired whenever we add a new wish, and then wiring the list adapter to notify the wishListview which is going to update our Listview content automatically. Congratulations! You've just wired and exploited the Real-time Database functionality and created your very own wishes tracker. 
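A note on the rules change mentioned above: the screenshot is not reproduced here, but for a quick development test the Realtime Database rules are typically opened up along the following lines. This is only for experimenting; it lets anyone read and write the database, so never ship it to production.

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```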
Now, let's see how we can create our very own iOS wishes tracker application using nothing but Swift and Firebase: Head directly to and fire up Xcode, and let's open up the project, where we integrated Firebase. Let's work on the feature. Edit your Podfile and add the following line: pod 'Firebase/Database' This will download and install the Firebase Database dependencies locally, in your very own awesome wishes tracker application. There are two view controllers, one for the wishes table and the other one for adding a new wish to the wishes list, the following represents the main wishes list view.   Figure 4: iOS application wishes list view Once we click on the + sign button in the Header, we'll be navigated with a segueway to a new ViewModal, where we have a text field where we can add our new wish and a button to push it to our list.         Figure 5: Wishes iOS application, in new wish ViewModel Over  addNewWishesViewController.swift, which is the view controller for adding the new wish view, after adding the necessary UITextField, @IBOutlet and the button @IBAction, replace the autogenerated content with the following code lines: import UIKit import FirebaseDatabase class newWishViewController: UIViewController { @IBOutlet weak var wishText: UITextField //[*] Adding the Firebase Database Reference var ref: FIRDatabaseReference? override func viewDidLoad() { super.viewDidLoad() ref = FIRDatabase.database().reference() } @IBAction func addNewWish(_ sender: Any) { let newWish = wishText.text // [*] Getting the UITextField content. self.ref?.child("wishes").childByAutoId().setValue( newWish!) presentedViewController?.dismiss(animated: true, completion:nil) } } In the preceding code, besides the self-explanatory UI element code, we're doing the following: We're using the FIRDatabaseReference and creating a new Firebase reference, and we're initializing it with viewDidLoad(). Within the addNewWish IBAction (function), we're getting the text from the UITextField, calling for the "wishes" child, then we're calling childByAutoId(), which will create an automatic id for our data (consider it a push function, if you're coming from JavaScript). We're simply setting the value to whatever we're going to get from the TextField. Finally, we're dismissing the current ViewController and going back to the TableViewController which holds all our wishes. Implementing anonymous authentication Authentication is one of the most tricky, time-consuming and tedious tasks in any web application. and of course, maintaining the best practices while doing so is truly a hard job to maintain. For mobiles, it's even more complex, because if you're using any traditional application it will mean that you're going to create a REST endpoint, an endpoint that will take an email and password and return either a session or a token, or directly a user's profile information. In Firebase, things are a bit different and in this recipe, we're going to see how we can use anonymous authentication—we will explain that in a second. You might wonder, but why? The why is quite simple: to give users an anonymous temporal, to protect data and to give users an extra taste of your application's inner soul. So let's see how we can make that happen. How to do it... We will first see how we can implement anonymous authentication in Android: Fire up your Android Studio. 
Before doing anything, we need to get some dependencies first, speaking, of course, of the Firebase Auth library that can be downloaded by adding this line to the build.gradle file under the dependencies section: compile 'com.google.firebase:firebase-auth:11.0.2' Now simply Sync and you will be good to start adding Firebase Authentication logic. Let us see what we're going to get as a final result: Figure 6: Android application: anonymous login application A simple UI with a button and a TextView, where we put our user data after a successful authentication process. Here's the code for that simple UI: <?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/ apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context="com.hcodex.anonlogin.MainActivity"> <Button android:id="@+id/anonLoginBtn" android:layout_width="289dp" android:layout_height="50dp" android:text="Anounymous Login" android:layout_marginRight="8dp" app:layout_constraintRight_toRightOf="parent" android:layout_marginLeft="8dp" app:layout_constraintLeft_toLeftOf="parent" android:layout_marginTop="47dp" android:onClick="anonLoginBtn" app:layout_constraintTop_toBottomOf= "@+id/textView2" app:layout_constraintHorizontal_bias="0.506" android:layout_marginStart="8dp" android:layout_marginEnd="8dp" /> <TextView android:id="@+id/textView2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Firebase Anonymous Login" android:layout_marginLeft="8dp" app:layout_constraintLeft_toLeftOf="parent" android:layout_marginRight="8dp" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toTopOf="parent" android:layout_marginTop="80dp" /> <TextView android:id="@+id/textView3" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Profile Data" android:layout_marginTop="64dp" app:layout_constraintTop_toBottomOf= "@+id/anonLoginBtn" android:layout_marginLeft="156dp" app:layout_constraintLeft_toLeftOf="parent" /> <TextView android:id="@+id/profileData" android:layout_width="349dp" android:layout_height="175dp" android:layout_marginBottom="28dp" android:layout_marginEnd="8dp" android:layout_marginLeft="8dp" android:layout_marginRight="8dp" android:layout_marginStart="8dp" android:layout_marginTop="8dp" android:text="" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintHorizontal_bias="0.526" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintRight_toRightOf="parent" app:layout_constraintTop_toBottomOf= "@+id/textView3" /> </android.support.constraint.ConstraintLayout> Now, let's see how we can wire up our Java code: //[*] Step 1 : Defining Logic variables. FirebaseAuth anonAuth; FirebaseAuth.AuthStateListener authStateListener; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); anonAuth = FirebaseAuth.getInstance(); setContentView(R.layout.activity_main); }; //[*] Step 2: Listening on the Login button click event. 
public void anonLoginBtn(View view) { anonAuth.signInAnonymously() .addOnCompleteListener( this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { if(!task.isSuccessful()) { updateUI(null); } else { FirebaseUser fUser = anonAuth.getCurrentUser(); Log.d("FIRE", fUser.getUid()); updateUI(fUser); } }); } } //[*] Step 3 : Getting UI Reference private void updateUI(FirebaseUser user) { profileData = (TextView) findViewById( R.id.profileData); profileData.append("Anonymous Profile Id : n" + user.getUid()); } Now, let's see how we can implement anonymous authentication on iOS: What we'll achieve in this test is the following : Figure 7: iOS application, anonymous login application   Before doing anything, we need to download and install the Firebase authentication dependency first. Head directly over to your Podfile and the following line: pod 'Firebase/Auth' Then simply save the file, and on your terminal, type the following command: ~> pod install This will download the needed dependency and configure our application accordingly. Now create a simple UI with a button and after configuring your UI button IBAction reference, let's add the following code: @IBAction func connectAnon(_ sender: Any) { Auth.auth().signInAnonymously() { (user, error) in if let anon = user?.isAnonymous { print("i'm connected anonymously here's my id (user?.uid)") } } } How it works... Let's digest the preceding code: We're defining some basic logic variables; we're taking basically a TextView, where we'll append our results and define the Firebase  anonAuth variable. It's of FirebaseAuth type, which is the starting point for any authentication strategy that we might use. Over onCreate, we're initializing our Firebase reference and fixing our content view. We're going to authenticate our user by clicking a button bound with the anonLoginBtn() method. Within it, we're simply calling for the signInAnonymously() method, then if incomplete, we're testing if the authentication task is successful or not, else we're updating our TextEdit with the user information. We're using the updateUI method to simply update our TextField. Pretty simple steps. Now simply build and run your project and test your shiny new features. Implementing password authentication on iOS Email and password authentication is the most common way to authenticate anyone and it can be a major risk point if done wrong. Using Firebase will remove that risk and make you think of nothing but the UX that you will eventually provide to your users. In this recipe, we're going to see how you can do this on iOS. How to do it... Let's suppose you've created your awesome UI with all text fields and buttons and wired up the email and password IBOutlets and the IBAction login button. Let's see the code behind the awesome, quite simple password authentication process: import UIKit import Firebase import FirebaseAuth class EmailLoginViewController: UIViewController { @IBOutlet weak var emailField: UITextField! @IBOutlet weak var passwordField: UITextField! 
override func viewDidLoad() { super.viewDidLoad() } @IBAction func loginEmail(_ sender: Any) { if self.emailField.text</span> == "" || self.passwordField.text == "" { //[*] Prompt an Error let alertController = UIAlertController(title: "Error", message: "Please enter an email and password.", preferredStyle: .alert) let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil) alertController.addAction(defaultAction) self.present(alertController, animated: true, completion: nil) } else { FIRAuth.auth()?.signIn(withEmail: self.emailField.text!, password: self.passwordField.text!) { (user, error) in if error == nil { //[*] TODO: Navigate to Application Home Page. } else { //[*] Alert in case we've an error. let alertController = UIAlertController(title: "Error", message: error?.localizedDescription, preferredStyle: .alert) let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil) alertController.addAction(defaultAction) self.present(alertController, animated: true, completion: nil) } } } } } How it works ... Let's digest the preceding code: We're simply adding some IBOutlets and adding the IBAction login button. Over the loginEmail function, we're doing two things: If the user didn't provide any email/password, we're going to prompt them with an error alert indicating the necessity of having both fields. We're simply calling for the FIRAuth.auth().singIn() function, which in this case takes an Email and a Password. Then we're testing if we have any errors. Then, and only then, we might navigate to the app home screen or do anything else we want. We prompt them again with the Authentication Error message. And as simple as that, we're done. The User object will be transported, as well, so you may do any additional processing to the name, email, and much more. Implementing password authentication on Android To make things easier in terms of Android, we're going to use the awesome Firebase Auth UI. Using the Firebase Auth UI will save a lot of hassle when it comes to building the actual user interface and handling the different intent calls between the application activities. Let's see how we can integrate and use it for our needs. Let's start first by configuring our project and downloading all the necessary dependencies. Head to your build.gradle file and copy/paste the following entry: compile 'com.firebaseui:firebase-ui-auth:3.0.0' Now, simply sync and you will be good to start. How to do it... Now, let's see how we can make the functionality work: Declare the FirebaseAuth reference, plus add another variable that we will need later on: FirebaseAuth auth; private static final int RC_SIGN_IN = 17; Now, inside your onCreate method, add the following code: auth = FirebaseAuth.getInstance(); if(auth.getCurrentUser() != null) { Log.d("Auth", "Logged in successfully"); } else { startActivityForResult( AuthUI.getInstance() .createSignInIntentBuilder() .setAvailableProviders( Arrays.asList(new AuthUI.IdpConfig.Builder( AuthUI.EMAIL_PROVIDER).build())).build(), RC_SIGN_IN);findViewById(R.id.logoutBtn) .setOnClickListener(this); Now, in your activity, implement the View.OnClick listener. 
So your class will look like the following: public class MainActivity extends AppCompatActivity implements View.OnClickListener {} After that, implement the onClick function as shown here: @Override public void onClick(View v) { if(v.getId() == R.id.logoutBtn) { AuthUI.getInstance().signOut(this) .addOnCompleteListener( new OnCompleteListener<Void>() { @Override public void onComplete(@NonNull Task<Void> task) { Log.d("Auth", "Logged out successfully"); // TODO: make custom operation. } }); } } In the end, implement the onActivityResult method as shown in the following code block: @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(requestCode == RC_SIGN_IN) { if(resultCode == RESULT_OK) { //User is in ! Log.d("Auth",auth.getCurrentUser().getEmail()); } else { //User is not authenticated Log.d("Auth", "Not Authenticated"); } } } Now build and run your project. You will have a similar interface to that shown in the following screenshot: Figure 8: Android authentication using email/password:  email picker This interface will be shown in case you're not authenticated and your application will list all the saved accounts on your device. If you click on the NONE OF THE ABOVE button, you will be prompted with the following interface: Figure 9: Android authentication email/password: adding new email When you add your email and click on the NEXT button, the API will go and look for that user with that email in your application's users. If such an email is present, you will be authenticated, but if it's not the case, you will be redirected to the Sign-up activity as shown in the following screenshot: Figure 10: Android authentication: creating a new account, with email/password/name Next, you will add your name and password. And with that, you will create a new account and you will be authenticated. How it works... From the preceding code, it's clear that we didn't create any user interface. The Firebase UI is so powerful, so let's explore what happens: The setAvailableProviders method will take a list of providers—those providers will be different based on your needs, so it can be any email provider, Google, Facebook, and each and every provider that Firebase supports. The main difference is that each and every provider will have each separate configuration and necessary dependencies that you will need to support the functionality. Also, if you've noticed, we're setting up a logout button. We created this button mainly to log out our users and added a click listener to it. The idea here is that when you click on it, the application performs the Sign-out operation. Then you add your custom intent that will vary from a redirect to closing the application. If you noticed, we're implementing the onActivityResult special function and this one will be our main listening point whenever we connect or disconnect from the application. Within it, we can perform different operations from resurrection to displaying toasts, to anything that you can think of. Implementing Google Sign-in authentication Google authentication is the process of logging in/creating an account using nothing but your existing Google account. It's easy, fast, and intuitive and removes a lot of hustle we face, usually when we register any web/mobile application. I'm talking basically about form filling. 
Using Firebase Google Sign-in authentication, we can manage such functionality; plus we have had the user basic metadata such as the display name, picture URL, and more. In this recipe, we're going to see how we can implement Google Sign-in functionality for both Android and iOS. Before doing any coding, it's important to do some basic configuration in our Firebase Project console. Head directly to your Firebase project Console | Authentication | SIGN-IN METHOD | Google and simply activate the switch and follow the instructions there in order to get the client. Please notice that Google Sign-in is automatically configured for iOS, but for Android, we will need to do some custom configuration. Let us first look at getting ready for Android to implement Google Sign-in authentication: Before we start implementing the authentication functionality, we will need to install some dependencies first, so please head to your build.gradle file and paste the following, and then sync your build: compile 'com.google.firebase:firebase-auth:11.4.2' compile 'com.google.android.gms:play-services- auth:11.4.2' The dependency versions are dependable, and that means that whenever you want to install them, you will have to provide the same version for both dependencies. Moving on to getting ready  in iOS for implementation of Google Sign-in authentication: In iOS, we will need to install a couple of dependencies, so please go and edit your Podfile and add the following lines underneath your already present dependencies, if you have any: pod 'Firebase/Auth' pod 'GoogleSignIn' Now, in your terminal, type the following command: ~> pod install This command will install the required dependencies and configure your project accordingly. How to do it... First, let us take a look at how we will implement this recipe in Android: Now, after installing our dependencies, we will need to create the UI for our calls. 
To do that, simply copy and paste the following special button XML code into your layout: <com.google.android.gms.common.SignInButton android:id="@+id/gbtn" android:layout_width="368dp" android:layout_height="wrap_content" android:layout_marginLeft="16dp" android:layout_marginTop="30dp" app:layout_constraintLeft_toLeftOf="parent" app:layout_constraintTop_toTopOf="parent" android:layout_marginRight="16dp" app:layout_constraintRight_toRightOf="parent" /> The result will be this: Figure 11: Google Sign-in button after the declaration After doing that, let's see the code behind it: SignInButton gBtn; FirebaseAuth mAuth; GoogleApiClient mGoogleApiClient; private final static int RC_SIGN_IN = 3; FirebaseAuth.AuthStateListener mAuthListener; @Override protected void onStart() { super.onStart(); mAuth.addAuthStateListener(mAuthListener); } @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mAuth = FirebaseAuth.getInstance(); gBtn = (SignInButton) findViewById(R.id.gbtn); button.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { signIn(); } }); mAuthListener = new FirebaseAuth.AuthStateListener() { @Override public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) { if(firebaseAuth.getCurrentUser() != null) { AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create(); alertDialog.setTitle("User"); alertDialog.setMessage("I have a user loged in"); alertDialog.show(); } } }; mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this, new GoogleApiClient.OnConnectionFailedListener() { @Override public void onConnectionFailed(@NonNull ConnectionResult connectionResult) { Toast.makeText(MainActivity.this, "Something went wrong", Toast.LENGTH_SHORT).show(); } }) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .build(); } GoogleSignInOptions gso = new GoogleSignInOptions.Builder( GoogleSignInOptions.DEFAULT_SIGN_IN) .requestEmail() .build(); private void signIn() { Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent( mGoogleApiClient); startActivityForResult(signInIntent, RC_SIGN_IN); } @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == RC_SIGN_IN) { GoogleSignInResult result = Auth.GoogleSignInApi .getSignInResultFromIntent(data); if (result.isSuccess()) { // Google Sign In was successful, authenticate with Firebase GoogleSignInAccount account = result.getSignInAccount(); firebaseAuthWithGoogle(account); } else { Toast.makeText(MainActivity.this, "Connection Error", Toast.LENGTH_SHORT).show(); } } } private void firebaseAuthWithGoogle( GoogleSignInAccount account) { AuthCredential credential = GoogleAuthProvider.getCredential( account.getIdToken(), null); mAuth.signInWithCredential(credential) .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { if (task.isSuccessful()) { // Sign in success, update UI with the signed-in user's information Log.d("TAG", "signInWithCredential:success"); FirebaseUser user = mAuth.getCurrentUser(); Log.d("TAG", user.getDisplayName()); } else { Log.w("TAG", "signInWithCredential:failure", task.getException()); Toast.makeText(MainActivity.this, "Authentication failed.", Toast.LENGTH_SHORT) .show(); } // ... 
        }
    });
}

Then, simply build and launch your application, click on the authentication button, and you will be greeted with the following screen:

Figure 12: Account picker, after clicking on the Google Sign-in button

Next, simply pick the account you want to connect with, and you will be greeted with an alert, finishing the authentication process.

Now we will take a look at the implementation of our recipe in iOS. Before we do anything, let's import Google Sign-in as follows:

import GoogleSignIn

After that, let's add our Google Sign-in button; to do so, go to your login page ViewController and add the following lines of code:

//Google sign in
let googleBtn = GIDSignInButton()
googleBtn.frame = CGRect(x: 16, y: 50, width: view.frame.width - 32, height: 50)
view.addSubview(googleBtn)
GIDSignIn.sharedInstance().uiDelegate = self

The frame positioning is for my own needs; you can keep it or modify the dimensions to suit your application's needs. Now, after adding the lines above, we will get an error. This is because our ViewController does not yet conform to GIDSignInUIDelegate, so in order to make Xcode happy, let's add it to our ViewController declaration so it looks like the following:

class ViewController: UIViewController, FBSDKLoginButtonDelegate, GIDSignInUIDelegate {}

Now, if you build and run your project, you will get the following:

Figure 13: iOS application after configuring the Google Sign-in button

Now, if you click on the Sign in button, you will get an exception. The reason is that the Sign in button is asking for the clientID, so to fix that, go to your AppDelegate file and add the following import:

import GoogleSignIn

Next, add the following line of code within application(_:didFinishLaunchingWithOptions:):

GIDSignIn.sharedInstance().clientID = FirebaseApp.app()?.options.clientID

If you build and run the application now and then click on the Sign in button, nothing will happen. Why? Because iOS doesn't know where to navigate next. To fix that, go to your GoogleService-Info.plist file, copy the value of REVERSED_CLIENT_ID, and then go to your project configuration. Inside the Info section, scroll down to URL types, add a new URL type, and paste the value inside the URL Schemes field:

Figure 14: Adding the Firebase URL scheme in Xcode, to finish the Google Sign-in behavior

Next, within application(_:open:options:), add the following line:

GIDSignIn.sharedInstance().handle(url,
    sourceApplication: options[UIApplicationOpenURLOptionsKey.sourceApplication] as? String,
    annotation: options[UIApplicationOpenURLOptionsKey.annotation])

This will simply handle the transition back into the app via the URL scheme we have already specified. Next, if you build and run your application and tap on the Sign in button, you will be redirected via a Safari web view controller to the Google Sign-in page, as shown in the following screenshot:

Figure 15: iOS Google account picker after clicking on the Sign-in button

With that, the first half of the authentication process is done, but what will happen when you select your account and authorize the application? Typically, you need to go back to your application with all the needed profile information, don't you? Well, for now, that's not the case, so let's fix that.
Go back to the AppDelegate file and do the following:

Add GIDSignInDelegate to the app delegate declaration
Add the following line inside application(_:didFinishLaunchingWithOptions:):

GIDSignIn.sharedInstance().delegate = self

This will simply let us come back to the application with all the tokens we need to finish the authentication process with Firebase. Next, we need to implement the sign function that belongs to GIDSignInDelegate; this function will be called once we're successfully authenticated:

func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) {
    if let err = error {
        print("Can't connect to Google:", err)
        return
    }
    print("we're using google sign in", user)
}

Now, once you're fully authenticated, you will see the success message in your terminal, and we can integrate our Firebase authentication logic. Complete the following import:

import FirebaseAuth

Next, inside the same sign function, add the following:

guard let authentication = user.authentication else { return }
let credential = GoogleAuthProvider.credential(withIDToken: authentication.idToken,
                                               accessToken: authentication.accessToken)
Auth.auth().signIn(with: credential, completion: { (user, error) in
    if let error = error {
        print("[*] Can't connect to Firebase, with error:", error)
        return
    }
    print("we have a user", user?.displayName)
})

This code takes the token of the successfully signed-in Google user and passes it to the Firebase Authentication logic, which creates a new Firebase user. From there, we can retrieve the basic profile information that Firebase delivers.

How it works...

Let's explain what we did in the Android section:

We activated authentication with a Google account from the Firebase project console.
We installed the required dependencies, from Firebase Auth to Google Play services.
After finishing the setup, we created the branded Google Sign-in button and gave it an ID for easy access.
We created references to the SignInButton and FirebaseAuth instances.

Let's now explain what we just did in the iOS section:

We used GIDSignInButton to create the branded Google Sign-in button and added it to our ViewController.
Inside the AppDelegate, we made a couple of configurations so the button could retrieve the clientID it needs to connect to our application.
For the redirect to work, we used the information stored in GoogleService-Info.plist and registered a URL scheme within our application so we could navigate to the sign-in page and back.
Once everything was set, we were taken to the authorization page, where we authorized the application and chose the account we wanted to connect with.
To get back all the required tokens and account information, we went back to the AppDelegate file and implemented GIDSignInDelegate; through it, we can collect all the account-related tokens and information once we are successfully authenticated.
Within the implemented sign function, we fed our regular Firebase authentication signIn method with all the necessary tokens and information.
When we built and ran the application again and signed in, the account used to authenticate appeared among the authenticated accounts in the Firebase console.

To summarize, we learned how to integrate Firebase Google Sign-in within a native context, on both an iOS and an Android application. If you've enjoyed reading this, do check out 'Firebase Cookbook' for recipes to help you understand features of Firebase and implement them in your existing web or mobile applications.
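As a companion to the iOS walkthrough, the following consolidated AppDelegate sketch gathers the calls used in this recipe in one place. It is only a sketch, written under the assumption that you use the same Firebase and GoogleSignIn pod versions as above; the FirebaseApp.configure() call is the standard Firebase setup step and may already live elsewhere in your project, so treat this as a reference rather than a drop-in file.

import UIKit
import Firebase
import FirebaseAuth
import GoogleSignIn

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate, GIDSignInDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // Configure Firebase, then point Google Sign-In at the Firebase client ID
        FirebaseApp.configure()
        GIDSignIn.sharedInstance().clientID = FirebaseApp.app()?.options.clientID
        GIDSignIn.sharedInstance().delegate = self
        return true
    }

    // Hands the redirect URL back to the Google Sign-In SDK (matches the URL scheme registered earlier)
    func application(_ app: UIApplication, open url: URL,
                     options: [UIApplicationOpenURLOptionsKey: Any] = [:]) -> Bool {
        return GIDSignIn.sharedInstance().handle(url,
            sourceApplication: options[UIApplicationOpenURLOptionsKey.sourceApplication] as? String,
            annotation: options[UIApplicationOpenURLOptionsKey.annotation])
    }

    // Called when Google authentication finishes; exchanges the Google token for a Firebase credential
    func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) {
        if let err = error {
            print("Can't connect to Google:", err)
            return
        }
        guard let authentication = user.authentication else { return }
        let credential = GoogleAuthProvider.credential(withIDToken: authentication.idToken,
                                                       accessToken: authentication.accessToken)
        Auth.auth().signIn(with: credential) { (user, error) in
            if let error = error {
                print("[*] Can't connect to Firebase, with error:", error)
                return
            }
            print("Signed in as", user?.displayName ?? "unknown user")
        }
    }
}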
Using the Firebase Real-Time Database
How to integrate Firebase with NativeScript for cross-platform app development
Build powerful progressive web apps with Firebase

How to optimize MySQL 8 servers and clients

Amey Varangaonkar
28 May 2018
11 min read
Our article focuses on optimization for MySQL 8 database servers and clients. We start with optimizing the server, followed by optimizing MySQL 8 client-side entities. It is most relevant to database administrators who need to ensure performance and scalability across multiple servers. It will also help developers prepare scripts (which includes setting up the database) and users who run MySQL for development and testing to maximize their productivity.

The following excerpt is taken from the book MySQL 8 Administrator's Guide, written by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. In this book, the authors present hands-on techniques for tackling the common and not-so-common issues that come up in the different administration-related tasks in MySQL 8.

Optimizing disk I/O

There are quite a few ways to configure storage devices to devote more, and faster, storage hardware to the database server. A major performance bottleneck is disk seeking (finding the correct place on the disk to read or write content). When the amount of data grows large enough to make caching impossible, the problem with disk seeks becomes apparent. We need at least one disk seek operation to read, and several disk seek operations to write, in large databases where data access is done more or less randomly. We should regulate or minimize the disk seek times using appropriate disks. In order to resolve the disk seek performance issue, we can increase the number of available disk spindles, symlink files to different disks, or stripe disks. The following are the details:

Using symbolic links: When using symbolic links, we can create Unix symbolic links for index and data files. The symlink points from the default location in the data directory to another disk in the case of MyISAM tables. These links may also be striped. This improves the seek and read times. The assumption is that the disk is not used concurrently for other purposes. Symbolic links are not supported for InnoDB tables. However, we can place InnoDB data and log files on different physical disks.

Striping: In striping, we have many disks. We put the first block on the first disk, the second block on the second disk, and so on, with the Nth block placed on the (N % number of disks) disk. If the stripe size is perfectly aligned, the normal data size will be less than the stripe size, which helps to improve performance. Striping is dependent on the stripe size and the operating system. In an ideal case, we would benchmark the application with different stripe sizes. The speed difference while striping depends on the parameters we have used, such as stripe size, and on the number of disks. We have to choose whether we want to optimize for random access or sequential access.

To gain reliability, we may decide to set up striping and mirroring (RAID 0+1). RAID stands for Redundant Array of Independent Drives. This approach needs 2 x N drives to hold N drives of data. With good volume management software, we can manage this setup efficiently. There is another approach as well: depending on how critical the type of data is, we may vary the RAID level. For example, we can store really important data, such as host information and logs, on a RAID 0+1 or RAID N disk, whereas we can store semi-important data on a RAID 0 disk. In the case of RAID, parity bits are used to ensure the integrity of the data stored on each drive.
So, RAID N becomes a problem if we have too many write operations to be performed. The time required to update the parity bits in this case is high. If it is not important to maintain when the file was last accessed, we can mount the file system with the -o noatime option. This option skips the updates on the file system, which reduces the disk seek time. We can also make the file system update asynchronously. Depending upon whether the file system supports it, we can set the -o async option. Using Network File System (NFS) with MySQL While using a Network File System (NFS), varying issues may occur, depending on the operating system and the NFS version. The following are the details: Data inconsistency is one issue with an NFS system. It may occur because of messages received out of order or lost network traffic. We can use TCP with hard and intr mount options to avoid these issues. MySQL data and log files may get locked and become unavailable for use if placed on NFS drives. If multiple instances of MySQL access the same data directory, it may result in locking issues. Improper shut down of MySQL or power outage are other reasons for filesystem locking issues. The latest version of NFS supports advisory and lease-based locking, which helps in addressing the locking issues. Still, it is not recommended to share a data directory among multiple MySQL instances. Maximum file size limitations must be understood to avoid any issues. With NFS 2, only the lower 2 GB of a file is accessible by clients. NFS 3 clients support larger files. The maximum file size depends on the local file system of the NFS server. Optimizing the use of memory In order to improve the performance of database operations, MySQL allocates buffers and caches memory. As a default, the MySQL server starts on a virtual machine (VM) with 512 MB of RAM. We can modify the default configuration for MySQL to run on limited memory systems. The following list describes the ways to optimize MySQL memory: The memory area which holds cached InnoDB data for tables, indexes, and other auxiliary buffers is known as the InnoDB buffer pool. The buffer pool is divided into pages. The pages hold multiple rows. The buffer pool is implemented as a linked list of pages for efficient cache management. Rarely used data is removed from the cache using an algorithm. Buffer pool size is an important factor for system performance. The innodb__buffer_pool_size system variable defines the buffer pool size. InnoDB allocates the entire buffer pool size at server startup. 50 to 75 percent of system memory is recommended for the buffer pool size. With MyISAM, all threads share the key buffer. The key_buffer_size system variable defines the size of the key buffer. The index file is opened once for each MyISAM table opened by the server. For each concurrent thread that accesses the table, the data file is opened once. A table structure, column structures for each column, and a 3 x N sized buffer are allocated for each concurrent thread. The MyISAM storage engine maintains an extra row buffer for internal use. The optimizer estimates the reading of multiple rows by scanning. The storage engine interface enables the optimizer to provide information about the recorded buffer size. The size of the buffer can vary depending on the size of the estimate. In order to take advantage of row pre-fetching, InnoDB uses a variable size buffering capability. It reduces the overhead of latching and B-tree navigation. 
Memory mapping can be enabled for all MyISAM tables by setting the myisam_use_mmap system variable to 1. The size of an in-memory temporary table can be defined by the tmp_table_size system variable. The maximum size of the heap table can be defined using the max_heap_table_size system variable. If the in-memory table becomes too large, MySQL automatically converts the table from in-memory to on-disk. The storage engine for an on-disk temporary table is defined by the internal_tmp_disk_storage_engine system variable. MySQL comes with the MySQL performance schema. It is a feature to monitor MySQL execution at low levels. The performance schema dynamically allocates memory by scaling its memory use to the actual server load, instead of allocating memory upon server startup. The memory, once allocated, is not freed until the server is restarted. Thread specific space is required for each thread that the server uses to manage client connections. The stack size is governed by the thread_stack system variable. The connection buffer is governed by the net_buffer_length system variable. A result buffer is governed by net_buffer_length. The connection buffer and result buffer starts with net_buffer_length bytes, but enlarges up to max_allowed_packets bytes, as needed. All threads share the same base memory. All join clauses are executed in a single pass. Most of the joins can be executed without a temporary table. Temporary tables are memory-based hash tables. Temporary tables that contain BLOB data and tables with large row lengths are stored on disk. A read buffer is allocated for each request, which performs a sequential scan on a table. The size of the read buffer is determined by the read_buffer_size system variable. MySQL closes all tables that are not in use at once when FLUSH TABLES or mysqladmin flush-table commands are executed. It marks all in-use tables to be closed when the current thread execution finishes. This frees in-use memory. FLUSH TABLES returns only after all tables have been closed. It is possible to monitor the MySQL performance schema and sys schema for memory usage. Before we can execute commands for this, we have to enable memory instruments on the MySQL performance schema. It can be done by updating the ENABLED column of the performance schema setup_instruments table. The following is the query to view available memory instruments in MySQL: mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%memory%'; This query will return hundreds of memory instruments. We can narrow it down by specifying a code area. The following is an example to limit results to InnoDB memory instruments: mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%memory/innodb%'; The following is the configuration to enable memory instruments: performance-schema-instrument='memory/%=COUNTED' The following is an example to query memory instrument data in the memory_summary_global_by_event_name table in the performance schema: mysql> SELECT * FROM performance_schema.memory_summary_global_by_event_name WHERE EVENT_NAME LIKE 'memory/innodb/buf_buf_pool'G; EVENT_NAME: memory/innodb/buf_buf_pool COUNT_ALLOC: 1 COUNT_FREE: 0 SUM_NUMBER_OF_BYTES_ALLOC: 137428992 SUM_NUMBER_OF_BYTES_FREE: 0 LOW_COUNT_USED: 0 CURRENT_COUNT_USED: 1 HIGH_COUNT_USED: 1 LOW_NUMBER_OF_BYTES_USED: 0 CURRENT_NUMBER_OF_BYTES_USED: 137428992 HIGH_NUMBER_OF_BYTES_USED: 137428992 It summarizes data by EVENT_NAME. 
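Before moving on to the sys schema aggregation example below, it can help to see the memory-related system variables from this section gathered in one place. The following my.cnf fragment is only a sketch: the values are illustrative placeholders for a dedicated server with around 8 GB of RAM, not recommendations, and each one should be tuned against your own workload.

# Illustrative memory-related settings (placeholders only -- tune for your server)
[mysqld]
# InnoDB buffer pool: 50 to 75 percent of system memory is the usual guideline
innodb_buffer_pool_size = 6G

# MyISAM key buffer, shared by all threads (only relevant if you use MyISAM tables)
key_buffer_size = 256M

# In-memory temporary tables spill to disk once they outgrow these limits
tmp_table_size = 64M
max_heap_table_size = 64M

# Per-connection buffers
read_buffer_size = 256K
net_buffer_length = 16K
thread_stack = 256K

# Allow memory mapping for MyISAM tables
myisam_use_mmap = 1

# Enable the performance schema memory instruments discussed above
performance-schema-instrument='memory/%=COUNTED'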
The following is an example of querying the sys schema to aggregate currently allocated memory by code area: mysql> SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area, sys.format_bytes(SUM(current_alloc)) AS current_alloc FROM sys.x$memory_global_by_current_bytes GROUP BY SUBSTRING_INDEX(event_name,'/',2) ORDER BY SUM(current_alloc) DESC; Performance benchmarking We must consider the following factors when measuring performance: While measuring the speed of a single operation or a set of operations, it is important to simulate a scenario in the case of a heavy database workload for benchmarking In different environments, the test results may be different Depending on the workload, certain MySQL features may not help with performance MySQL 8 supports measuring the performance of individual statements. If we want to measure the speed of any SQL expression or function, the BENCHMARK() function is used. The following is the syntax for the function: BENCHMARK(loop_count, expression) The output of the BENCHMARK function is always zero. The speed can be measured by the line printed by MySQL in the output. The following is an example: mysql> select benchmark(1000000, 1+1); From the preceding example , we can find that the time taken to calculate 1+1 for 1000000 times is 0.15 seconds. Other aspects involved in optimizing MySQL servers and clients include optimizing locking operations, examining thread information and more. To know about these techniques, you may check out the book MySQL 8 Administrator’s Guide. SQL Server recovery models to effectively backup and restore your database Get SQL Server user management right 4 Encryption options for your SQL Server
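Returning to the BENCHMARK() function above, the same pattern works for any expression or function you want to time. The following call is purely illustrative and times one million MD5() evaluations; the reported time will vary from machine to machine.

mysql> SELECT BENCHMARK(1000000, MD5('optimizing MySQL 8'));

As before, the function itself returns 0; the interesting part is the elapsed time that MySQL prints alongside the result.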

Testing Single Page Applications (SPAs) using Vue.js developer tools

Pravin Dhandre
25 May 2018
8 min read
Testing, especially for big applications, is paramount – especially when deploying your application to a development environment. Whether you choose unit testing or browser automation, there are a host of articles and books available on the subject. In this tutorial, we have covered the usage of Vue developer tools to test Single Page Applications. We will also touch upon other alternative tools like Nightwatch.js, Selenium, and TestCafe for testing. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.  Using the Vue.js developer tools The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools. For example, in Chrome, they appear after the Audits tab. The Vue developer tools will only work when you are using Vue in development mode. By default, the un-minified version of Vue has the development mode enabled. However, if you are using the production version of the code, the development tools can be enabled by setting the devtools variable to true in your code: Vue.config.devtools = true We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools. Inspecting Vue components data and computed values The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on that particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and we can see the <DropboxViewer> component. Clicking this will reveal all of the data properties of the component – along with any computed properties. This lets us validate whether the structure is constructed correctly, along with the computed path property: Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools for inspecting your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements. Viewing Vuex mutations and time-travel Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This element allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point. Loading the Dropbox app, several structure mutations immediately populate within the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one will reveal the Vuex store state – along with a mutation containing the payload sent. The state display is after the payload has been sent and the mutation committed. To preview what the state looked like before that mutation, select the preceding option: On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser: Commit this mutation: This allows you to commit all the data up to that point. 
This will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of.

Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. For example, when adding a product to the basket in our shop app, a mutation occurs. Using this would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page.

Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point.

The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON-encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps.

Previewing event data

The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query. The left-hand panel again lists the name of the event and the time it occurred. The right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered.

The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit the mutations they need to and emit the events they have to.

Testing your Single Page Application

The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension and contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS: styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across.

Command-line unit testing

Along with component files, the Vue CLI makes it easier to integrate with command-line testing tools such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different kinds of tests, from checking whether content exists to clicking buttons to test functionality. An example of a TestCafe test checking whether the filtering component in our first app contains the word Filter would be:

test('The filtering contains the word "Filter"', async testController => {
    const filterSelector = Selector('body > #app > form > label:nth-child(1)');
    await testController.expect(filterSelector.innerText).eql('Filter');
});

This test would then equate to true or false.
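For context, a complete TestCafe test file built around this snippet might look like the following. The page URL is an assumption about how the example app is served locally, and the selector simply mirrors the one above, so adjust both to your own setup.

// filter-test.js
import { Selector } from 'testcafe';

// Assumes the Vue app is available on a local dev server
fixture('Filtering component')
    .page('http://localhost:8080');

test('The filtering contains the word "Filter"', async testController => {
    const filterSelector = Selector('body > #app > form > label:nth-child(1)');
    await testController.expect(filterSelector.innerText).eql('Filter');
});

Running it with npx testcafe chrome filter-test.js launches Chrome, loads the page, and reports a pass or fail for the assertion.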
Unit tests are generally written in conjunction with components themselves, allowing components to be reused and tested in isolation. This allows you to check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome Vue GitHub repository (https://github.com/vuejs/awesome-vue#test). Browser automation The alternative to using command-line unit testing is to automate your browser with a testing suite. This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop and interacting with the filtering component or product list ordering and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – working for any website regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry: module.exports = { 'Demo test Google' : function (client) { client .url('http://www.google.com') .waitForElementVisible('body', 1000) .assert.title('Google') .assert.visible('input[type=text]') .setValue('input[type=text]', 'rembrandt van rijn') .waitForElementVisible('button[name=btnG]', 1000) .click('button[name=btnG]') .pause(1000) .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia') .end(); } }; An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands. We covered usage of Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example and get complete knowledge resource on the process of building single-page applications with Vue.js. Building your first Vue.js 2 Web application 5 web development tools will matter in 2018
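Coming back to the command-line unit testing options mentioned above (Jest, Mocha, Chai), a minimal component-level unit test with Jest and vue-test-utils could be sketched as follows. The component name, path, and markup are hypothetical, so treat this as a pattern rather than working code for the shop SPA.

// tests/unit/filtering.spec.js -- hypothetical component path and markup
import { shallowMount } from '@vue/test-utils'
import Filtering from '@/components/Filtering.vue'

describe('Filtering component', () => {
  it('renders a label containing the word "Filter"', () => {
    // Mount the component in isolation, without its child components
    const wrapper = shallowMount(Filtering)
    expect(wrapper.find('label').text()).toContain('Filter')
  })
})

Because the component is mounted in isolation, the test stays fast and is unaffected by the rest of the application, which is exactly the benefit of pairing unit tests with component files.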

How to Scaffold a New module in Odoo 11

Sugandha Lahoti
25 May 2018
2 min read
The latest version of Odoo ERP, Odoo 11, brings a plethora of features targeting business application development. The market for Odoo is growing enormously, and if you have thought about developing in Odoo, now is the best time to start. This hands-on video course, Odoo 11 Development Essentials, by Riste Kabranov, will help you get started with Odoo to build powerful applications.

What is Scaffolding?

With scaffolding, you can automatically create a skeleton structure to simplify the bootstrapping of new modules in Odoo. Since it's an automatic process, you don't need to spend effort setting up basic structures or looking up the starting requirements. Odoo has a scaffold command that creates the skeleton for a new module based on a template. By default, the new module is created in the current working directory, but we can pass a specific directory in which to create the module as an additional parameter.

A step-by-step guide to scaffolding a new module in Odoo 11:

Step 1
Navigate to /opt/odoo/odoo and create a folder named custom_addons.

Step 2
Scaffold a new module into custom_addons. For this:
Locate odoo-bin
Use ./odoo-bin scaffold module_name folder_name to scaffold a new empty module
Check that the new module is there and contains all the files needed

Check out the video for a more detailed walkthrough! This video tutorial has been taken from Odoo 11 Development Essentials. To learn how to build and customize business applications with Odoo, buy the full video course.

ERP tool in focus: Odoo 11
Building Your First Odoo Application
Top 5 free Business Intelligence tools
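To make the two scaffolding steps above concrete, a terminal session might look roughly like the following. The module name my_library is only an example, and the file listing at the end is illustrative; the exact skeleton can vary slightly between Odoo 11 builds.

cd /opt/odoo/odoo
mkdir -p custom_addons

# Scaffold an empty module called "my_library" into custom_addons
./odoo-bin scaffold my_library custom_addons

# Inspect the generated skeleton (listing is illustrative -- yours may differ)
find custom_addons/my_library -type f
# custom_addons/my_library/__init__.py
# custom_addons/my_library/__manifest__.py
# custom_addons/my_library/controllers/__init__.py
# custom_addons/my_library/controllers/controllers.py
# custom_addons/my_library/models/__init__.py
# custom_addons/my_library/models/models.py
# custom_addons/my_library/views/views.xml
# custom_addons/my_library/views/templates.xml
# custom_addons/my_library/security/ir.model.access.csv
# custom_addons/my_library/demo/demo.xml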

How to create and configure an Azure Virtual Machine

Gebin George
25 May 2018
13 min read
Creating virtual machines on Azure gives you on-demand, high-scale, secure, virtualized infrastructure using Windows Server. Virtual machine helps you deploy and scale applications easily. In this article, we will learn how to run an Azure Virtual Machine. This tutorial is an excerpt from the book, Hands-On Networking with Azure, written by Mohamed Waly. This book will help you efficiently monitor, diagnose, and troubleshoot Azure Networking. Creating an Azure VM is a very straightforward process – all you have to do is follow the given steps: Navigate to the Azure portal and search for Virtual Machines, as shown in the following screenshot: Figure 3.1: Searching for Virtual Machines Once the VM blade is opened, you can click on +Add to create a new VM, as shown in the following screenshot: Figure 3.2: Virtual Machines blade Once you have clicked on +Add, a new blade will pop up where you have to search for and select the desired OS for the VM, as shown in the following screenshot: Figure 3.3: Searching for Windows Server 2016 OS for the VM Once the OS is selected, you need to select the deployment model, whether that be Resource Manager or Classic, as shown in the following screenshot: Figure 3.4: Selecting the deployment model Once the deployment model is selected, a new blade will pop up where you have to specify the following: Name: Specify the name of the VM. VM disk type: Specify whether the disk type will be SSD or HDD. Consider that SSD will offer consistent, low-latency performance, but will incur more charges. Note that this option is not available for the Classic model in this blade, but is available in the Configure optional features blade. User name: Specify the username that will be used to log on the VM. Password: Specify the password, which must be between 12 and 123 characters long and must contain three of the following: one lowercase character, one uppercase character, one number, and one special character that is not or -. Subscription: This specifies the subscription that will be charged for the VM usage. Resource group: This specifies the resource group within which the VM will exist. Location: Specify the location in which the VM will be created. It is recommended that you select the nearest location to you. Save money: Here, you specify whether you own Windows Server Licenses with active Software Assurance (SA). If you do, Azure Hybrid Benefit is recommended to save compute costs. For more information about Azure Hybrid Benefit, you can check this page. Figure 3.5: Configure the VM basic settings Once you have clicked on OK, a new blade will pop up where you have to specify the VM size so that the VMs series can select the one that will fulfil your needs, as shown in the following screenshot: Figure 3.6: Select the VM size Once the VM size has been specified, you need to specify the following settings: Availability set: This option provides High availability for the VM by setting the VMs within the same application and availability set. Here, the VMs will be in different fault and update domains, granting the VMs high availability (up to 99.95% of Azure SLA). Use managed disks: Enable this feature to have Azure automatically manage the availability of disks to provide data redundancy and fault tolerance without creating and managing storage accounts on your own. This setting is not available in the Classic model. Virtual network: Specify the virtual network to which you want to assign the VM. 
Subnet: Select the subnet within the virtual network that you specified earlier to assign the VM to. Public IP address: Either select an existing public IP address or create a new one. Network security group (firewall): Select the NSG you want to assign to the VM NIC. This is called endpoints in the Classic model. Extensions: You can add more features to the VM using extensions, such as configuration management, antivirus protection, and so on. Auto-shutdown: Specify whether you want to shut down your VM daily or not; if you do, you can set a schedule. Considering that this option will help you saving compute cost especially for dev and test scenarios. This is not available in the Classic model. Notification before shutdown: Check this if you enabled Auto-shutdown and want to subscribe for notifications before the VM shuts down. This is not available in the Classic model. Boot diagnostics: This captures serial console output and screenshots of the VM running on a host to help diagnose start up issues. Guest OS diagnostics: This obtains metrics for the VM every minute; you can use these metrics to create alerts and stay informed of your applications. Diagnostics storage account: This is where metrics are written, so you can analyze them with your own tools. Figure 3.7: Specify more settings for the VM Enabling Boot diagnostics and Guest diagnostics will incur more charges since the diagnostics will need a dedicated storage account to store their data. Finally, once you are done with its settings, Azure will validate those you have specified and summarize them, as shown in the following screenshot: Figure 3.8: VM Settings Summary Once clicked on, Create the VM will start the creation process, and within minutes the VM will be created. Once the VM is created, you can navigate to the Virtual Machines blade to open the VM that has been created, as shown in the following screenshot: Figure 3.9: The created VM overview To connect to the VM, click on Connect, where a pre-configured RDP file with the required VM information will be downloaded. Open the RDP file. You will be asked to enter the username and password you specified for the VM during its configuration, as shown in the following screenshot: Figure 3.10: Entering the VM credentials Voila! You should now be connected to the VM. Azure VMs networking There are many network configurations that can be done for the VM. You can add additional NICs, change the private IP address, set a private or public IP address to be either static or dynamic, and you can change the inbound and outbound security rules. Adding inbound and outbound rules Adding inbound and outbound security rules to the VM NIC is a very simple process; all you need to do is follow these steps: Navigate to the desired VM. Scroll down to Networking, under SETTINGS, as shown in the following screenshot: Figure 3.11: VM networking settings To add inbound and outbound security rules, you have to click on either Add inbound or Add outbound. Once clicked on, a new blade will pop up where you have to specify settings using the following fields: Service: The service specifies the destination protocol and port range for this rule. Here, you can choose a predefined service, such as RDP or SSH, or provide a custom port range. Port ranges: Here, you need to specify a single port, a port range, or a comma-separated list of single ports or port ranges. Priority: Here, you enter the desired priority value. Name: Specify a name for the rule here. 
Description: Write a description for the rule that relates to it here. Figure 3.12: Adding an inbound rule Once you have clicked OK, the rule will be applied. Note that the same process applies when adding an outbound rule. Adding an additional NIC to the VM Adding an additional NIC starts from the same blade as adding inbound and outbound rules. To add an additional NIC, you have to follow the given steps: Before adding an additional NIC to the VM, you need to make sure that the VM is in a Stopped (Deallocated) status. Navigate to Networking on the desired VM. Click on Attach network interface, and a new blade will pop up. Here, you have to either create a network interface or select an existing one. If you are selecting an existing interface, simply click on OK and you are done. If you are creating a new interface, click on Create network interface, as shown in the following screenshot: Figure 3.13: Attaching network interface A new blade will pop up where you have to specify the following: Name: The name of the new NIC. Virtual network: This field will be grayed out because you cannot attach a VM's NIC to different virtual networks. Subnet: Select the desired subnet within the virtual network. Private IP address assignment: Specify whether you want to allocate this IP dynamically or statically. Network security group: Specify an NSG to be assigned to this NIC. Private IP address (IPv6): If you want to assign an IPv6 to this NIC, check this setting. Subscription: This field will be grayed out because you cannot have a VM's NIC in a different subscription. Resource group: Specify the resource group to which the NIC will exist. Location: This field will be grayed out because you cannot have VM NICs in different locations. Figure 3.14: Specify the NIC settings Once you are done, click Create. Once the network interface is created, you will return to the previous blade. Here, you need to specify the NIC you just created and click on OK, as shown in the following screenshot: Figure 3.15: Attaching the NIC Configuring the NICs The Network Interface Cards (NICs) include some configuration that you might be interested in. They are as follows: To navigate to the desired NIC, you can search for the network interfaces blade, as shown in the following screenshot: Figure 3.16: Searching for network interfaces blade Then, the blade will pop up, from which you can select the desired NIC, as shown in the following screenshot: Figure 3.17: Select the desired NIC You can also navigate back to the VM via | Networking | and then click on the desired NIC, as shown in the following screenshot: Figure 3.18: The VM NIC To configure the NIC, you need to follow the given steps: Once the NIC blade is opened, navigate to IP configurations, as shown in the following screenshot: Figure 3.19: NIC blade overview To enable IP forwarding, click on Enabled and then click Save. Enabling this feature will cause the NIC to receive traffic that is not destined to its own IP address. Traffic will be sent with a different source IP. To add another IP to the NIC, click on Add, and a new blade will pop up, for which you have to specify the following: Name: The name of the IP. Type: This field will be grayed out because a primary IP already exists. Therefore, this one will be secondary. Allocation: Specify whether the allocation method is static or dynamic. IP address: Enter the static IP address that belongs to the same subnet that the NIC belongs to. 
If you have selected dynamic allocation, you cannot enter the IP address statically. Public IP address: Specify whether or not you need a public IP address for this IP configuration. If you do, you will be asked to configure the required settings. Figure 3.20: Configure the IP configuration settings Click on Configure required settings for the public IP address and a new blade will pop up from which you can select an existing public IP address or create a new one, as shown in the following screenshot: Figure 3.21: Create a new public IP address Click on OK and you will return to the blade, as shown in Figure 3.20, with the following warning: Figure 3.22: Warning for adding a new IP address In this case, you need to plan for the addition of a new IP address to ensure that the time the VM is restarted is not during working hours. Azure VNets considerations for Azure VMs Building VMs in Azure is a common task, but to do this task well, and to make it operate properly, you need to understand the considerations of Azure VNets for Azure VMs. These considerations are as follows: Azure VNets enable you to bring your own IPv4/IPv6 addresses and assign them to Azure VMs, statically or dynamically You do not have access to the role that acts as DHCP or provides IP addresses; you can only control the ranges you want to use in the form of address ranges and subnets Installing a DHCP role on one of the Azure VMs is currently unsupported; this is because Azure does not use traditional Layer-2 or Layer-3 topology, and instead uses Layer-3 traffic with tunneling to emulate a Layer-2 LAN Private IP addresses can be used for internal communication; external communication can be done via public IP addresses You can assign multiple private and public IP addresses to a single VM You can assign multiple NICs to a single VM By default, all the VMs within the same virtual network can communicate with each other, unless otherwise specified by an NSG on a subnet within this virtual network The network security group (NSG) can sometimes cause an overhead; without this overhead, however, all VMs within the same subnet would communicate with each other By default, an inbound security rule is created for remote desktops for Windows-based VMs, and SSH for Linux-based VMs The inbound security rules are first applied on the NSG of the subnet and then the VM NIC NSG – for example, if the subnet's NSG allows HTTP traffic, it will pass through it; however, it may not reach its destination if the VM NIC NSG does not allow it The outbound security rules are applied for the VM NIC NSG first, and then applied on the subnet NSG Multiple NICs assigned to a VM can exist in different subnets Azure VMs with multiple NICs in the same availability set do not have to have the same number of NICs, but the VMs must have at least two NICs When you attach an NIC to a VM, you need to ensure that they exist in the same location and subscription The NIC and the VNet must exist in the same subscription and location The NIC's MAC address cannot be changed until the VM to which the NIC is assigned is deleted Once the VM is created, you cannot change the VNet to which it is assigned; however, you can change the subnet to which the VM is assigned You cannot attach an existing NIC to a VM during its creation, but you can add an existing NIC as an additional NIC By default, a dynamic public IP address is assigned to the VM during creation, but this address will change if the VM is stopped or deleted; to ensure it will not change, you need to ensure 
its IP address is static In a multi-NIC VM, the NSG that is applied to one NIC does not affect the others If you found this post useful, do check out the book Hands-On Networking with Azure, to design and implement Azure Networking for Azure VMs. Read More Introducing Azure Sphere – A secure way of running your Internet of Things devices Learn Azure Serverless computing for free: Download e-book
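Returning to the VM walkthrough above: everything done through the portal can also be scripted. The following Azure CLI sketch shows roughly equivalent steps (creating a Windows Server 2016 VM, adding an inbound NSG rule, and attaching a second NIC). All resource names are placeholders, the flags assume Azure CLI 2.x, and the network resource names assume the defaults that az vm create generates, so verify them in your own subscription before running anything.

# Create a resource group and a Windows Server 2016 VM (names are placeholders)
az group create --name demo-rg --location westeurope

az vm create \
  --resource-group demo-rg \
  --name demo-vm \
  --image Win2016Datacenter \
  --admin-username azureuser \
  --admin-password '<a 12-123 character password>'

# Add an inbound security rule (HTTP) to the NSG created alongside the VM
az network nsg rule create \
  --resource-group demo-rg \
  --nsg-name demo-vmNSG \
  --name allow-http \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80

# Attach an additional NIC: the VM must be deallocated first,
# and the VM size must support multiple NICs
az network nic create --resource-group demo-rg --name demo-nic2 \
  --vnet-name demo-vmVNET --subnet demo-vmSubnet

az vm deallocate --resource-group demo-rg --name demo-vm
az vm nic add --resource-group demo-rg --vm-name demo-vm --nics demo-nic2
az vm start --resource-group demo-rg --name demo-vm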

Getting started with Digital forensics using Autopsy

Savia Lobo
24 May 2018
10 min read
Digital forensics involves the preservation, acquisition, documentation, analysis, and interpretation of evidence from various storage media types. It is not only limited to laptops, desktops, tablets, and mobile devices but also extends to data in transit which is transmitted across public or private networks. In this tutorial, we will cover how one can carry out digital forensics with Autopsy. Autopsy is a digital forensics platform and graphical interface to the sleuth kit and other digital forensics tools. This article is an excerpt taken from the book, 'Digital Forensics with Kali Linux', written by Shiva V.N. Parasram. Let's proceed with the analysis using the Autopsy browser by first getting acquainted with the different ways to start Autopsy. Starting Autopsy Autopsy can be started in two ways. The first uses the Applications menu by clicking on Applications | 11 - Forensics | autopsy: Alternatively, we can click on the Show applications icon (last item in the side menu) and type autopsy into the search bar at the top-middle of the screen and then click on the autopsy icon: Once the autopsy icon is clicked, a new terminal is opened showing the program information along with connection details for opening The Autopsy Forensic Browser. In the following screenshot, we can see that the version number is listed as 2.24 with the path to the Evidence Locker folder as /var/lib/autopsy: To open the Autopsy browser, position the mouse over the link in the terminal, then right-click and choose Open Link, as seen in the following screenshot: Creating a new case To create a new case, follow the given steps: When the Autopsy Forensic Browser opens, investigators are presented with three options. Click on NEW CASE: Enter details for the Case Name, Description, and Investigator Names. For the Case Name, I've entered SP-8-dftt, as it closely matches the image name (8-jpeg-search.dd), which we will be using for this investigation. Once all information is entered, click NEW CASE: Several investigator name fields are available, as there may be instances where several investigators may be working together. The locations of the Case directory and Configuration file are displayed and shown as created.  It's important to take note of the case directory location, as seen in the screenshot: Case directory (/var/lib/autopsy/SP-8-dftt/) created. Click ADD HOST to continue: Enter the details for the Host Name (name of the computer being investigated) and the Description of the host. Optional settings: Time zone: Defaults to local settings, if not specified Timeskew Adjustment: Adds a value in seconds to compensate for time differences Path of Alert Hash Database: Specifies the path of a created database of known bad hashes Path of Ignore Hash Database: Specifies the path of a created database of known good hashes similar to the NIST NSRL: Click on the ADD HOST button to continue. Once the host is added and directories are created, we add the forensic image we want to analyze by clicking the ADD IMAGE button: Click on the ADD IMAGE FILE button to add the image file: To import the image for analysis, the full path must be specified. On my machine, I've saved the image file (8-jpeg-search.dd) to the Desktop folder. As such, the location of the file would be /root/Desktop/ 8-jpeg-search.dd. For the Import Method, we choose Symlink. This way the image file can be imported from its current location (Desktop) to the Evidence Locker without the risks associated with moving or copying the image file. 
If you are presented with the following error message, ensure that the specified image location is correct and that the forward slash (/) is used: Upon clicking Next, the Image File Details are displayed. To verify the integrity of the file, select the radio button for Calculate the hash value for this image, and select the checkbox next to Verify hash after importing? The File System Details section also shows that the image is of a ntfs partition. Click on the ADD button to continue: After clicking the ADD button in the previous screenshot, Autopsy calculates the MD5 hash and links the image into the evidence locker. Press OK to continue: At this point, we're just about ready to analyze the image file. If there are multiple cases listed in the gallery area from any previous investigations you may have worked on, be sure to choose the 8-jpeg-search.dd file and case: Before proceeding, we can click on the IMAGE DETAILS option. This screen gives detail such as the image name, volume ID, file format, file system, and also allows for the extraction of ASCII, Unicode, and unallocated data to enhance and provide faster keyword searches. Click on the back button in the browser to return to the previous menu and continue with the analysis: Before clicking on the ANALYZE button to start our investigation and analysis, we can also verify the integrity of the image by creating an MD5 hash, by clicking on the IMAGE INTEGRITY button: Several other options exist such as FILE ACTIVITY TIMELINES, HASH DATABASES, and so on. We can return to these at any point in the investigation. After clicking on the IMAGE INTEGRITY button, the image name and hash are displayed. Click on the VALIDATE button to validate the MD5 hash: The validation results are displayed in the lower-left corner of the Autopsy browser window: We can see that our validation was successful, with matching MD5 hashes displayed in the results. Click on the CLOSE button to continue. To begin our analysis, we click on the ANALYZE button: Analysis using Autopsy Now that we've created our case, added host information with appropriate directories, and added our acquired image, we get to the analysis stage. After clicking on the ANALYZE button (see the previous screenshot), we're presented with several options in the form of tabs, with which to begin our investigation: Let's look at the details of the image by clicking on the IMAGE DETAILS tab. In the following snippet, we can see the Volume Serial Number and the operating system (Version) listed as Windows XP: Next, we click on the FILE ANALYSIS tab. This mode opens into File Browsing Mode, which allows the examination of directories and files within the image. Directories within the image are listed by default in the main view area: In File Browsing Mode, directories are listed with the Current Directory specified as C:/. For each directory and file, there are fields showing when the item was WRITTEN, ACCESSED, CHANGED, and CREATED, along with its size and META data: WRITTEN: The date and time the file was last written to ACCESSED: The date and time the file was last accessed (only the date is accurate) CHANGED: The date and time the descriptive data of the file was modified CREATED: The data and time the file was created META: Metadata describing the file and information about the file: For integrity purposes, MD5 hashes of all files can be made by clicking on the GENERATE MD5 LIST OF FILES button. 
Investigators can also make notes about files, times, anomalies, and so on, by clicking on the ADD NOTE button: The left pane contains four main features that we will be using: Directory Seek: Allows for the searching of directories File Name Search: Allows for the searching of files by Perl expressions or filenames ALL DELETED FILES: Searches the image for deleted files EXPAND DIRECTORIES: Expands all directories for easier viewing of contents By clicking on EXPAND DIRECTORIES, all contents are easily viewable and accessible within the left pane and main window. The + next to a directory indicates that it can be further expanded to view subdirectories (++) and their contents: To view deleted files, we click on the ALL DELETED FILES button in the left pane. Deleted files are marked in red and also adhere to the same format of WRITTEN, ACCESSED, CHANGED, and CREATED times. From the following screenshot, we can see that the image contains two deleted files: We can also view more information about this file by clicking on its META entry. By viewing the metadata entries of a file (last column to the right), we can also view the hexadecimal entries for the file, which may give the true file extensions, even if the extension was changed. In the preceding screenshot, the second deleted file (file7.hmm) has a peculiar file extension of .hmm. Click on the META entry (31-128-3) to view the metadata: Under the Attributes section, click on the first cluster labelled 1066 to view header information of the file: We can see that the first entry is .JFIF, which is an abbreviation for JPEG File Interchange Format. This means that the file7.hmm file is an image file but had its extension changed to .hmm. Sorting files Inspecting the metadata of each file may not be practical with large evidence files. For such an instance, the FILE TYPE feature can be used. This feature allows for the examination of existing (allocated), deleted (unallocated), and hidden files. Click on the FILE TYPE tab to continue: Click Sort files into categories by type (leave the default-checked options as they are) and then click OK to begin the sorting process: Once sorting is complete, a results summary is displayed. In the following snippet, we can see that there are five Extension Mismatches: To view the sorted files, we must manually browse to the location of the output folder, as Autopsy 2.4 does not support viewing of sorted files. To reveal this location, click on View Sorted Files in the left pane: The output folder locations will vary depending on the information specified by the user when first creating the case, but can usually be found at /var/lib/autopsy/<case name>/<host name>/output/sorter-vol#/index.html. Once the index.html file has been opened, click on the Extension Mismatch link: The five listed files with mismatched extensions should be further examined by viewing metadata content, with notes added by the investigator. Reopening cases in Autopsy Cases are usually ongoing and can easily be restarted by starting Autopsy and clicking on OPEN CASE: In the CASE GALLERY, be sure to choose the correct case name and, from there, continue your examination: To recap, we looked at forensics using the Autopsy Forensic Browser with The Sleuth Kit. Compared to individual tools, Autopsy has case management features and supports various types of file analysis, searching, and sorting of allocated, unallocated, and hidden files. Autopsy can also perform hashing on a file and directory levels to maintain evidence integrity. 
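As a quick cross-check outside the Autopsy browser, standard command-line tools can confirm an extension mismatch such as the file7.hmm example above. This is a complementary technique rather than part of the Autopsy workflow, and the file path and sample output shown here are illustrative.

# Identify the real type from the file's magic bytes
file file7.hmm
# file7.hmm: JPEG image data, JFIF standard 1.01   (illustrative output)

# Inspect the header manually: a JPEG/JFIF file starts with FF D8 FF E0 .. 'JFIF'
xxd file7.hmm | head -n 2

# Hash the file to preserve integrity before any further handling
md5sum file7.hmm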
If you enjoyed reading this article, do check out, 'Digital Forensics with Kali Linux' to take your forensic abilities and investigations to a professional level, catering to all aspects of a digital forensic investigation from hashing to reporting. What is Digital Forensics? IoT Forensics: Security in an always connected world where things talk Working with Forensic Evidence Container Recipes

What matters on an engineering resume? Hacker Rank report says skills, not certifications

Richard Gall
24 May 2018
6 min read
Putting together an engineering resume can be a real headache. What should you include? How can you best communicate your experience and skills? Software engineers are constantly under pressure to deliver new projects and fix problems while learning new skills. Documenting the complexity of developer life in a straightforward and marketable manner is a challenge, to say the least. Luckily, hiring managers and tech recruiters today recognize just how difficult communicating skill and competency in an engineering resume can be. A report by Hacker Rank revealed that many of the things that feature on a resume aren't that highly valued by recruiters and hiring managers. Skills, however, do remain top of the agenda: the question, really, is how we demonstrate and communicate those skills.

The quality of your previous experience matters on an engineering resume

Hacker Rank found that hiring managers and tech recruiters value previous experience over everything else. 77% of survey respondents said previous experience was one of the 3 most important qualifications before a formal interview. In second place was years of experience, with 46%. The difference between the two is subtle but important, and it offers a useful takeaway for anyone writing an engineering resume: the quality of your experience matters more than the quantity.

You need to make sure you communicate the details of your employment experience. It sounds obvious, but it's worth stating: applying for an engineering job isn't just a competition based on who has the most experience. You should explain the nature of the projects you worked on. The skills you used are essential, but being clear about how the project supported wider strategic or tactical goals is also important. This demonstrates not only your skills but also your contextual awareness; it suggests to a hiring manager or recruiter that you not only have the competence, but that you are also a team player with commercial awareness.

Certifications aren't that important on your resume

One of the most interesting insights from the Hacker Rank report was that both hiring managers and recruiters don't really care about certifications any more. Less than 16% listed them as one of the 3 most important things they look at during the recruitment process. Does this mean, then, that the age of the certification is well and truly over? At this stage, it's hard to tell, but it does point to a wider cultural change that probably has a lot to do with open source. Because change is built into the reality of open source software, certifications are never going to be able to keep up with what's new and important. The things you learned to pass one year will likely be out of date the next.

It probably also says something about the nature of technical roles today. Years ago, engineers would start a job knowing what tools they were going to be using. The toolchains and tech stacks would be relatively stable and consistent. In this context, certification was like a license, proving you understood the various components of a given tool or suite of tools. Today, it's more important for engineers to prove that they are both adaptable and capable of solving a range of different problems. With that in mind, it's essential to demonstrate your flexibility on your engineering resume. Make it clear that you're able to learn new things quickly, and that you can adapt your skill set to the problems you need to solve.

You don't need to look good on paper to get the job... but it's going to help
Hacker Rank's research also revealed that 75% of recruiters and hiring managers have hired people they initially thought didn't look good on paper. But that doesn't necessarily mean you should stop working on your resume. If anything, it shows that if you get your resume right, you could really catch someone's attention. You need to consider everything in your resume. Traditional resumes have a pretty clear structure, whatever job you're applying for, but if Hacker Rank's research tells us anything, it's that an engineering resume requires a slightly different approach.

Personal projects are more important than your portfolio on an engineering resume

A further insight from Hacker Rank's report suggests one way you might adopt a different approach to your resume. Responding to the same question as the one we looked at above, 37% said personal projects were one of the 3 most important factors in determining whether to invite a candidate to interview. By contrast, only 22% said portfolio. This seems strange: surely a portfolio offers a deeper insight into someone's professional experience, while personal projects are more like hobbies?

In fact, personal projects tell you much more about a candidate than a portfolio. A portfolio is largely determined by the work you have been doing, and it's not always easy to communicate value in a portfolio. Equally, if you've been badly managed, or faced a lack of support, your portfolio might not actually be a good reflection of how good you really are. Personal projects, on the other hand, give an insight into how a person thinks; they show recruiters what makes an engineer tick. In the workplace, your scope for creativity and problem solving might well be limited. With personal projects, you're free to test out ideas and try new tools; you're able to experiment.

So, when you're putting together an engineering resume, make sure you dedicate some time to outlining your personal projects. Consider these sorts of questions: Why did you start the project? What did you find interesting? What did you learn?

Engineering skills still matter

Just because the traditional resume appears to have fallen out of favor, it doesn't mean your skills don't matter. In fact, skill matters more than ever. For a third of hiring managers, skill assessments are the area they want to invest in, as these would allow them to judge a candidate's competencies and skills much more effectively than simply looking at a resume. As we've seen, things like personal projects are valuable because they demonstrate skills in a way a resume often can't: they not only prove you have the technical skills you say you have, they also give a good indication of how you think, how you approach solving problems, and how you deploy those skills. And when it's so easy to learn how to write lines of code (no bad thing, true), showing how you think and apply your skills is a surefire way to stand out from the crowd.

Read next:
How to assess your tech team's skills
Are technical skills overrated when hiring tech pros?
'Soft' Skills Every Data Pro Needs

Build powerful progressive web apps with Firebase

Savia Lobo
24 May 2018
19 min read
Progressive Web Apps describe a collection of technologies, design concepts, and Web APIs. They work in tandem to provide an app-like experience on the mobile web. In this tutorial, we'll work through diverse recipes and show what it takes to turn any application into a powerful, fully-optimized progressive web application using Firebase magic.

This article is an excerpt taken from the book 'Firebase Cookbook', written by Houssem Yahiaoui. In this book, you will learn how to create cross-platform mobile apps, integrate Firebase in native platforms, and monetize your mobile applications using Admob for Android and iOS.

Integrating Node-FCM in a NodeJS server

We'll see how we can fully integrate FCM (Firebase Cloud Messaging) with any NodeJS application. This process is relatively easy and straightforward, and we will see how it can be done in a matter of minutes.

How to do it...

Let's make sure that our working environment is ready for our project by installing a couple of dependencies, so that everything runs smoothly. Open your terminal (macOS/Linux) or cmd (Windows) and write down the following command:

~> npm install fcm-push --save

This command downloads and installs the fcm-push library locally; it will help with managing the process of sending web push notifications to our users. npm is part of the NodeJS ecosystem, so in order to have it on your working machine, download and install the version of NodeJS suitable for your system from the official site: https://nodejs.org/en/download/.

Now that we have successfully installed the library locally, integrating it with our application takes just one more line of code. Within your local development project, create a new file that will host the functionality and add the following:

const FCM = require('fcm-push');

Congratulations! We're practically done. In subsequent recipes, we'll see how to manage the sending process and successfully deliver web push notifications to our users.
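As a quick preview of what those later recipes will do with this import: the fcm-push module exposes a constructor that takes your FCM server key (we'll see where to find it in the Firebase console further down) and a send() method that delivers a message object. The snippet below is only a sketch; the placeholder key and variable name are assumptions, not part of the recipe.

// Hypothetical instantiation - the real server key is copied from the
// Firebase console's CLOUD MESSAGING tab, as shown in a later recipe.
const FCM = require('fcm-push');
const fcm = new FCM('<FCM_SERVER_KEY>');
// fcm.send(message) is what we'll call once we have a registration token.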
Implementing service workers

Service workers were, in fact, the missing piece in the web scene of the past. They are what give a web application its feeling of reactivity to whatever state it finds itself in, for example, an incoming notification message or an offline state (no internet connection). In this recipe, we'll see how we can integrate a service worker into our application. Service worker files are event-driven, so everything that happens inside them is event based. Since it's JavaScript, we can always hook up listeners to any event and attach our own logic to it; if we don't, the event simply behaves in its default way. You can read more about service workers, and how you can incorporate them to achieve numerous awesome features that our book won't cover, at https://developers.google.com/web/fundamentals/getting-started/primers/service-workers.

How to do it...

Service workers live in the browser, so including them within your frontend bundle is the most suitable place for them. Keeping that in mind, let's create a file called firebase-messaging-sw.js and a manifest.json file. The JavaScript file will be our service worker file and will host the major workload, while the JSON file is simply a metadata config file. After that, ensure that you also create an app.js file; this file will be the starting point for our authorization and custom UX. We will look at the importance of each file individually later on, but for now, go back to the firebase-messaging-sw.js file and write the following:

//[*] Importing Firebase Needed Dependencies
importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-messaging.js');

// [*] Firebase Configurations
var config = {
  apiKey: "",
  authDomain: "",
  databaseURL: "",
  storageBucket: "",
  messagingSenderId: ""
};

//[*] Initializing our Firebase Application.
firebase.initializeApp(config);

// [*] Initializing the Firebase Messaging Object.
const messaging = firebase.messaging();

// [*] SW Install State Event.
self.addEventListener('install', function(event) {
  console.log("Install Step, let's cache some files =D");
});

// [*] SW Activate State Event.
self.addEventListener('activate', function(event) {
  console.log('Activated!', event);
});

Within any service worker file, the install event is always fired first. Within this event, we can add custom logic for anything we want, ranging from saving a local copy of our application in the browser cache to practically anything else. Inside your metadata file, which is the manifest.json file, write the following lines:

{
  "name": "Firebase Cookbook",
  "gcm_sender_id": "103953800507"
}

How it works...

For this to work, we're doing the following:

Importing, using importScripts (think of it as the script tag with the src attribute in HTML), the Firebase app and messaging libraries.
Introducing our Firebase config object; we've already discussed where you can grab that object's content in the past chapters.
Initializing our Firebase application with our config object.
Creating a new reference from the firebase.messaging library; always remember that everything in Firebase starts with a reference.
Listening to the install and activate events and printing a friendly message to the browser debugger.

Within our manifest.json file, we're adding the following metadata: the application name (optional) and the gcm_sender_id with the given value. Keep in mind that this value will not change for any new project you have or will create in the future. The gcm_sender_id line might get deprecated in the future, so keep an eye on that.
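One step the recipe leaves implicit is how the browser actually picks these two files up. The manifest is referenced from the head of your index.html with a <link rel="manifest" href="/manifest.json"> tag, and the Firebase messaging SDK conventionally looks for a service worker file named firebase-messaging-sw.js at the root of your site. If you prefer to register it explicitly, a minimal sketch from app.js could look like the following; the paths assume both files sit at the site root.

// app.js - explicit service worker registration (optional; shown for clarity)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/firebase-messaging-sw.js')
    .then(function(registration) {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function(err) {
      console.log('Service worker registration failed:', err);
    });
}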
Implementing sending/receiving registration using Socket.IO

By now, we've integrated our FCM server and made our service worker ready to host our custom logic. As we mentioned, we're about to send web push notifications to our users to expand and enrich their experience of our application. Lately, web push notifications have come to be seen as an engagement feature that any cool application nowadays, ranging from Facebook and Twitter to numerous e-commerce sites, is making good use of. So, as a first approach, let's see how we can make it happen with Socket.IO.

In order to make the FCM server aware of any client (basically, a browser), each browser OEM has what we call a registration_id. This token is unique for every browser; it represents our clients by their browsers and needs to be sent to the FCM server. Each browser generates its own registration_id token, so if your user uses, for instance, Chrome for their first interaction with the server and Firefox for their second, the web push notification message won't be sent, and another token needs to be sent before they can be notified.

How to do it...

Go back to the NodeJS project that was created in the first recipe and download the socket.io dependency:

~> npm install express socket.io --save

Socket.io is also event-based and lets us create custom events for everything we need, plus some native ones such as the connection event. We've also installed ExpressJS, which we'll use to set up a socket.io server. Now, let's configure our socket.io server using Express, as shown in the following code:

const express = require('express');
const app = express();
app.io = require('socket.io')();

// [*] Configuring our static files.
app.use(express.static('public/'));

// [*] Configuring Routes.
app.get('/', (req, res) => {
  res.sendFile(__dirname + '/public/index.html');
});

// [*] Configuring our Socket Connection.
app.io.on('connection', socket => {
  console.log('Huston ! we have a new connection ...');
});

Let's discuss the preceding code, where we are simply doing the following:

Importing express and creating a new express app.
Attaching the socket.io package as a new sub-object on the express application. This integration means our express application now supports the usage of socket.io.
Indicating that we want to use the public folder as our static files folder, which will host our HTML/CSS/JavaScript/IMG resources.
Listening for the connection event, which will be fired once we have a new incoming client.
Printing a friendly message to our console once a new connection is up.

Let's configure our frontend. Head directly to the index.html file located in the public folder, and add the following line in the head of your page:

<script src="/socket.io/socket.io.js"></script>

The socket.io.js file will be served at application launch time, so don't worry if you don't have one locally. Then, at the bottom of our index.html file, before the closing </body> tag, add the following:

<script>
  var socket = io.connect('localhost:3000');
</script>

In the preceding script, we're connecting our frontend to our socket.io backend. Our server will run on port 3000, so this way we've ensured that our two applications are running in sync. Head to your app.js file (created in the earlier recipe; create and import it if you didn't do so already) and introduce the following:

//[*] Importing Firebase Needed Dependencies
importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-messaging.js');

// [*] Firebase Configurations
var config = {
  apiKey: "",
  authDomain: "",
  databaseURL: "",
  storageBucket: "",
  messagingSenderId: ""
};

//[*] Initializing our Firebase Application.
firebase.initializeApp(config);

// [*] Initializing the Firebase Messaging Object.
const messaging = firebase.messaging();

Everything is great; just don't forget to import the app.js file within your index.html file. We will now see how we can grab our registration_id token. As explained earlier, the registration token is unique per browser. However, before we can get it, you should know that this token is considered a privacy issue.
To ensure that these tokens do not fall into the wrong hands, the token is not available everywhere; to get it, you need to ask for the browser user's permission. Since you will be using it in your application, let's see how we can do that. The registration_id token is a security and privacy threat to the user if it is compromised, because once an attacker or hacker gets hold of the token, they can spam the user with push messages that might have malicious intent, so keeping and storing those tokens safely is a priority.

Now, within the app.js file that we created earlier, let's add the following lines of code underneath the Firebase messaging reference:

messaging.requestPermission()
  .then(() => {
    console.log("We have permission !");
    return messaging.getToken();
  })
  .then((token) => {
    console.log(token);
    socket.emit("new_user", token);
  })
  .catch(function(err) {
    console.log("Huston we have a problem !", err);
  });

We've sent our token using the awesome features of socket.io. In order to receive it, we simply listen for the same event on our NodeJS backend and grab the data it carries. Back on the server, inside our connection event, let's add the following code:

socket.on('new_user', (endpoint) => {
  console.log(endpoint);
  //TODO : Add endpoint aka. registration_token, to secure place.
});

Our socket.io logic will now look pretty much as shown in the following code block:

// [*] Configuring our Socket Connection.
app.io.on('connection', socket => {
  console.log('Huston ! we have a new connection ...');
  socket.on('new_user', (endpoint) => {
    console.log(endpoint);
    //TODO : Add endpoint aka. registration_token, to secure place.
  });
});

Now we have our bidirectional connection set up. Let's grab that registration_token and save it somewhere safe for further use.

How it works...

In the first part of the recipe, we did the following:

Importing, using importScripts (think of it as a script tag with an src attribute in HTML), the Firebase app and messaging libraries.
Introducing our Firebase config object; we've already discussed where you can grab that object's content in the previous chapters.
Initializing our Firebase app with our config object.
Creating a new reference from the firebase.messaging library; always remember that everything in Firebase starts with a reference.

Now let's discuss the section where we grabbed the registration_token value:

We're using the Firebase messaging reference that we created earlier and executing the requestPermission() function, which returns a promise.
Next, assuming that you're following along with this recipe, after launching the development server you will get a permission request from your page (Figure 1).
Back in the code, if we allow notifications, the promise resolver is executed and we return the registration_token value from messaging.getToken().
Then, we're sending that token over a socket, emitting it as an event named new_user.
Socket.io uses web sockets and, as we explained before, sockets are event based. So, in order to have a two-way connection between nodes, we need to emit, that is, send an event with a given name, and listen for the event with that same name on the other side.

Remember the socket.io event's name, because we'll incorporate it into further recipes.
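One detail the recipe leaves implicit is actually starting the HTTP server that Express and socket.io share; without it, the frontend's io.connect('localhost:3000') call has nothing to connect to. Given the app.io sub-object used above, one way to wire this up is sketched below; the port number and the placement at the bottom of the server file are assumptions, not something the recipe prescribes.

// Start the HTTP server and let socket.io attach to the same server instance.
const server = app.listen(3000, () => {
  console.log('Express + socket.io server listening on port 3000');
});

// socket.io can attach itself to an existing http.Server.
app.io.attach(server);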
Implementing sending/receiving registration using post requests

Taking a different approach from Socket.IO, let's explore the other way around, using post requests. This means we'll build a REST API that handles the exchange for us and, at the same time, takes care of saving the registration_token value in a secure place. Let's see how we can configure that.

How to do it...

First, let's start writing our REST API by creating an express post endpoint. This endpoint will carry our data to the server, but before that, let's install some dependencies using the following line:

~> npm install express body-parser --save

Let's discuss what we just did:

We're using npm to download ExpressJS locally to our development directory.
We're also downloading body-parser. This is an ExpressJS middleware that exposes all post request data underneath the body sub-object. This module is quite a common package in the NodeJS community, and you will find it pretty much everywhere.

Now, let's configure our application. Head to the app.js file and add the following code lines there:

const express = require('express');
const app = express();
const bodyParser = require('body-parser');

// [*] Configuring Body Parser.
app.use(bodyParser.json());

// [*] Configuring Routes.
app.post('/regtoken', (req, res) => {
  let reg_token = req.body.regtoken;
  console.log(reg_token);
  //TODO : Create magic while saving this token in secure place.
});

In the preceding code, we're doing the following:

Importing our dependencies, including ExpressJS and body-parser.
Registering the body-parser middleware. You can read more about body-parser and how to properly configure it to suit your needs at https://github.com/expressjs/body-parser.
Creating an express endpoint, or route. This route will host our custom logic for retrieving the registration token sent by our users.

Now, let's see how we can send the registration token from the user's side. In this step, you're free to use any HTTP client you like; however, to keep things consistent, we'll use the browser's native Fetch API. Since we've fully configured our route to host the functionality we want, let's see how we can get the registration_token value and send it to our server using a post request and the native browser HTTP client named fetch (note that the token is sent in a JSON body so the server can read it from req.body.regtoken):

messaging.requestPermission()
  .then(() => {
    console.log("We have permission !");
    return messaging.getToken();
  })
  .then((token) => {
    console.log(token);
    //[*] Sending the token in the request body.
    fetch("http://localhost:3000/regtoken", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ regtoken: token })
    }).then((resp) => {
      //[*] Handle Server Response.
    })
    .catch(function(err) {
      //[*] Handle Server Error.
    });
  })
  .catch(function(err) {
    console.log("Huston we have a problem !", err);
  });

How it works...

Let's discuss what we just wrote in the preceding code:

We're using the Firebase messaging reference that we created earlier and executing the requestPermission() function, which returns a promise.
Next, supposing that you're following along with this recipe, after launching the development server you will get an authorization request from your page.
Moving back to the code, if we allow notifications, the promise resolver is executed and we return the registration_token value from messaging.getToken().
Next, we're using the Fetch API, giving it a URL as the first parameter and an options object carrying the post method and the token payload as the second, and handling the response and error.
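Both registration recipes end with a TODO about saving the token somewhere safe. As a placeholder for that step, here is a deliberately simple in-memory sketch of what the handler might do; in a real application you would persist tokens in a database (the Firebase Realtime Database, for instance) rather than in process memory, and everything below is an illustrative assumption rather than part of the book's code.

// Naive token store - illustration only; tokens are lost when the process restarts.
const registrationTokens = new Set();

app.post('/regtoken', (req, res) => {
  const regToken = req.body.regtoken;
  if (regToken) {
    registrationTokens.add(regToken); // a Set deduplicates repeat registrations
    res.status(200).json({ saved: true });
  } else {
    res.status(400).json({ error: 'Missing regtoken in request body' });
  }
});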
Now that we have seen the different approaches for exchanging data between the client and the server, in the next recipe we will see how we can receive web push notification messages from the server.

Receiving web push notification messages

We can definitely say that things are on a good path. We managed to send our message to the server in the previous recipes; now, let's work on getting a message back from the server as a web push notification. This is a proven way to gain more leads and regain old users, and a sure way to re-engage your users; the success stories do not lie. Facebook, Twitter, and e-commerce websites are living proof of how a web push message can make a difference to your ecosystem and your application in general.

How to do it...

Let's see how we can unleash the power of push messages. The API is simple and the way to use it has never been easier. Write down the following code in the firebase-messaging-sw.js file:

// [*] Special object let us handle our Background Push Notifications
messaging.setBackgroundMessageHandler(function(payload) {
  return self.registration.showNotification(payload.data.title, {
    body: payload.data.body
  });
});

Let's explain the preceding code:

We're using the messaging object created earlier with the Firebase messaging library and calling the setBackgroundMessageHandler() function. This means we will catch all the messages we receive while the application is in the background.
We're using the service worker object, represented by self, and calling the showNotification() function, passing it some parameters. The first parameter is the title, which we're grabbing from the server; we'll see how we can get it in just a second. The second parameter is an options object carrying the body of the message.

Now that we've prepared our frontend to receive messages, let's send them from the server. We can do that using the following code:

var fcm = new FCM('<FCM_CODE>');

var message = {
  to: data.endpoint, // required: fill with a device token or topic
  notification: {
    title: data.payload.title,
    body: data.payload.body
  }
};

fcm.send(message)
  .then(function(response) {
    console.log("Successfully sent with response: ", response);
  })
  .catch(function(err) {
    console.log("Something has gone wrong!");
    console.error(err);
  });

The most important part is FCM_CODE. You can grab it from the Firebase console by going to the Firebase Project Console and clicking on the Overview tab (Figure 3). Then, go to the CLOUD MESSAGING tab and copy the Server Key shown in that section (Figure 4).

How it works...

Now, let's discuss the code that we just wrote:

The preceding code can be placed anywhere, which means that you can send push notifications from any part of your application.
We're composing our notification message by combining the registration token with the information we want to send.
We're using the fcm.send() method in order to send our notification to the intended users.

Congratulations! We're done; now go and test your awesome new functionality!
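To make the sending side a little more concrete, here is a sketch that ties the earlier pieces together: an Express endpoint that accepts a stored registration token plus a title and body, builds a message object in the shape shown above, and hands it to fcm.send(). The route name and field names are illustrative assumptions, not part of the book's recipe.

// POST /notify with a JSON body: { "token": "...", "title": "...", "body": "..." }
app.post('/notify', (req, res) => {
  const message = {
    to: req.body.token, // a registration_token collected in the earlier recipes
    notification: {
      title: req.body.title,
      body: req.body.body
    }
  };

  fcm.send(message)
    .then(response => res.status(200).json({ sent: true, response: response }))
    .catch(err => res.status(500).json({ sent: false, error: err }));
});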
Implementing custom notification messages

In the previous recipe, we saw how to send a plain notification. Now, let's add some controls, and also learn how to add pictures so that we can prettify it a bit.

How to do it...

Write the following code, adding it to the messaging.setBackgroundMessageHandler() function we used earlier. The end result will look something like this:

// [*] Special object let us handle our Background Push Notifications
messaging.setBackgroundMessageHandler(function(payload) {
  const notificationOptions = {
    body: payload.data.msg,
    icon: "images/icon.jpg",
    actions: [
      {
        action: 'like',
        title: 'Like',
        image: '<link-to-like-img>'
      },
      {
        action: 'dislike',
        title: 'Dislike',
        image: '<link-to-like-img>'
      }
    ]
  };

  self.addEventListener('notificationclick', function(event) {
    var messageId = event.notification.data;
    event.notification.close();

    if (event.action === 'like') {
      console.log("Going to like something !");
    } else if (event.action === 'dislike') {
      console.log("Going to dislike something !");
    } else {
      console.log("wh00t !");
    }
  }, false);

  return self.registration.showNotification(payload.data.title, notificationOptions);
});

How it works...

Let's discuss what we've done so far:

We added the notificationOptions object, which hosts some of the required metadata, such as the body of the message and the icon. We're also adding actions, which means we'll add custom buttons to our notification message; each action has a title and an image, and the most important part is the action name.
Next, we listened for notificationclick, which will be fired each time one of the actions is selected. Remember the action field we added earlier: it is the differentiation point between all the actions we might add.
Then, we returned the notification and showed it using the showNotification() function.

We saw how to build powerful progressive applications using Firebase. If you've enjoyed reading this tutorial, do check out 'Firebase Cookbook' for creating serverless applications with Firebase Cloud Functions or turning your traditional applications into progressive apps with service workers.

What is a progressive web app?
Windows launches progressive web apps… that don't yet work on mobile
Hybrid Mobile apps: What you need to know