In this article by Frank Appel, author of the book Testing with JUnit, we will learn that special care has to be taken when testing a component's functionality under exception-raising conditions. You'll also learn how to use the various capture and verification possibilities, and we'll discuss their pros and cons. As robust software design is one of the declared goals of the test-first approach, we're going to see how tests intertwine with the fail-fast strategy on selected boundary conditions. Finally, we're going to conclude with an in-depth explanation of working with collaborators under exceptional flow and see how stubbing of exceptional behavior can be achieved. The topics covered in this article are testing patterns and treating collaborators.
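The capture-and-verify idea mentioned above can be sketched in plain Java. The `Range` class and the hand-rolled capture helper below are hypothetical stand-ins, not examples from the book; they only illustrate the general technique of catching an expected exception and verifying it afterwards:

```java
// Hypothetical component under test: rejects an invalid construction argument.
class Range {
    private final int lowerBound;
    private final int upperBound;

    Range(int lowerBound, int upperBound) {
        if (lowerBound > upperBound) {
            throw new IllegalArgumentException("lowerBound > upperBound");
        }
        this.lowerBound = lowerBound;
        this.upperBound = upperBound;
    }
}

public class ExceptionCapture {
    // Classic try-catch capture: run the failing call, keep the thrown
    // exception for later verification, and fail if nothing was thrown.
    static IllegalArgumentException captureInvalidRange() {
        try {
            new Range(5, 1);
        } catch (IllegalArgumentException expected) {
            return expected;  // captured for message/state verification
        }
        throw new AssertionError("expected IllegalArgumentException");
    }

    public static void main(String[] args) {
        IllegalArgumentException captured = captureInvalidRange();
        System.out.println(captured.getMessage());
    }
}
```

JUnit itself offers less verbose variants of the same capture service, such as the `ExpectedException` rule in JUnit 4 or `assertThrows` in JUnit 5; the hand-rolled form above merely makes the mechanism explicit.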
In the following article by Austin Scott, the author of Learning RSLogix 5000 Programming, you will be introduced to the high-performance, asynchronous nature of the Logix family of controllers and the resulting requirement to buffer the I/O module data it drives. You will learn various techniques for buffering I/O module values in RSLogix 5000 and Studio 5000 Logix Designer. You will also learn about the IEC languages that do not require input or output module buffering techniques to be applied to them. In order to understand the need for buffering, let's start by exploring the evolution of the modern line of Rockwell Automation controllers.
In this article by Marco Schwartz, author of the book Intel Galileo Networking Cookbook, we will cover the recipe Reading pins via a web server.
In this article by Viktor Farcic and Alex Garcia, the authors of the book Test-Driven Java Development, we will walk through TDD: the simple procedure of writing tests before the actual implementation. It is an inversion of the traditional approach, where testing is performed after the code is written.
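As a minimal illustration of that inversion (a hypothetical leap-year kata, not an example taken from the book), the checks below were written first, and the production code was then grown just far enough to satisfy them:

```java
public class LeapYear {
    // Production code written only after the checks in main() existed,
    // and only as far as those checks demanded.
    static boolean isLeap(int year) {
        if (year % 400 == 0) return true;
        if (year % 100 == 0) return false;
        return year % 4 == 0;
    }

    public static void main(String[] args) {
        // The "tests" came first (red); the implementation followed (green).
        check(isLeap(2000), "2000 is a leap year");
        check(!isLeap(1900), "1900 is not a leap year");
        check(isLeap(2012), "2012 is a leap year");
        check(!isLeap(2013), "2013 is not a leap year");
        System.out.println("all checks green");
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError(description);
    }
}
```

In a real project, the checks would live in a test class driven by JUnit rather than in `main()`; the point here is only the order of work: failing check first, minimal implementation second.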
In this article by Henry Garner, author of the book Clojure for Data Science, we'll be working with a relatively modest dataset of only 100,000 records. This isn't big data (at 100 MB, it will fit comfortably in the memory of one machine), but it's large enough to demonstrate the common techniques of large-scale data processing. Using Hadoop (the popular framework for distributed computation) as its case study, this article focuses on how to scale algorithms to very large volumes of data through parallelism. Before we get to Hadoop and distributed data processing, though, we'll see how some of the same principles that enable Hadoop to be effective at very large scale can also be applied to data processing on a single machine, by taking advantage of the parallel capacity available in all modern computers.
In this article by Dmitry Anoshin, author of the book SAP Lumira Essentials, Dmitry observes that we live in a century of information technology. There are a lot of electronic devices around us that generate lots of data. For example, you can surf the Internet, visit a couple of news portals, order new Nike Air Max shoes from a web store, write a couple of messages to your friends, and chat on Facebook. Your every action produces data. Multiply that by the number of people who have access to the Internet, or who simply use a cell phone, and we get really BIG DATA. Of course, you have a question: how big is it? Nowadays, it starts from terabytes or even petabytes. Volume is not the only issue; we also struggle with the variety of data. As a result, it is not enough to analyze only structured data; we should also dive deep into unstructured data, such as the data generated by various machines.
In this article by Jorge R. Castro, author of Building a Home Security System with Arduino, you will learn to read a technical specification sheet, understand the ports on the Uno and how they work, and identify the necessary elements for a proof of concept. Finally, we will extend the capabilities of our script in Python and rely on NFC technology. We will cover the following points: ProtoBoards and wiring; signals (digital and analog); ports; and datasheets.
In this article by John Horton, the author of Android Game Programming by Example, we will look at developing the home screen UI for our game.
In this article by Akhil Arora and Shrey Mehrotra, authors of the book Learning YARN, we will discuss how Hadoop was developed as a solution to handle big data in the most cost-effective and simplest way possible. Hadoop consists of a storage layer, that is, the Hadoop Distributed File System (HDFS), and the MapReduce framework for managing resource utilization and job execution on a cluster. With the ability to deliver high-performance parallel data analysis and to work with commodity hardware, Hadoop is used for big data analysis and batch processing of historical data through MapReduce programming.
Miroslav Vitula, the author of the book Learning zANTI2 for Android Pentesting, penned this article on Connecting to Open Ports, focusing on cracking passwords and setting up a remote desktop connection. Let's delve into the topics.