
Learning JavaScript Data Structures: Arrays

Packt
07 Jun 2016
18 min read
In this article by Loiane Groner, author of the book Learning JavaScript Data Structures and Algorithms, Second Edition, we will learn about arrays. An array is the simplest memory data structure. For this reason, all programming languages have a built-in array datatype. JavaScript also supports arrays natively, even though its first version was released without array support. In this article, we will dive into the array data structure and its capabilities. An array stores values sequentially, all of the same datatype. Although JavaScript allows us to create arrays with values from different datatypes, we will follow best practices and assume that we cannot do this (most languages do not have this capability). Why should we use arrays? Let's consider that we need to store the average temperature of each month of the year for the city we live in. We could use something similar to the following to store this information: var averageTempJan = 31.9; var averageTempFeb = 35.3; var averageTempMar = 42.4; var averageTempApr = 52; var averageTempMay = 60.8; However, this is not the best approach. If we store the temperature for only 1 year, we could manage 12 variables. However, what if we need to store the average temperature for more than 1 year? Fortunately, this is why arrays were created, and we can easily represent the same information mentioned earlier as follows: averageTemp[0] = 31.9; averageTemp[1] = 35.3; averageTemp[2] = 42.4; averageTemp[3] = 52; averageTemp[4] = 60.8; We can also represent the averageTemp array graphically: Creating and initializing arrays Declaring, creating, and initializing an array in JavaScript is as simple as the following: var daysOfWeek = new Array(); //{1} var daysOfWeek = new Array(7); //{2} var daysOfWeek = new Array('Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'); //{3} We can simply declare and instantiate a new array using the keyword new (line {1}). Also, using the keyword new, we can create a new array specifying the length of the array (line {2}). A third option would be passing the array elements directly to its constructor (line {3}). However, using the new keyword is not best practice. If we want to create an array in JavaScript, we can simply assign empty brackets ([]), as in the following example: var daysOfWeek = []; We can also initialize the array with some elements, as follows: var daysOfWeek = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']; If we want to know how many elements are in the array (its size), we can use the length property. The following code will give an output of 7: console.log(daysOfWeek.length); Accessing elements and iterating an array To access a particular position of the array, we can also use brackets, passing the index of the position we would like to access. For example, let's say we want to output all the elements from the daysOfWeek array. To do so, we need to loop through the array and print the elements, as follows: for (var i=0; i<daysOfWeek.length; i++){ console.log(daysOfWeek[i]); } Let's take a look at another example. Let's say that we want to find out the first 20 numbers of the Fibonacci sequence.
The first two numbers of the Fibonacci sequence are 1 and 1, and each subsequent number is the sum of the previous two numbers: var fibonacci = []; //{1} fibonacci[1] = 1; //{2} fibonacci[2] = 1; //{3} for(var i = 3; i < 20; i++){ fibonacci[i] = fibonacci[i-1] + fibonacci[i-2]; //{4} } for(var i = 1; i<fibonacci.length; i++){ //{5} console.log(fibonacci[i]); //{6} } So, in line {1}, we declared and created an array. In lines {2} and {3}, we assigned the first two numbers of the Fibonacci sequence to the second and third positions of the array (in JavaScript, the first position of the array is always referenced by 0, and as there is no 0 in the Fibonacci sequence, we will skip it). Then, all we have to do is create the third to the twentieth number of the sequence (as we know the first two numbers already). To do so, we can use a loop and assign the sum of the previous two positions of the array to the current position (line {4}, starting from index 3 of the array up to the 19th index). Then, to take a look at the output (line {6}), we just need to loop the array from its first position to its length (line {5}). We can use console.log to output each index of the array (lines {5} and {6}), or we can also use console.log(fibonacci) to output the array itself. Most browsers have a nice array representation in console.log. If you would like to generate more than 20 numbers of the Fibonacci sequence, just change the number 20 to whatever number you like. Adding elements Adding and removing elements from an array is not that difficult; however, it can be tricky. For the examples we will use in this section, let's consider that we have the following numbers array initialized with numbers from 0 to 9: var numbers = [0,1,2,3,4,5,6,7,8,9]; If we want to add a new element to this array (for example, the number 10), all we have to do is reference the last free position of the array and assign a value to it: numbers[numbers.length] = 10; In JavaScript, an array is a mutable object. We can easily add new elements to it. The object will grow dynamically as we add new elements to it. In many other languages, such as C and Java, we need to determine the size of the array, and if we need to add more elements to the array, we need to create a completely new array; we cannot simply add new elements to it as we need them. Using the push method However, there is also a method called push that allows us to add new elements to the end of the array. We can add as many elements as we want as arguments to the push method: numbers.push(11); numbers.push(12, 13); The output of the numbers array will be the numbers from 0 to 13. Inserting an element in the first position Now, let's say we need to add a new element to the array and would like to insert it in the first position, not the last one. To do so, first, we need to free the first position by shifting all the elements to the right. We can loop through all the elements of the array, starting from the last position + 1 (its length), shifting each previous element to the new position, and finally assign the new value we want (-1) to the first position.
Run the following code for this: for (var i=numbers.length; i>=0; i--){ numbers[i] = numbers[i-1]; } numbers[0] = -1; We can represent this action with the following diagram: Using the unshift method The JavaScript array class also has a method called unshift, which inserts the values passed in the method's arguments at the start of the array: numbers.unshift(-2); numbers.unshift(-4, -3); So, using the unshift method, we can add the value -2 and then the values -4 and -3 to the beginning of the numbers array. The output of this array will be the numbers from -4 to 13. Removing elements So far, you have learned how to add values to the end and to the beginning of an array. Let's take a look at how we can remove a value from an array. To remove a value from the end of an array, we can use the pop method: numbers.pop(); The push and pop methods allow an array to emulate a basic stack data structure. The output of our array will be the numbers from -4 to 12. The length of our array is 17. Removing an element from the first position To remove a value from the beginning of the array, we can use the following code: for (var i=0; i<numbers.length; i++){ numbers[i] = numbers[i+1]; } We can represent the previous code using the following diagram: We shifted all the elements one position to the left. However, the length of the array is still the same (17), meaning we still have an extra element in our array (with an undefined value). The last time the code inside the loop was executed, i+1 was a reference to a position that does not exist. In some languages, such as Java, C/C++, or C#, the code would throw an exception, and we would have to end our loop at numbers.length - 1. As you can see, we have only overwritten the array's original values, and we did not really remove the value (as the length of the array is still the same and we have this extra undefined element). Using the shift method To actually remove an element from the beginning of the array, we can use the shift method, as follows: numbers.shift(); So, if we consider that our array has the values -4 to 12 and a length of 17, after we execute the previous code, the array will contain the values -3 to 12 and have a length of 16. The shift and unshift methods allow an array to emulate a basic queue data structure. Adding and removing elements from a specific position So far, you have learned how to add elements at the end and at the beginning of an array, and you have also learned how to remove elements from the beginning and end of an array. What if we also want to add or remove elements from any particular position of our array? How can we do this? We can use the splice method to remove an element from an array by simply specifying the position/index that we would like to delete from and how many elements we would like to remove, as follows: numbers.splice(5,3); This code will remove three elements, starting from index 5 of our array. This means that numbers[5], numbers[6], and numbers[7] will be removed from the numbers array. The content of our array will be -3, -2, -1, 0, 1, 5, 6, 7, 8, 9, 10, 11, and 12 (as the numbers 2, 3, and 4 have been removed). As JavaScript arrays are objects, we can also use the delete operator to remove an element from the array, for example, delete numbers[0]. However, position 0 of the array will then have the value undefined, meaning that it would be the same as doing numbers[0] = undefined. For this reason, we should always use the splice, pop, or shift methods to remove elements.
Now, let's say we want to insert numbers 2 to 4 back into the array, starting from the position 5. We can again use the splice method to do this: numbers.splice(5,0,2,3,4); The first argument of the method is the index we want to remove elements from or insert elements into. The second argument is the number of elements we want to remove (in this case, we do not want to remove any, so we will pass the value 0 (zero)). And the third argument (onwards) are the values we would like to insert into the array (the elements 2, 3, and 4). The output will be values from -3 to 12 again. Finally, let's execute the following code: numbers.splice(5,3,2,3,4); The output will be values from -3 to 12. This is because we are removing three elements, starting from the index 5, and we are also adding the elements 2, 3, and 4, starting at index 5. Two-dimensional and multidimensional arrays At the beginning of this article, we used the temperature measurement example. We will now use this example one more time. Let's consider that we need to measure the temperature hourly for a few days. Now that we already know we can use an array to store the temperatures, we can easily write the following code to store the temperatures over two days: var averageTempDay1 = [72,75,79,79,81,81]; var averageTempDay2 = [81,79,75,75,73,72]; However, this is not the best approach; we can write better code! We can use a matrix (two-dimensional array) to store this information, in which each row will represent the day, and each column will represent an hourly measurement of temperature, as follows: var averageTemp = []; averageTemp[0] = [72,75,79,79,81,81]; averageTemp[1] = [81,79,75,75,73,72]; JavaScript only supports one-dimensional arrays; it does not support matrices. However, we can implement matrices or any multidimensional array using an array of arrays, as in the previous code. The same code can also be written as follows: //day 1 averageTemp[0] = []; averageTemp[0][0] = 72; averageTemp[0][1] = 75; averageTemp[0][2] = 79; averageTemp[0][3] = 79; averageTemp[0][4] = 81; averageTemp[0][5] = 81; //day 2 averageTemp[1] = []; averageTemp[1][0] = 81; averageTemp[1][1] = 79; averageTemp[1][2] = 75; averageTemp[1][3] = 75; averageTemp[1][4] = 73; averageTemp[1][5] = 72; In the previous code, we specified the value of each day and hour separately. We can also represent this example in a diagram similar to the following: Each row represents a day, and each column represents an hour of the day (temperature). Iterating the elements of two-dimensional arrays If we want to take a look at the output of the matrix, we can create a generic function to log its output: function printMatrix(myMatrix) { for (var i=0; i<myMatrix.length; i++){ for (var j=0; j<myMatrix[i].length; j++){ console.log(myMatrix[i][j]); } } } We need to loop through all the rows and columns. To do this, we need to use a nested for loop in which the variable i represents rows, and j represents the columns. We can call the following code to take a look at the output of the averageTemp matrix: printMatrix(averageTemp); Multi-dimensional arrays We can also work with multidimensional arrays in JavaScript. For example, let's create a 3 x 3 matrix. 
Each cell contains the sum i (row) + j (column) + z (depth) of the matrix, as follows: var matrix3x3x3 = []; for (var i=0; i<3; i++){ matrix3x3x3[i] = []; for (var j=0; j<3; j++){ matrix3x3x3[i][j] = []; for (var z=0; z<3; z++){ matrix3x3x3[i][j][z] = i+j+z; } } } It does not matter how many dimensions we have in the data structure; we need to loop through each dimension to access a cell. We can represent a 3 x 3 x 3 matrix with a cube diagram, as follows: To output the content of this matrix, we can use the following code: for (var i=0; i<matrix3x3x3.length; i++){ for (var j=0; j<matrix3x3x3[i].length; j++){ for (var z=0; z<matrix3x3x3[i][j].length; z++){ console.log(matrix3x3x3[i][j][z]); } } } If we had a 3 x 3 x 3 x 3 matrix, we would have four nested for statements in our code, and so on. References for JavaScript array methods Arrays in JavaScript are modified objects, meaning that every array we create has a few methods available to be used. JavaScript arrays are very interesting because they are very powerful and have more capabilities available than primitive arrays in other languages. This means that we do not need to write basic capabilities ourselves, such as adding and removing elements in/from the middle of the data structure. The following is a list of the core methods available in an array object; we have covered some of them already:
concat: This joins multiple arrays and returns a copy of the joined arrays
every: This iterates every element of the array, verifying a desired condition (function) until false is returned
filter: This creates an array with each element that evaluates to true in the function provided
forEach: This executes a specific function on each element of the array
join: This joins all the array elements into a string
indexOf: This searches the array for specific elements and returns its position
lastIndexOf: This returns the position of the last item in the array that matches the search criteria
map: This creates a new array from a function that contains the criteria/condition and returns the elements of the array that match the criteria
reverse: This reverses the array so that the last items become the first and vice versa
slice: This returns a new array from the specified index
some: This iterates every element of the array, verifying a desired condition (function) until true is returned
sort: This sorts the array alphabetically or by the supplied function
toString: This returns the array as a string
valueOf: Similar to the toString method, this returns the array as a string
We have already covered the push, pop, shift, unshift, and splice methods. Let's take a look at these new ones. Joining multiple arrays Consider a scenario where you have different arrays and you need to join all of them into a single array. We could iterate each array and add each element to the final array. Fortunately, JavaScript already has a method that can do this for us, named the concat method, which looks as follows: var zero = 0; var positiveNumbers = [1,2,3]; var negativeNumbers = [-3,-2,-1]; var numbers = negativeNumbers.concat(zero, positiveNumbers); We can pass as many arrays and objects/elements to this method as we desire. The arrays will be concatenated to the specified array in the order that the arguments are passed to the method. In this example, zero will be concatenated to negativeNumbers, and then positiveNumbers will be concatenated to the resulting array. The output of the numbers array will be the values -3, -2, -1, 0, 1, 2, and 3.
Iterator functions Sometimes, we need to iterate the elements of an array. You learned that we can use a loop construct to do this, such as the for statement, as we saw in some previous examples. JavaScript also has some built-in iterator methods that we can use with arrays. For the examples of this section, we will need an array and also a function. We will use an array with values from 1 to 15 and also a function that returns true if the number is a multiple of 2 (even) and false otherwise. Run the following code: var isEven = function (x) { // returns true if x is a multiple of 2. console.log(x); return (x % 2 == 0) ? true : false; }; var numbers = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]; return (x % 2 == 0) ? true : false can also be represented as return (x % 2 == 0). Iterating using the every method The first method we will take a look at is the every method. The every method iterates each element of the array until the return of the function is false, as follows: numbers.every(isEven); In this case, our first element of the numbers array is the number 1. 1 is not a multiple of 2 (it is an odd number), so the isEven function will return false, and this will be the only time the function will be executed. Iterating using the some method Next, we have the some method. It has the same behavior as the every method; however, the some method iterates each element of the array until the return of the function is true: numbers.some(isEven); In our case, the first even number of our numbers array is 2 (the second element). The first element that will be iterated is the number 1; it will return false. Then, the second element that will be iterated is the number 2, which will return true, and the iteration will stop. Iterating using forEach If we need the array to be completely iterated no matter what, we can use the forEach function. It has the same result as using a for loop with the function's code inside it, as follows: numbers.forEach(function(x){ console.log((x % 2 == 0)); }); Using map and filter JavaScript also has two other iterator methods that return a new array with a result. The first one is the map method, which is as follows: var myMap = numbers.map(isEven); The myMap array will have the following values: [false, true, false, true, false, true, false, true, false, true, false, true, false, true, false]. It stores the result of the isEven function that was passed to the map method. This way, we can easily know whether a number is even or not. For example, myMap[0] returns false because 1 is not even, and myMap[1] returns true because 2 is even. We also have the filter method. It returns a new array with the elements that the function returned true, as follows: var evenNumbers = numbers.filter(isEven); In our case, the evenNumbers array will contain the elements that are multiples of 2: [2, 4, 6, 8, 10, 12, 14]. Using the reduce method Finally, we have the reduce method. The reduce method also receives a function with the following parameters: previousValue, currentValue, index, and array. We can use this function to return a value that will be added to an accumulator, which will be returned after the reduce method stops being executed. It can be very useful if we want to sum up all the values in an array. Here's an example: numbers.reduce(function(previous, current, index){ return previous + current; }); The output will be 120. The JavaScript Array class also has two other important methods: map and reduce. 
The method names are self-explanatory: the map method will map values when given a function, and the reduce method will reduce the array to a single value based on the given function. These three methods (map, filter, and reduce) are the basis of functional programming in JavaScript. Summary In this article, we covered the most-used data structure: arrays. You learned how to declare, initialize, and assign values as well as add and remove elements. You also learned about two-dimensional and multidimensional arrays as well as the main methods of an array, which will be very useful when we start creating our own algorithms.


Practical Big Data Exploration with Spark and Python

Anant Asthana
06 Jun 2016
6 min read
The reader of this post should be familiar with basic concepts of Spark, such as the shell and RDDs. Data sizes have increased, but our exploration tools and techniques have not evolved as fast. Traditional Hadoop MapReduce jobs are cumbersome and time consuming to develop. Also, Pig isn't quite as fully featured or easy to work with. Exploration can mean parsing/analyzing raw text documents, analyzing log files, processing tabular data in various formats, and exploring data that may or may not be correctly formatted. This is where a tool like Spark excels. It provides an interactive shell for quick processing, prototyping, exploring, and slicing and dicing data. Spark works with R, Scala, and Python. In conjunction with Jupyter notebooks, we get a clean web interface to write Python, R, or Scala code backed by a Spark cluster. Jupyter notebook is also a great tool for presenting our findings, since we can do inline visualizations and easily share them as a PDF on GitHub or through a web viewer. The power of this setup is that we make Spark do the heavy lifting while still having the flexibility to test code on a small subset of data via the interactive notebooks. Another powerful capability of Spark is its Data Frames API. After we have cleaned our data (dealt with badly formatted rows that can't be loaded correctly), we can load it as a Data Frame. Once the data is loaded as a Data Frame, we can use Spark SQL to explore it. Since notebooks can be shared, this is also a great way to let the developers do the work of cleaning the data and loading it as a Data Frame. Analysts, data scientists, and the like can then use this data for their tasks. Data Frames can also be exported as Hive tables, which are commonly used in Hadoop-based warehouses. Examples: For this section, we will be using examples that I have uploaded on GitHub. These examples can be found here. In addition to the examples, a Docker container for running them has also been provided. The container runs Spark in a pseudo-distributed mode and has Jupyter notebook configured to run Python/PySpark. The basics: To set this up in your environment, you need a running Spark cluster with Jupyter notebook installed. Jupyter notebook, by default, only has the Python kernel configured. You can download additional kernels for Jupyter notebook to run R and Scala. To run Jupyter notebook with PySpark, use the following command on your cluster: IPYTHON_OPTS="notebook --pylab inline --notebook-dir=<directory to store notebooks>" MASTER=local[6] ./bin/pyspark When you start Jupyter notebook in the way we mentioned earlier, it initializes a few critical variables. One of them is the Spark context (sc), which is used to interact with all Spark-related tasks. The other is sqlContext, which is the Spark SQL context; this is used to interact with Spark SQL (create Data Frames, run queries, and so on). Log Analysis In this example, we use a log file from an Apache web server. The code for this example can be found here. We load the log file in question using: log_file = sc.textFile("../data/log_file.txt") Spark can load files from HDFS, the local filesystem, and S3 natively. Libraries for other storage formats can be found freely on the Internet, or you could write your own (a blog post for another time). The previous command loads the log file.
We then use Python's native shlex library to split each line into different fields and use Spark's map transformation to load them as Rows. An RDD consisting of Rows can easily be registered as a DataFrame. How we arrived at this solution is where data exploration comes in. We use Spark's takeSample method to sample the file and get five rows: log_file.takeSample(True, 5) These sample rows are helpful in determining how to parse and load the file. Once we have written our code to load the file, we can apply it to the dataset using map to create a new RDD consisting of Rows, and test the code on a subset of the data in a similar manner using the take or takeSample methods. The take method sequentially reads rows from the file, so although it is faster, it may not be a good representation of the dataset. The takeSample method, on the other hand, randomly picks sample rows from the file, which gives a better representation. To create the new RDD and register it as a DataFrame, we use the following code: schema_DF = splits.map(create_schema).toDF() Once we have created the DataFrame and tested it using take/takeSample to make sure that our loading code is working, we can register it as a table using the following: sqlCtx.registerDataFrameAsTable(schema_DF, 'logs') Once it is registered as a table, we can run SQL queries on the log file: sample = sqlCtx.sql('SELECT * FROM logs LIMIT 10').collect() Note that the collect() method collects the result into the driver's memory, so this may not be feasible for large datasets; use take/takeSample instead to sample data if your dataset is large. The beauty of using Spark with Jupyter is that all this exploration work takes only a few lines of code. It can be written interactively with all the trial and error we needed, the processed data can be easily shared, and running interactive queries on this data is easy. Last but not least, this can easily scale to massive (GB, TB) datasets. k-means on the Iris dataset In this example, we use data from the Iris dataset, which contains measurements of sepal and petal length and width. This is a popular open source dataset used to showcase classification algorithms. In this case, we use the k-means algorithm from MLlib, Spark's machine learning library. The code and the output can be found here. In this example, we are not going to get into too much detail since some of the concepts are outside the scope of this blog post. This example showcases how we load the Iris dataset and create a DataFrame with it. We then train a k-means classifier on this dataset, and then we visualize our classification results. The power of this is that we did a somewhat complex task of parsing a dataset, creating a DataFrame, training a machine learning classifier, and visualizing the data in an interactive and scalable manner. The repository contains several more examples. Feel free to reach out to me if you have any questions. If you would like to see more posts with practical examples, please let us know. About the Author Anant Asthana is a data scientist and principal architect at Pythian, and he can be found on Github at anantasty.
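Pulling the log-analysis steps above together, here is a minimal PySpark sketch of the whole flow. The Apache-style field positions inside create_schema and the file path are assumptions for illustration, and in the notebook session described earlier sc and sqlContext already exist, so the explicit context creation is only needed when running this as a standalone script.

```python
# Hedged sketch of the exploration flow described above: sample, parse with
# shlex, build Rows, register a DataFrame, and query it with Spark SQL.
# The field positions assume an Apache-style access log; adjust to your data.
import shlex
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext(appName="log-exploration")   # provided as `sc` in the notebook
sqlCtx = SQLContext(sc)                        # provided as `sqlContext` in the notebook

log_file = sc.textFile("../data/log_file.txt")
print(log_file.takeSample(True, 5))            # eyeball a few raw rows first

def create_schema(line):
    # shlex keeps the quoted request field of an Apache access log together
    fields = shlex.split(line)
    return Row(host=fields[0], request=fields[5], status=fields[6])

splits = log_file                              # name taken from the article
schema_DF = splits.map(create_schema).toDF()
print(schema_DF.take(3))                       # verify the loading code on a subset

sqlCtx.registerDataFrameAsTable(schema_DF, 'logs')
sample = sqlCtx.sql('SELECT * FROM logs LIMIT 10').collect()
print(sample)
```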


Logging and Monitoring

Packt
02 Jun 2016
17 min read
In this article by Hui-Chuan Chloe Lee, Hideto Saito, and Ke-Jou Carol Hsu, the authors of the book Kubernetes Cookbook, we will cover the recipe Monitoring master and node. Monitoring master and node Here comes a new level of view for your Kubernetes cluster. In this recipe, we are going to talk about monitoring. Through a monitoring tool, users can see the resource consumption not only of the workers (the nodes) but also of the pods. It will help us achieve better resource utilization. Getting ready Before we set up our monitoring cluster in the Kubernetes system, there are two main prerequisites: One is to update to the latest version of the binary files, which makes sure your cluster has stable and capable functionality. The other is to set up the DNS server. A Kubernetes DNS server can reduce some steps and dependencies when installing cluster-like pods. Here, it is easier to deploy a monitoring system in Kubernetes with a DNS server. How does the DNS server assist in large-system deployment in Kubernetes? The DNS server can resolve the name of a Kubernetes service for every container. Therefore, while running a pod, we don't have to set a specific service IP for connecting to other pods; containers in a pod just need to know the service's name. The node daemon kubelet assigns the DNS server to containers by modifying the file /etc/resolv.conf. Try to check the file or use the command nslookup for verification after you have installed the DNS server: # kubectl exec <POD_NAME> [-c <CONTAINER_NAME>] -- cat /etc/resolv.conf // Check where the service "kubernetes" served # kubectl exec <POD_NAME> [-c <CONTAINER_NAME>] -- nslookup kubernetes Update Kubernetes to the latest version: 1.2.1 Updating the version of a running Kubernetes system is not such a troublesome task. You can simply follow these steps. The procedure is similar for both master and node: Since we are going to upgrade every Kubernetes binary file, stop all of the Kubernetes services before you upgrade. For example, service <KUBERNETES_DAEMON> stop. Download the latest tarball file, version 1.2.1: # cd /tmp && wget https://storage.googleapis.com/kubernetes-release/release/v1.2.1/kubernetes.tar.gz Decompress the file in a permanent directory. We are going to use the add-on templates provided in the official source files. These templates can help to create both the DNS server and the monitoring system: // Open the tarball under /opt # tar -xvf /tmp/kubernetes.tar.gz -C /opt/ // Go further decompression for binary files # cd /opt && tar -xvf /opt/kubernetes/server/kubernetes-server-linux-amd64.tar.gz Copy the new files and overwrite the old ones: # cd /opt/kubernetes/server/bin/ // For master, you should copy following files and confirm to overwrite # cp kubectl hyperkube kube-apiserver kube-controller-manager kube-scheduler kube-proxy /usr/local/bin // For nodes, copy the below files # cp kubelet kube-proxy /usr/local/bin Finally, you can now start the system services. It is good to verify the version through the command line: # kubectl version Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.1", GitCommit:"50809107cd47a1f62da362bccefdd9e6f7076145", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.1", GitCommit:"50809107cd47a1f62da362bccefdd9e6f7076145", GitTreeState:"clean"} As a reminder, you should update both the master and the nodes at the same time.
Setup DNS server As mentioned, we will use the official templates to build up the DNS server in our Kubernetes system. There are only two steps: first, modify the templates and create the resources; then, restart the kubelet daemon with the DNS information. Start the server by template The add-on files of Kubernetes are located at <KUBERNETES_HOME>/cluster/addons/. According to the last step, we can access the add-on files for DNS at /opt/kubernetes/cluster/addons/dns, and two template files are going to be modified and executed. Follow these steps: Copy the files from the .yaml.in format to YAML files; we will edit the copies later: # cp skydns-rc.yaml.in skydns-rc.yaml # cp skydns-svc.yaml.in skydns-svc.yaml In these two templates, replace each pillar input variable, which is enclosed in double curly braces, with the corresponding value:
{{ pillar['dns_domain'] }}: the domain of this cluster (example: k8s.local)
{{ pillar['dns_replicas'] }}: the number of replicas for this replication controller (example: 1)
{{ pillar['dns_server'] }}: the private IP of the DNS server, which must also be in the CIDR of the cluster (example: 192.168.0.2)
As you know, the default service kubernetes will occupy the first IP in the CIDR. That's why we use the IP 192.168.0.2 for our DNS server: # kubectl get svc NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE kubernetes   192.168.0.1   <none>        443/TCP   4d In the template for the replication controller, the file named skydns-rc.yaml, specify the master URL in the container kube2sky: # cat skydns-rc.yaml (Ignore above lines) : - name: kube2sky   image: gcr.io/google_containers/kube2sky:1.14   resources:     limits:       cpu: 100m       memory: 200Mi     requests:       cpu: 100m       memory: 50Mi   livenessProbe:     httpGet:       path: /healthz       port: 8080       scheme: HTTP     initialDelaySeconds: 60     timeoutSeconds: 5     successThreshold: 1     failureThreshold: 5   readinessProbe:     httpGet:       path: /readiness       port: 8081       scheme: HTTP     initialDelaySeconds: 30     timeoutSeconds: 5   args:   # command = "/kube2sky"   - --domain=k8s.local   - --kube-master-url=<MASTER_ENDPOINT_URL>:<EXPOSED_PORT> : (Ignore below lines) After you finish the preceding modifications, you can just create the resources using the subcommand create: # kubectl create -f skydns-svc.yaml service "kube-dns" created # kubectl create -f skydns-rc.yaml replicationcontroller "kube-dns-v11" created Enable Kubernetes DNS in kubelet Next, we have to access each node and add the DNS information to the daemon kubelet. The flags we use for cluster DNS are --cluster-dns, which assigns the IP of the DNS server, and --cluster-domain, which defines the domain of the Kubernetes services: // For init service daemon # cat /etc/init.d/kubernetes-node (Ignore above lines) : # Start daemon. echo $"Starting kubelet: "         daemon $kubelet_prog                 --api_servers=<MASTER_ENDPOINT_URL>:<EXPOSED_PORT>                 --v=2                 --cluster-dns=192.168.0.2                 --cluster-domain=k8s.local                 --address=0.0.0.0                 --enable_server                 --hostname_override=${hostname}                 > ${logfile}-kubelet.log 2>&1 & : (Ignore below lines) // Or, for systemd service # cat /etc/kubernetes/kubelet (Ignore above lines) : # Add your own! KUBELET_ARGS="--cluster-dns=192.168.0.2 --cluster-domain=k8s.local" Now, you can restart either the kubernetes-node service or just kubelet!
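Before moving on, you can quickly confirm from inside a pod that service names now resolve. The short Python sketch below is only an illustrative alternative to the nslookup check shown earlier; the k8s.local domain and the name.namespace.svc.domain form are assumptions based on the --cluster-domain configured above.

```python
# Hedged sketch: verify in-cluster DNS from inside a pod. The FQDN form
# <service>.<namespace>.svc.<cluster-domain> assumes the k8s.local domain
# set via --cluster-domain above.
import socket

for name in ("kubernetes", "kubernetes.default.svc.k8s.local"):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as err:
        print(name, "-> resolution failed:", err)
```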
And you can enjoy the cluster with the DNS server. How to do it… In this section, we will work on installing a monitoring system and introducing its dashboard. This monitoring system is based on Heapster (https://github.com/kubernetes/heapster), a resource usage collecting and analyzing tool. Heapster communicates with kubelet to get the resource usage of both machine and container. Along with Heapster, we have influxDB (https://influxdata.com) for storage, and Grafana (http://grafana.org) as the frontend dashboard, which visualizes the status of resources in several user-friendly plots. Install monitoring cluster If you have gone through the preceding section about the prerequisite DNS server, you must be very familiar with deploying the system with official add-on templates. Let's go check the directory cluster-monitoring under <KUBERNETES_HOME>/cluster/addons. There are different environments provided for deploying monitoring cluster. We choose influxdb in this recipe for demonstration: # cd /opt/kubernetes/cluster/addons/cluster-monitoring/influxdb && ls grafana-service.yaml      heapster-service.yaml             influxdb-service.yaml heapster-controller.yaml  influxdb-grafana-controller.yaml Under this directory, you can see three templates for services and two for replication controllers. We will retain most of the service templates as the original ones. Because these templates define the network configurations, it is fine to use the default settings but expose Grafana service: # cat heapster-service.yaml apiVersion: v1 kind: Service metadata:   name: monitoring-grafana   namespace: kube-system   labels:     kubernetes.io/cluster-service: "true"     kubernetes.io/name: "Grafana" spec:   type: NodePort   ports:     - port: 80       nodePort: 30000       targetPort: 3000   selector:     k8s-app: influxGrafana As you can find, we expose Grafana service with port 30000. This revision will let us be able to access the dashboard of monitoring from browser. On the other hand, the replication controller of Heapster and the one combining influxDB and Grafana require more additional editing to meet our Kubernetes system: # cat influxdb-grafana-controller.yaml (Ignored above lines) : - image: gcr.io/google_containers/heapster_grafana:v2.6.0-2           name: grafana           env:           resources:             # keep request = limit to keep this container in guaranteed class             limits:               cpu: 100m               memory: 100Mi             requests:               cpu: 100m               memory: 100Mi           env:             # This variable is required to setup templates in Grafana.             - name: INFLUXDB_SERVICE_URL               value: http://monitoring-influxdb.kube-system:8086             - name: GF_AUTH_BASIC_ENABLED               value: "false"             - name: GF_AUTH_ANONYMOUS_ENABLED               value: "true"             - name: GF_AUTH_ANONYMOUS_ORG_ROLE               value: Admin             - name: GF_SERVER_ROOT_URL               value: / : (Ignored below lines) For the container of Grafana, please change some environment variables. The first one is the URL of influxDB service. Since we set up the DNS server, we don't have to specify the particular IP address. But an extra-postfix domain should be added. It is because the service is created in the namespace kube-system. Without adding this postfix domain, DNS server cannot resolve monitoring-influxdb in the default namespace. Furthermore, the Grafana root URL should be changed to a single slash. 
Instead of the default URL, the root (/) makes Grafana serve the correct webpage in the current system. In the template of Heapster, we run two Heapster containers in a pod. These two containers use the same image and have similar settings, but they actually take on different roles. We will just take a look at one of them as an example of the modifications: # cat heapster-controller.yaml (Ignore above lines) :       containers:         - image: gcr.io/google_containers/heapster:v1.0.2           name: heapster           resources:             limits:               cpu: 100m               memory: 200Mi             requests:               cpu: 100m               memory: 200Mi           command:             - /heapster             - --source=kubernetes:<MASTER_ENDPOINT_URL>:<EXPOSED_PORT>?inClusterConfig=false             - --sink=influxdb:http://monitoring-influxdb.kube-system:8086             - --metric_resolution=60s : (Ignore below lines) At the beginning, remove all of the double-curly-brace lines. These lines will cause a creation error, since they cannot be parsed as valid YAML. Still, there are two input variables that need to be replaced with real values: replace {{ metrics_memory }} and {{ eventer_memory }} with 200Mi. The value of 200 MiB is a guaranteed amount of memory that the container can have. Also, please change the settings for the Kubernetes source: we specify the full access URL and port, and set inClusterConfig to false to refrain from in-cluster authentication. Remember to make the adjustment on both the heapster and eventer containers. At last, you can now create these items with simple commands: # kubectl create -f influxdb-service.yaml service "monitoring-influxdb" created # kubectl create -f grafana-service.yaml You have exposed your service on an external port on all nodes in your cluster. If you want to expose this service to the external internet, you may need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic. See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details. service "monitoring-grafana" created # kubectl create -f heapster-service.yaml service "heapster" created # kubectl create -f influxdb-grafana-controller.yaml replicationcontroller "monitoring-influxdb-grafana-v3" created // Because heapster requires the DB server and service to be ready, schedule it as the last one to be created. # kubectl create -f heapster-controller.yaml replicationcontroller "heapster-v1.0.2" created Check your Kubernetes resources in the namespace kube-system: # kubectl get svc --namespace=kube-system NAME                  CLUSTER-IP        EXTERNAL-IP   PORT(S)             AGE heapster              192.168.135.85    <none>        80/TCP              12m kube-dns              192.168.0.2       <none>        53/UDP,53/TCP       15h monitoring-grafana    192.168.84.223    nodes         80/TCP              12m monitoring-influxdb   192.168.116.162   <none>        8083/TCP,8086/TCP   13m # kubectl get pod --namespace=kube-system NAME                                   READY     STATUS    RESTARTS   AGE heapster-v1.0.2-r6oc8                  2/2       Running   0          4m kube-dns-v11-k81cm                     4/4       Running   0          15h monitoring-influxdb-grafana-v3-d6pcb   2/2       Running   0          12m Congratulations! Once you have all the pods in a ready state, let's check the monitoring dashboard. Introduce Grafana dashboard At this moment, the Grafana dashboard is available through the nodes' endpoints.
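Before opening the dashboard, if you would rather script the pod readiness check above than read kubectl output by hand, a rough equivalent using the official Kubernetes Python client is sketched below. The client library and kubeconfig access are assumptions on our part; the recipe itself only uses kubectl.

```python
# Hypothetical scripted check of the monitoring pods, mirroring
# `kubectl get pod --namespace=kube-system` above. Assumes the `kubernetes`
# Python client is installed and a working kubeconfig is present.
from kubernetes import client, config

config.load_kube_config()          # reads ~/.kube/config, like kubectl does
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="kube-system").items:
    name = pod.metadata.name
    phase = pod.status.phase
    if name.startswith(("heapster", "kube-dns", "monitoring-")):
        print(name, phase)         # expect every monitoring pod to be Running
```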
Please make sure that the node's firewall, or the security group on AWS, has opened port 30000 to your local subnet. Take a look at the dashboard in a browser by typing <NODE_ENDPOINT>:30000 in the address bar. In the default setting, we have two dashboards: Cluster and Pods. The Cluster board covers the nodes' resource utilization, such as CPU, memory, network transactions, and storage. The Pods dashboard has similar plots for each pod, and you can drill down into each container in a pod. For example, we can observe the memory utilization of individual containers in the pod kube-dns-v11, which is the DNS server of the cluster. The purple lines in the middle indicate the limits we set on the containers skydns and kube2sky. Create a new metric to monitor a pod There are several metrics for monitoring offered by Heapster (https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md). We are going to show you how to create a customized panel by yourself. Please take the following steps as a reference: Go to the Pods dashboard and click on ADD ROW at the bottom of the webpage. A green button will show up on the left-hand side; choose to add a graph panel. First, give your panel a name, for example, CPU Rate. We would like to create one showing the rate of CPU utilization. Set up the parameters in the query as follows:
FROM: For this parameter, input cpu/usage_rate
WHERE: For this parameter, set type = pod_container
AND: Set this parameter with the namespace_name=$namespace, pod_name=$podname values
GROUP BY: Enter tag(container_name) for this parameter
ALIAS BY: For this parameter, input $tag_container_name
Good job! You can now save the panel by clicking on the save icon in the upper bar. Just try to discover more functionality of the Grafana dashboard and the Heapster monitoring tool. You will gain more understanding of your system, services, and containers through the information from the monitoring system. Summary This recipe showed you how to monitor your master and nodes in the Kubernetes system. Kubernetes is a project that keeps moving forward and upgrading at a fast pace. The recommended way of catching up is to check out the new features on its official website, http://kubernetes.io; you can also always get the latest Kubernetes releases on GitHub at https://github.com/kubernetes/kubernetes/releases. Keeping your Kubernetes system up to date and learning new features hands-on is the best way to keep up with the Kubernetes technology.


Identity and Access-Management Solutions for the IoT

Packt
02 Jun 2016
18 min read
In this article by Drew Van Duren and Brian Russell, the authors of the book Practical Internet of Things Security, we'll have a look at how establishing a structured identity namespace will significantly help manage the identities of the thousands to millions of devices that will eventually be added to your organization. Establishing naming conventions and uniqueness requirements Uniqueness is a feature that can be randomized or deterministic (for example, algorithmically sequenced); its only requirement is that there are no others identical to it. The simplest unique identifier is a counter: each value is assigned and never repeats itself. Another is a static value in concert with a counter, for example, a device manufacturer ID plus a product line ID plus a counter. In many cases, a random value is used in concert with static and counter fields. Nonrepetition is generally not enough from the manufacturer's perspective. Usually, something needs a name that provides some context. To this end, manufacturer-unique fields may be added in a variety of ways unique to the manufacturer or in conformance with an industry convention. Uniqueness may also be fulfilled by using a universally unique identifier (UUID), for which the UUID standard specified in RFC 4122 applies. No matter the mechanism, so long as a device is able to be provisioned, an identifier that is nonrepeating and unique to its manufacturer, use, application, or a hybrid of all these should be acceptable for use in identity management. Beyond the mechanisms, the only thing to be careful about is that the combination of all possible identifiers within a statically specified ID length should not be exhausted prematurely if at all possible. Once a method for assigning uniqueness to your IoT devices is established, the next step is to be able to logically identify the assets within their area of operation in order to support authentication and access-control functions. Naming a device Every time you access a restricted computing resource, your identity is checked to ensure that you are authorized to access that specific resource. There are many ways in which this can occur, but the end result of a successful implementation is that someone who does not have the right credentials is not allowed access. Although the process sounds simple, there are a number of difficult challenges that must be overcome when discussing identity and access management for the constrained and numerous devices that comprise the IoT. One of the first challenges is related to the identity itself. Although identity may seem straightforward to you—your name, for example—that identity must be translated into a piece of information that the computing resource (or access-management system) understands. The identity must also not be duplicated across the information domain. Many computer systems today rely on a username, where each username within a domain is distinct. The username could be something as simple as <lastname_firstname_middleinitial>. In the case of the IoT, understanding what identities, or names, to provision to a device can cause confusion. As discussed, in some systems, devices use unique identifiers such as UUIDs or Electronic Serial Numbers (ESNs).
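As a quick illustration of these identifier options, the short Python sketch below mints both a random RFC 4122 UUID and a hypothetical structured identifier (manufacturer ID plus product line plus counter); the field names and widths are illustrative assumptions, not an industry convention.

```python
# Illustrative only: two ways of minting unique device identifiers.
# The structured layout (manufacturer / product line / counter widths)
# is a made-up example, not a prescribed scheme.
import itertools
import uuid

# Option 1: a random, universally unique RFC 4122 identifier
device_uuid = uuid.uuid4()
print(device_uuid)

# Option 2: a deterministic manufacturer ID + product line + counter
MFG_ID = "ACME"        # hypothetical manufacturer code
PRODUCT_LINE = "TH01"  # hypothetical product line (temperature sensor)
counter = itertools.count(1)

def next_device_id():
    # Zero-padded counter keeps the ID length static; make the width large
    # enough that the namespace is not exhausted prematurely.
    return "{}-{}-{:08d}".format(MFG_ID, PRODUCT_LINE, next(counter))

print(next_device_id())   # ACME-TH01-00000001
```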
We can see a good illustration of device naming by looking at how Amazon's first implementation of its IoT service makes use of IoT device serial numbers. Amazon IoT includes a thing registry service that allows an administrator to register IoT devices, capturing for each the name of the thing and various attributes of it. The attributes can include data items such as manufacturer, type, serial number, deployment_date, and location. Note that such attributes can be used in what is called attribute-based access control (ABAC). ABAC approaches allow access decision policies to be defined not just by the identity of the device but also by its properties (attributes). Rich, potentially complex rules can be defined for the needs at hand. The following figure provides a view of the AWS IoT service: Even when identifiers such as UUIDs or ESNs are available for an IoT device, these are generally not sufficient for securing authentication and access-control decisions; an identifier can easily be spoofed without enhancement through cryptographic controls. In these instances, administrators must bind another type of identifier to a device. This binding can be as simple as associating a password with the identifier or, more appropriately, using credentials such as digital certificates. IoT messaging protocols frequently include the ability to transmit a unique identifier. For example, MQTT includes a ClientID field that can transmit a broker-unique client identifier. In the case of MQTT, the ClientID value is used to maintain state within a unique broker-client communication session. Secure bootstrap Nothing is worse for security than an IoT-enabled system or network replete with false identities used in acts of identity theft, loss of private information, spoofing, and general mayhem. However, a difficult task in the identity lifecycle is to establish the initial trust in the device that allows it to bootstrap itself into the system. Among the greatest vulnerabilities to secure identity and access management is insecure bootstrapping. Bootstrapping represents the beginning of the process of provisioning a trusted identity to a device within a given system. Bootstrapping may begin in the manufacturing process (for example, in the foundry manufacturing a chip) and be completed once delivered to the end operator. It may also be completely performed in the hands of the end user or some intermediary (for example, the depot or supplier) once delivered. The most secure bootstrapping methods start in the manufacturing processes and implement discrete security associations throughout the supply chain. They uniquely identify a device through:
Unique serial numbers printed on the device.
Unique and unalterable identifiers stored and fused in device read-only memory (ROM).
Manufacturer-specific cryptographic keys used only through specific lifecycle states to securely hand off the bootstrapping process to follow-on lifecycle states (for example, shipping, distribution, and handoff to an enrollment center). Such keys (frequently delivered out of band) are used for loading subsequent components by specific entities responsible for preparing the device.
PKIs are often used to aid in the bootstrap process. Bootstrapping from a PKI perspective should generally involve the following processes: Devices should be securely shipped from the manufacturer (via secure shipping services capable of tamper detection) to a trusted facility or depot. The facility should have robust physical security access controls, record-keeping, and audit processes in addition to highly vetted staff. Device counts and batches should be matched against the shipping manifest.
Once they have been received, the steps for each device include:
Authenticating the device uniquely, using a customer-specific, default manufacturer authenticator (password or key).
Installing PKI trust anchors and any intermediate public key certificates (for example, those of the registration authority, enrollment certificate authority, or other roots).
Installing minimal network reachability information such that the device knows where to check certificate revocation lists, perform OCSP lookups, or perform other security-related functions.
Provisioning the device PKI credentials (public key signed by the CA) and private key(s) such that other entities possessing the signing CA keys can trust the new device.
A secure bootstrapping process may not be identical to that described in the preceding list but should be one that mitigates the following types of threats and vulnerabilities when provisioning devices:
Insider threats designed to introduce new, rogue, or compromised devices (which should not be trusted)
Duplication (cloning) of devices, no matter where in the lifecycle
Introduction of public key trust anchors or other key material into a device that should not be trusted (rogue trust anchors and other keys)
Compromise (including replication) of a new IoT device's private keys during key generation or import into the device
Gaps in device possession during the supply chain and enrollment processes
Protection of the device when re-keying and assigning new identification material needed for normal use (re-bootstrapping as needed)
Given the security-critical features of smart chip cards and their use in sensitive financial operations, the smartcard industry adopted rigid enrollment process controls not unlike those described above. Without them, severe attacks would have the potential of crippling the financial industry. Granted, many consumer-level IoT devices are unlikely to have secure bootstrap processes, but over time, the authors believe that this will change, depending on the deployment environment and the stakeholders' appreciation of the threats. The more connected devices become, the greater their potential to do harm. In practice, secure bootstrapping processes need to be tailored to the threat environment of the particular IoT device, its capabilities, and the network environment in question. The greater the potential risks, the more strict and thorough the bootstrapping process needs to be. Credential and attribute provisioning Once the foundation for identities within the device is laid, the provisioning of operational credentials and attributes can occur. These are the credentials that will be used within an IoT system for secure communications, authentication, and integrity protections. The authors strongly recommend using certificates for authentication and authorization whenever possible. If using certificates, an important and security-relevant consideration is whether to generate the key pairs on the device itself or centrally. Some IoT services allow the central generation (for example, by a key server) of public/private key pairs. While this can be an efficient method of bulk-provisioning thousands of devices with credentials, care should be taken to address potential vulnerabilities the process may expose (that is, the sending of sensitive private key material through intermediary devices/systems). If centralized generation is used, it should make use of a strongly secured key-management system operated by vetted personnel in secured facilities.
Another means of provisioning certificates is through local generation of the key pairs (directly on the IoT device) followed by transmission of the public key to the PKI in a certificate-signing request. Absent well-secured bootstrapping procedures, additional policy controls will have to be established for the PKI's registration authority (RA) in order to verify the identity of the device being provisioned. In general, the more secure the bootstrapping process, the more automated the provisioning can be. The following is a sequence diagram that depicts an overall registration, enrollment, and provisioning flow for an IoT device: Local access There are times when local access to the device is required for administration purposes. This may require the provisioning of SSH keys or administrative passwords. In the past, organizations frequently made the mistake of sharing administrative passwords to allow ease of access to devices. This is not a recommended approach, although implementing a federated access solution for administrators can be daunting. This is especially true when devices are spread across wide geographic distances, such as the various sensors, gateways, and other unattended devices in the transportation industry. Account monitoring and control After accounts and credentials have been provisioned, accounts must continue to be monitored against defined security policies. It is also important that organizations monitor the strength of the credentials (that is, cryptographic cipher suites and key lengths) provisioned to IoT devices across their infrastructure. It is highly likely that pockets of teams will provision IoT subsystems on their own; therefore, defining, communicating, and monitoring the required security controls to apply to those systems is vital. Another aspect of monitoring relates to tracking the use of accounts and credentials. Assign someone to audit local IoT device administrative credential (passwords and SSH keys) use on a routine basis. Also, strongly consider whether privileged account-management tools can be applied to your IoT deployment. These tools have features such as checking out administrative passwords to aid in audit processes. Account updates Credentials must be rotated on a regular basis; this is true for certificates and keys as well as passwords. Logistical impediments have historically hampered IT organizations' willingness to shorten certificate lifetimes and manage increasing numbers of credentials. There is a tradeoff to consider: short-lived credentials have a reduced attack footprint, yet the process of changing them tends to be expensive and time consuming. Whenever possible, look for automated solutions for these processes. Services such as Let's Encrypt (https://letsencrypt.org/) are gaining popularity in helping improve and simplify certificate-management practices for organizations. Let's Encrypt provides PKI services along with an extremely easy-to-use plugin-based client that supports various platforms. Account suspension Just as with user accounts, do not automatically delete IoT device accounts. Consider maintaining those accounts in a suspended state in case data tied to the accounts is required for forensic analysis at a later time. Account/credential deactivation and deletion Deleting accounts used by IoT devices and the services they interact with will help combat the ability of an adversary to use those accounts to gain access after the devices have been decommissioned.
Keys used for encryption (whether network or application) should also be deleted to keep adversaries from decrypting captured data later using those recovered keys. Authentication credentials IoT messaging protocols often support the ability to use different types of credentials for authentication with external services and other IoT devices. This section examines the typical options available for these functions. Passwords Some protocols, such as MQTT, only provide the ability to use a username/password combination for native protocol authentication purposes. Within MQTT, the CONNECT message includes the fields for passing this information to an MQTT broker. In the MQTT version 3.1.1 specification defined by OASIS, you can see these fields within the CONNECT message (http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html): Note that the MQTT protocol applies no protections of its own to support the confidentiality of the username/password in transit. Instead, implementers should consider using the Transport Layer Security (TLS) protocol to provide cryptographic protections. There are numerous security considerations related to using a username/password-based approach for IoT devices. Some of these concerns include: Difficulty in managing large numbers of device usernames and passwords Difficulty securing the passwords stored on the devices themselves Difficulty managing passwords throughout the device lifecycle Though not ideal, if you do plan on implementing a username/password system for IoT device authentication, consider taking these precautions: Create policies and procedures to rotate passwords at least every 30 days for each device. Better yet, implement a technical control wherein the management interface automatically prompts you when password rotation is needed. Establish controls for monitoring device account activity. Establish controls for privileged accounts that support administrative access to IoT devices. Segregate the password-protected IoT devices into less-trusted networks. Symmetric keys Symmetric key material may also be used to authenticate. Message authentication codes (MACs) are generated using a MAC algorithm (such as HMAC and CMAC) with a shared key and known data (signed by the key). On the receiving side, an entity can prove that the sender possessed the preshared key when its computed MAC is shown to be identical to the received MAC. Unlike a password, symmetric keys do not require the key to be sent between the parties at the time of the authentication event (it is instead shared ahead of time or agreed on using a key-establishment protocol). The keys will either need to be established using a public key algorithm, input out of band, or sent to the devices ahead of time, encrypted using key encryption keys (KEKs). Certificates Digital certificates, based on public keys, are the preferred method of providing authentication functionality in the IoT. Although some implementations today may not support the processing capabilities needed to use certificates, Moore's law for computational power and storage is fast changing this. X.509 Certificates come with a highly organized hierarchical naming structure that consists of organizations, organizational units, and distinguished names (DNs) or common names (CNs). Referencing AWS support for provisioning X.509 certificates, we can see that AWS allows one-click generation of a device certificate. In the following example, we generate a device certificate with a generic IoT device common name and a lifetime of 33 years.
The one-click generation also (centrally) creates the public/private key pair. If possible, it is recommended that you generate your certificates locally by generating a key pair on the devices and uploading a CSR to the AWS IoT service. This enables the customized tailoring of the certificate policy in order to define the hierarchical units (OU, DN, and so on) that are useful for additional authorization processes. IEEE 1609.2 The IoT is characterized by many use cases involving machine-to-machine communication, and some of them involve communications through the congested wireless spectrum. Take connected vehicles, for instance: an emerging technology wherein your vehicle will possess onboard equipment (OBE) that frequently and automatically alerts other drivers in your vicinity to your car's location in the form of basic safety messages (BSMs). The automotive industry, the US Department of Transportation (USDOT), and academia have been developing connected vehicle (CV) technology for many years, and it will make its commercial debut in the 2017 Cadillac. In a few years, it is likely that most new US vehicles will be outfitted with the technology. It will not only enable vehicle-to-vehicle communications but also vehicle-to-infrastructure (V2I) communications to various roadside and backhaul applications. The Dedicated Short Range Communications (DSRC) wireless protocol (based on IEEE 802.11p) is limited to a narrow set of channels in the 5-GHz frequency band. To accommodate so many vehicles and maintain security, it was necessary to secure the communications using cryptography (to reduce malicious spoofing or eavesdropping attacks) and minimize the security overhead within connected vehicles' BSM transmissions. The industry resolved to a new, slimmer, and sleeker digital certificate design: IEEE 1609.2. The 1609.2 certificate format is advantageous in that it is approximately half the size of a typical X.509 certificate while still using strong, elliptic curve cryptographic algorithms (ECDSA and ECDH). The certificate is also useful for general machine-to-machine communication through its unique attributes, including explicit application identifier (PSID) and credential holder permission (SSP) fields. These attributes can allow IoT applications to make explicit access-control decisions without having to internally or externally query the credential holder's permissions. They're embedded right in the certificate during the secure, integrated bootstrap and enrollment process with the PKI. The reduced size of these credentials also makes them attractive for other, bandwidth-constrained wireless protocols. Biometrics There is work being done in the industry today on new approaches that leverage biometrics for device authentication. The FIDO Alliance (www.fidoalliance.org) has developed specifications that define the use of biometrics both for a passwordless experience and for use as a second authentication factor. Authentication can include a range of flexible biometric types, from fingerprints to voiceprints. Biometric authentication is already being added to some commercial IoT devices (for example, consumer door locks), and there is interesting potential in leveraging biometrics as a second factor of authentication for IoT systems. As an example, voiceprints can be used to enable authentication across a set of distributed IoT devices such as roadside equipment (RSE) in the transportation sector. This would allow an RSE tech to access the device through a cloud connection to the backend authentication server.
Companies like Hypr Biometric Security (https://www.hypr.com/) are leading the way toward using this technology to reduce the need for passwords and enable more robust authentication techniques. New work in authorization for the IoT Progress toward using tokens with resource-constrained IoT devices has not fully matured; however, there are organizations working on defining the use of protocols such as OAuth 2.0 for the IoT. One such group is the Internet Engineering Task Force (IETF), through the Authentication and Authorization for Constrained Environments (ACE) effort. ACE has specified RFC 7744, Use Cases for Authentication and Authorization in Constrained Environments (https://datatracker.ietf.org/doc/rfc7744/). The RFC use cases are primarily based on IoT devices that employ CoAP as the messaging protocol. The document provides a useful set of use cases that clarify the need for a comprehensive IoT authentication and authorization strategy. RFC 7744 provides valuable considerations for the authentication and authorization of IoT devices, including these: Devices may host several resources, wherein each requires its own access-control policy. A single device may have different access rights for different requesting entities. Policy decision points must be able to evaluate the context of a transaction. This includes the potential for understanding that a transaction is occurring during an emergency situation. The ability to dynamically control authorization policies is critical to supporting the dynamic environment of the IoT. IoT IAM infrastructure Now that we have addressed many of the enablers of identity and access management, it is important to elaborate on how solutions are realized in infrastructures. This section is primarily devoted to public key infrastructures (PKIs) and their utility in securing IAM deployments for the IoT. 802.1x 802.1x authentication mechanisms can be employed to limit IP-based IoT device access to a network. Note, though, that not all IoT devices rely on the provisioning of an IP address. While it cannot accommodate all IoT device types, implementing 802.1x is a component of a good access-control strategy that addresses many use cases. Enabling 802.1x authentication requires an access device and an authentication server. The access device is typically an access point, and the authentication server can take the form of a RADIUS or authentication, authorization, and accounting (AAA) server. Summary This article provided an introduction to the infrastructure components required for provisioning authentication credentials, with a heavy focus on PKI. A look at different types of authentication credentials was given, and new approaches to providing authorization and access control for IoT devices were also discussed. Resources for Article: Further resources on this subject: Internet of Things with BeagleBone [article] The Internet of Things [article] Internet of Things with Xively [article]

Linking Data to Shapes

Packt
01 Jun 2016
7 min read
In this article, David J Parker, the author of the book Mastering Data Visualization with Microsoft Visio Professional 2016, discusses the data-linking feature that Microsoft introduced in the Professional edition of Visio 2007. This feature is better than the database add-on that has been around since Visio 4 because it has greater importing capabilities and is part of the core product with its own API. This provides the Visio user with a simple method of surfacing data from a variety of data sources, and it gives the power user (or developer) the ability to create productivity enhancements in code. (For more resources related to this topic, see here.) Once data is imported in Visio, the rows of data can be linked to shapes and then displayed visually, or be used to automatically create hyperlinks. Moreover, if the data is edited outside of Visio, then the data in the Visio shapes can be refreshed, so the shapes reflect the updated data. This can be done in the Visio client, but some data sources can also refresh the data in Visio documents that are displayed in SharePoint web pages. In this way, Visio documents truly become operational intelligence dashboards. Some VBA knowledge will be useful, and the sample data sources are introduced in each section. In this article, we shall cover the following topics: The new Quick Import feature Importing data from a variety of sources How to link shapes to rows of data Using code for more linking possibilities A very quick introduction to importing and linking data Visio Professional 2016 added more buttons to the Data ribbon tab, and some new Data Graphics, but the functionality has basically been the same since Visio 2007 Professional. The new additions, as seen in the following screenshot, can make this particular ribbon tab quite wide on the screen. Thank goodness that wide screens have become the norm: The process to create data-refreshable shapes in Visio is simply as follows: Import data as recordsets. Link rows of data to shapes. Make the shapes display the data. Use any hyperlinks that have been created automatically. The Quick Import tool introduced in Visio 2016 Professional attempts to merge the first three steps into one, but it rarely gets it perfectly, and it is only for simple Excel data sources. Therefore, it is necessary to learn how to use the Custom Import feature properly. Knowing when to use the Quick Import tool The Data | External Data | Quick Import button is new in Visio 2016 Professional. It is not part of the Visio API, so it cannot be called in code. This is not a great problem because it is only a wrapper for some of the actions that can be done in code anyway. This feature can only use an Excel workbook, but fortunately Visio installs a sample OrgData.xls file in the Visio Content\<LCID> folder. The LCID (locale identifier) for US English is 1033, as shown in the following screenshot: The screenshot shows a Visio Professional 2016 32-bit installation on a Windows 10 64-bit laptop. Therefore, the Office16 applications are installed in the Program Files (x86)\root folder. It would just be Program Files\root if the 64-bit version of Office was installed. It is not possible to install a different bit version of Visio than the rest of the Office applications. There is no root folder in previous versions of Office, but the rest of the path is the same.
The full path on this laptop is C:\Program Files (x86)\Microsoft Office\root\Office16\Visio Content\1033\ORGDATA.XLS, but it is best to copy this file to a folder where it can be edited. It is surprising that the Excel workbook is in the old binary format, but it is a simple process to open and save it in the new Open Packaging Convention file format with an xlsx extension. Importing to shapes without existing Shape Data rows The following example contains three Person shapes from the Work Flow Objects stencil, and each one contains the name of a person, spelt exactly the same as in the key column on the Excel worksheet. It is not case sensitive, and it does not matter whether there are leading or trailing spaces in the text. When the Quick Import button is pressed, a dialog opens up to show the progress of the stages that the wizard feature is going through, as shown in the following screenshot: If the workbook contains more than one table of data, the user is prompted to select the range of cells within the workbook. When the process is complete, each of the Person shapes contains all of the data from the row in the External Data recordset where the text matches the Name column, as shown in the following screenshot: The linked rows in the External Data window also display a chain icon, and the right-click menu has many actions, such as selecting the Linked Shapes for a row. Conversely, each shape now contains a right-mouse menu action to select the linked row in an External Data recordset. The Quick Import feature also adds some default data graphics to each shape, which will be ignored in this article because it is explored in detail in Chapter 4, Using the Built-in Data Graphics. Note that the recordset in the External Data window is named Sheet1$A1:H52. This is not perfect, but the user can rename it through the right-mouse menu actions of the tab, using the Properties dialog, as seen in the following screenshot: The user can also choose what to do if a data link is added to a shape that already has one. A shape can be linked to a single row in multiple recordsets, and a single row can be linked to multiple shapes in a document, or even on the same page. However, a shape cannot be linked to more than one row in the same recordset. Importing to shapes with existing Shape Data rows The Person shape from the Resources stencil has been used in the following example, and as earlier, each shape has the name text. However, in this case, there are some existing Shape Data rows: When the Quick Import feature is run, the data is linked to each shape where the text matches the Name column value. This feature has unfortunately created a problem this time because the Phone Number, E-mail Alias, and Manager Shape Data rows have remained empty, but the superfluous Telephone, E-mail, and Reports_To have been added. The solution is to edit the column headers in the worksheet to match the existing Shape Data row labels, as shown in the following screenshot: Then, when Quick Import is used again, the column headers will match the Shape Data row names, and the data will be automatically cached into the correct places, as shown in the following screenshot: Using the Custom Import feature The user has more control using the Custom Import button on the Data | External Data ribbon tab. This button was called Link Data to Shapes in the previous versions of Visio.
In either case, the action opens the Data Selector dialog, as shown in the following screenshot: Each of these data sources will be explained in this chapter, along with the two data sources that are not available in the UI (namely XML files and SQL Server Stored Procedures). Summary This article has gone through the many different sources for importing data in Visio and has shown how each can be done. Resources for Article: Further resources on this subject: Overview of Process Management in Microsoft Visio 2013[article] Data Visualization[article] Data visualization[article]

Understanding UIKit Fundamentals

Packt
01 Jun 2016
9 min read
In this article by Jak Tiano, author of the book Learning Xcode, we're mostly going to be talking about concepts rather than concrete code examples. Since we've been using UIKit throughout the whole book (and we will continue to do so), I'm going to do my best to elaborate on some things we've already seen and give you new information that you can apply to what we do in the future. (For more resources related to this topic, see here) As we've heard a lot about UIKit. We've seen it at the top of our Swift files in the form of import UIKit. We've used many of the UI elements and classes it provides for us. Now, it's time to take an isolated look at the biggest and most important framework in iOS development. Application management Unlike most other frameworks in the iOS SDK, UIKit is deeply integrated into the way your app runs. That's because UIKit is responsible for some of the most essential functionalities of an app. It also manages your application's window and view architecture, which we'll be talking about next. It also drives the main run loop, which basically means that it is executing your program. The UIDevice class In addition to these very important features, UIKit also gives you access to some other useful information about the device the app is currently running on through the UIDevice class. Using online resources and documentation: Since this article is about exploring frameworks, it is a good time to remind you that you can (and should!) always be searching online for anything and everything. For example, if you search for UIDevice, you'll end up on Apple's developer page for the UIDevice class, where you can see even more bits of information that you can pull from it. As we progress, keep in mind that searching the name of a class or framework will usually give you quick access to the full documentation. Here are some code examples of the information you can access: UIDevice.currentDevice().name UIDevice.currentDevice().model UIDevice.currentDevice().orientation UIDevice.currentDevice().batteryLevel UIDevice.currentDevice().systemVersion Some developers have a little bit of fun with this information: for example, Snapchat gives you a special filter to use for photos when your battery is fully charged.Always keep an open mind about what you can do with data you have access to! Views One of the most important responsibilities of UIKit is that it provides views and the view hierarchy architecture. We've talked before about what a view is within the MVC programming paradigm, but here we're referring to the UIView class that acts as the base for (almost) all of our visual content in iOS programming. While it wasn't too important to know about when just getting our feet wet, now is a good time to really dig in a bit and understand what UIViews are and how they work both on their own and together. Let's start from the beginning: a view (UIView) defines a rectangle on your screen that is responsible for output and input, meaning drawing to the screen and receiving touch events.It can also contain other views, known as subviews, which ultimately create a view hierarchy. As a result of this hierarchy, we have to be aware of the coordinate systems involved. Now, let's talk about each of these three functions: drawing, hierarchies, and coordinate systems. Drawing Each UIView is responsible for drawing itself to the screen. In order to optimize drawing performance, the views will usually try to render their content once and then reuse that image content when it doesn't change. 
It can even move and scale content around inside of it without needing to redraw, which can be an expensive operation: An overview of how UIView draws itself to the screen With the system provided views, all of this is handled automatically. However, if you ever need to create your own UIView subclass that uses custom drawing, it's important to know what goes on behind the scenes. To implement custom drawing in a view, you need to implement the drawRect() function in your subclass. When something changes in your view, you need to call the setNeedsDisplay() function, which acts as a marker to let the system know that your view needs to be redrawn. During the next drawing cycle, the code in your drawRect() function will be executed to refresh the content of your view, which will then be cached for performance. A code example of this custom drawing functionality is a bit beyond the scope of this article, but discussing this will hopefully give you a better understanding of how drawing works in addition to giving you a jumping off point should you need to do this in the future. Hierarchies Now, let's discuss view hierarchies. When we would use a view controller in a storyboard, we would drag UI elements onto the view controller. However, what we were actually doing is adding a subview to the base view of the view controller. And in fact, that base view was a subview of the UIWindow, which is also a UIView. So, though, we haven't really acknowledged it, we've already put view hierarchies to work many times. The easiest way to think about what happens in a view hierarchy is that you set one view's parent coordinate system relative to another view. By default, you'd be setting a view's coordinate system to be relative to the base view, which is normally just the whole screen. But you can also set the parent coordinate system to some other view so that when you move or transform the parent view, the children views are moved and transformed along with it. Example of how parenting works with a view hierarchy. It's also important to note that the view hierarchy impacts the draw order of your views. All of a view's subviews will be drawn on top of the parent view, and the subviews will be drawn in the order they were added (the last subview added will be on top). To add a subview through code, you can use the addSubview() function. Here's an example: var view1 = UIView() var view2 = UIView() view1.addSubview(view2) The top-most views will intercept a touch first, and if it doesn't respond, it will pass it down the view hierarchy until a view does respond. Coordinate systems With all of this drawing and parenting, we need to take a minute to look at how the coordinate system works in UIKit for our views.The origin (0,0 point) in UIKit is the top left of the screen, and increases along X to the right, and increases on the Y downward. Each view is placed in this upper-left positioning system relative to its parent view's origin. Be careful! Other frameworks in iOS use different coordinate systems. For example, SpriteKit uses the lower-left corner as the origin. Each view also has its own setof positioning information. This is composed of the view's frame, bounds, and center. The frame rectangle describes the origin and the size of view relative to its parent view's coordinate system. The bounds rectangle describes the origin and the size of the view from its local coordinate system. The center is just the center point of the view relative to the parent view. 
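To make these three properties concrete, here is a small illustrative sketch with standalone values you can verify in a playground (it is not taken from any particular project):

```swift
import UIKit

// A 100x100 view positioned at (20, 40) in its parent's coordinate system
let someView = UIView(frame: CGRect(x: 20, y: 40, width: 100, height: 100))

someView.frame   // (20.0, 40.0, 100.0, 100.0): origin and size in the parent's coordinates
someView.bounds  // (0.0, 0.0, 100.0, 100.0): origin and size in the view's own coordinates
someView.center  // (70.0, 90.0): the view's center point, again in the parent's coordinates
```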
When dealing with so many different coordinate systems, it can seem like a nightmare to compare positions from different views. Luckily, the UIView class provides a simple convertPoint()function to convert points between systems. Try running this little experiment in a playground to see how the point gets converted from one view's coordinate system to the other: import UIKit let view1 = UIView(frame: CGRect(x: 0, y: 0, width: 50, height: 50)) let view2 = UIView(frame: CGRect(x: 10, y: 10, width: 30, height: 30)) view1.addSubview(view2) let pointFrom1 = CGPoint(x: 20, y: 20) let pointFromView2 = view1.convertPoint(pointFrom1, toView: view2) Hopefully, you now have a much better understanding of some of the underlying workings of the view system in UIKit. Documents, displays, printing, and more In this section, I'm going to do my best to introduce you to the many additional features of the UIKit framework. The idea is to give you a better understanding of what is possible with UIKit, and if anything sounds interesting to you, you can go off and explore these features on your own. Documents UIKit has built in support for documents, much like you'd find on a desktop operating system. Using the UIDocument class, UIKit can help you save and load documents in the background in addition to saving them to iCloud. This could be a powerful feature for any app that allows the user to create content that they expect to save and resume working on later. Displays On most new iOS devices, you can connect external screens via HDMI. You can take advantage of these external displays by creating a new instance of the UIWindow class, and associating it with the external display screen. You can then add subviews to that window to create a secondscreen experience for devices like a bigscreen TV. While most consumers don't ever use HDMI-connected external displays, this is a great feature to keep in mind when working on internal applications for corporate or personal use. Printing Using the UIPrintInteractionController, you can set up and send print jobs to AirPrint-enabled printers on the user's network. Before you print, you can also create PDFs by drawing content off screen to make printing easier. And more! There are many more features of UIKit that are just waiting to be explored! To be honest, UIKit seems to be pretty much a dumping ground for any general features that were just a bit too small to deserve their own framework. If you do some digging in Apple's documentation, you'll find all kinds of interesting things you can do with UIKit, such as creating custom keyboards, creating share sheets, and custom cut-copy-paste support. Summary In this article, we looked at the biggest and most important UIKit and learned about some of the most important system processes like the view hierarchy. Resources for Article:   Further resources on this subject: Building Surveys using Xcode [article] Run Xcode Run [article] Tour of Xcode [article]

Understanding Patterns and Architectures in TypeScript

Packt
01 Jun 2016
19 min read
In this article by Vilic Vane, author of the book TypeScript Design Patterns, we'll study architecture and patterns that are closely related to the language or its common applications. Many topics in this article are related to asynchronous programming. We'll start from a web architecture for Node.js that's based on Promise. This is a larger topic that has interesting ideas involved, including abstractions of response and permission, as well as error handling tips. Then, we'll talk about how to organize modules with ES module syntax. Due to the limited length of this article, some of the related code is aggressively simplified, and nothing more than the idea itself can be applied practically. (For more resources related to this topic, see here.) Promise-based web architecture The most exciting thing about Promise may be the benefits brought to error handling. In a Promise-based architecture, throwing an error could be safe and pleasant. You don't have to explicitly handle errors when chaining asynchronous operations, and this makes it tougher for mistakes to occur. With the growing usage of ES2015-compatible runtimes, Promise is already there out of the box. We actually have plenty of polyfills for Promises (including my ThenFail, written in TypeScript), as people who write JavaScript roughly refer to the same group of people who create wheels. Promises work great with other Promises: A Promises/A+ compatible implementation should work with other Promises/A+ compatible implementations Promises do their best in a Promise-based architecture If you are new to Promise, you may complain about trying Promise with a callback-based project. You may intend to use helpers provided by Promise libraries, such as Promise.all, but it turns out that you have better alternatives, such as the async library. So, the reason that makes you decide to switch should not be these helpers (as there are a lot of them for callbacks). It should be that there's an easier way to handle errors, or that you want to take advantage of the ES async and await features, which are based on Promise. Promisifying existing modules or libraries Though Promises do their best with a Promise-based architecture, it is still possible to begin using Promise with a smaller scope by promisifying existing modules or libraries. Taking Node.js-style callbacks as an example, this is how we use them: import * as FS from 'fs';   FS.readFile('some-file.txt', 'utf-8', (error, text) => { if (error) {     console.error(error);     return; }   console.log('Content:', text); }); You may expect a promisified version of readFile to look like the following: FS .readFile('some-file.txt', 'utf-8') .then(text => {     console.log('Content:', text); }) .catch(reason => {     console.error(reason); }); Implementing the promisified version of readFile can be as easy as the following: function readFile(path: string, options: any): Promise<string> { return new Promise((resolve, reject) => {     FS.readFile(path, options, (error, result) => {         if (error) { reject(error);         } else {             resolve(result);         }     }); }); } I am using any here for the options parameter to reduce the size of the demo code, but I would suggest that you do not use any whenever possible in practice. There are libraries that are able to promisify methods automatically. Unfortunately, you may need to write declaration files yourself for the promisified methods if there is no declaration file of the promisified version available.
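To give a rough idea of what such automatic promisification does under the hood, here is a minimal, hand-rolled helper. It is only a sketch: it assumes Node.js-style error-first callbacks and loses parameter type information, which is exactly why dedicated libraries and hand-written declaration files are still worth the effort.

```typescript
function promisify<T>(fn: Function): (...args: any[]) => Promise<T> {
    return (...args: any[]) => {
        return new Promise<T>((resolve, reject) => {
            // append an error-first callback to whatever arguments were passed in
            fn(...args, (error: any, result: T) => {
                if (error) {
                    reject(error);
                } else {
                    resolve(result);
                }
            });
        });
    };
}

// Usage sketch:
// const readFileAsync = promisify<string>(FS.readFile);
// readFileAsync('some-file.txt', 'utf-8').then(text => console.log('Content:', text));
```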
Views and controllers in Express Many of us may have already been working with frameworks such as Express. This is how we render a view or send back JSON data in Express: import * as Path from 'path'; import * as express from 'express';   let app = express();   app.set('engine', 'hbs'); app.set('views', Path.join(__dirname, '../views'));   app.get('/page', (req, res) => {     res.render('page', {         title: 'Hello, Express!',         content: '...'     }); });   app.get('/data', (req, res) => {     res.json({         version: '0.0.0',         items: []     }); });   app.listen(1337); We will usuallyseparate controller from routing, as follows: import { Request, Response } from 'express';   export function page(req: Request, res: Response): void {     res.render('page', {         title: 'Hello, Express!',         content: '...'     }); } Thus, we may have a better idea of existing routes, and we may have controllers managed more easily. Furthermore, automated routing can be introduced so that we don't always need to update routing manually: import * as glob from 'glob';   let controllersDir = Path.join(__dirname, 'controllers');   let controllerPaths = glob.sync('**/*.js', {     cwd: controllersDir });   for (let path of controllerPaths) {     let controller = require(Path.join(controllersDir, path));     let urlPath = path.replace(/\/g, '/').replace(/.js$/, '');       for (let actionName of Object.keys(controller)) {         app.get(             `/${urlPath}/${actionName}`, controller[actionName] );     } } The preceding implementation is certainly too simple to cover daily usage. However, it displays the one rough idea of how automated routing could work: via conventions that are based on file structures. Now, if we are working with asynchronous code that is written in Promises, an action in the controller could be like the following: export function foo(req: Request, res: Response): void {     Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             res.render('foo', {                 post,                 comments             });         }); } We use destructuring of an array within a parameter. Promise.all returns a Promise of an array with elements corresponding to values of resolvablesthat are passed in. (A resolvable means a normal value or a Promise-like object that may resolve to a normal value.) However, this is not enough, we need to handle errors properly. Or in some case, the preceding code may fail in silence (which is terrible). In Express, when an error occurs, you should call next (the third argument that is passed into the callback) with the error object, as follows: import { Request, Response, NextFunction } from 'express';   export function foo( req: Request, res: Response, next: NextFunction ): void {     Promise         // ...         .catch(reason => next(reason)); } Now, we are fine with the correctness of this approach, but this is simply not how Promises work. Explicit error handling with callbacks could be eliminated in the scope of controllers, and the easiest way to do this is to return the Promise chain and hand over to code that was previously performing routing logic. 
So, the controller could be written like the following: export function foo(req: Request, res: Response) {     return Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             res.render('foo', {                 post,                 comments             });         }); } Or, can we make this even better? Abstraction of response We've already been returning a Promise to tell whether an error occurs. So, for a server error, the Promise actually indicates the result, or in other words, the response of the request. However, why we are still calling res.render()to render the view? The returned Promise object could be an abstraction of the response itself. Think about the following controller again: export class Response {}   export class PageResponse extends Response {     constructor(view: string, data: any) { } }   export function foo(req: Request) {     return Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             return new PageResponse('foo', {                 post,                 comments             });         }); } The response object that is returned could vary for a different response output. For example, it could be either a PageResponse like it is in the preceding example, a JSONResponse, a StreamResponse, or even a simple Redirection. As in most of the cases, PageResponse or JSONResponse is applied, and the view of a PageResponse can usually be implied with the controller path and action name.It is useful to have these two responses automatically generated from a plain data object with proper view to render with, as follows: export function foo(req: Request) {     return Promise         .all([             Post.getContent(),             Post.getComments()         ])         .then(([post, comments]) => {             return {                 post,                 comments             };         }); } This is how a Promise-based controller should respond. With this idea in mind, let's update the routing code with an abstraction of responses. Previously, we were passing controller actions directly as Express request handlers. Now, we need to do some wrapping up with the actions by resolving the return value, and applying operations that are based on the resolved result, as follows: If it fulfills and it's an instance of Response, apply it to the resobjectthat is passed in by Express. If it fulfills and it's a plain object, construct a PageResponse or a JSONResponse if no view found and apply it to the resobject. If it rejects, call thenext function using this reason. As seen previously,our code was like the following: app.get(`/${urlPath}/${actionName}`, controller[actionName]); Now, it gets a little bit more lines, as follows: let action = controller[actionName];   app.get(`/${urlPath}/${actionName}`, (req, res, next) => {     Promise         .resolve(action(req))         .then(result => {             if (result instanceof Response) {                 result.applyTo(res);             } else if (existsView(actionName)) {                 new PageResponse(actionName, result).applyTo(res);             } else {                 new JSONResponse(result).applyTo(res);             }         })         .catch(reason => next(reason)); });   However, so far we can only handle GET requests as we hardcoded app.get() in our router implementation. The poor view matching logic can hardly be used in practice either. 
We need to make these actions configurable, and ES decorators could perform a good job here: export default class Controller { @get({     View: 'custom-view-path' })     foo(req: Request) {         return {             title: 'Action foo',             content: 'Content of action foo'         };     } } I'll leave the implementation to you, and feel free to make them awesome. Abstraction of permission Permission plays an important role in a project, especially in systems that have different user groups. For example, a forum. The abstraction of permission should be extendable to satisfy changing requirements, and it should be easy to use as well. Here, we are going to talk about the abstraction of permission in the level of controller actions. Consider the legibility of performing one or more actions a privilege. The permission of a user may consist of several privileges, and usually most of the users at the same level would have the same set of privileges. So, we may have a larger concept, namely groups. The abstraction could either work based on both groups and privileges, or work based on only privileges (groups are now just aliases to sets of privileges): Abstraction that validates based on privileges and groups at the same time is easier to build. You do not need to create a large list of which actions can be performed for a certain group of user, as granular privileges are only required when necessary. Abstraction that validates based on privileges has better control and more flexibility to describe the permission. For example, you can remove a small set of privileges from the permission of a user easily. However, both approaches have similar upper-level abstractions, and they differ mostly on implementations. The general structure of the permission abstractions that we've talked about is like in the following diagram: The participants include the following: Privilege: This describes detailed privilege corresponding to specific actions Group: This defines a set of privileges Permission: This describes what a user is capable of doing, consist of groups that the user belongs to, and the privileges that the user has. Permission descriptor: This describes how the permission of a user works and consists of possible groups and privileges. Expected errors A great concern that was wiped away after using Promises is that we do not need to worry about whether throwing an error in a callback would crash the application most of the time. The error will flow through the Promises chain and if not caught, it will be handled by our router. Errors can be roughly divided as expected errors and unexpected errors. Expected errors are usually caused by incorrect input or foreseeable exceptions, and unexpected errors are usually caused by bugs or other libraries that the project relies on. For expected errors, we usually want to give users a friendly response with readable error messages and codes. So that the user can help themselves searching the error or report to us with useful context. For unexpected errors, we would also want a reasonable response (usually a message described as an unknown error), a detailed server-side log (including real error name, message, stack information, and so on), and even alerts to let the team know as soon as possible. 
Defining and throwing expected errors The router will need to handle different types of errors, and an easy way to achieve this is to subclass a universal ExpectedError class and throw its instances out, as follows: import ExtendableError from 'extendable-error';   class ExpectedError extends ExtendableError { constructor(     message: string,     public code: number ) {     super(message); } } The extendable-error is a package of mine that handles the stack trace and the message property. You can directly extend the Error class as well. Thus, when receiving an expected error, we can safely output the error name and message as part of the response. If it is not an instance of ExpectedError, we can display a predefined unknown error message instead. Transforming errors Some errors, such as errors caused by unstable networks or remote services, are expected. We may want to catch these errors and throw them out again as expected errors. However, it could be rather tedious to actually do this everywhere. A centralized error-transforming process can then be applied to reduce the effort required to manage these errors. The transforming process includes two parts: filtering (or matching) and transforming. These are the approaches to filter errors: Filter by error class: Many third-party libraries throw errors of a certain class. Taking Sequelize (a popular Node.js ORM) as an example, it has DatabaseError, ConnectionError, ValidationError, and so on. By checking whether errors are instances of a certain error class, we may easily pick up target errors from the pile. Filter by string or regular expression: Sometimes a library might throw errors that are instances of the Error class itself instead of its subclasses. This makes these errors hard to distinguish from others. In this situation, we can filter these errors by their message, with keywords or regular expressions. Filter by scope: It's possible that instances of the same error class with the same error message should result in a different response. One of the reasons may be that the operation throwing a certain error is at a lower level, but it is being used by upper structures within different scopes. Thus, a scope mark can be added to these errors to make them easier to filter. There could be more ways to filter errors, and they are usually able to cooperate as well. By properly applying these filters and transforming errors, we can reduce noise, analyze what's going on within a system, and locate problems faster when they occur. Modularizing project Before ES2015, there were actually a lot of module solutions for JavaScript that worked. The most famous two of them might be AMD and CommonJS. AMD is designed for asynchronous module loading, which is mostly applied in browsers, while CommonJS performs module loading synchronously, which is the way the Node.js module system works. To make it work asynchronously, writing an AMD module takes more characters. Due to the popularity of tools such as browserify and webpack, CommonJS became popular even for browser projects. Proper granularity of internal modules can help a project keep a healthy structure. Consider a project structure like the following:

project
├─controllers
├─core
│  │ index.ts
│  │
│  ├─product
│  │   index.ts
│  │   order.ts
│  │   shipping.ts
│  │
│  └─user
│      index.ts
│      account.ts
│      statistics.ts
│
├─helpers
├─models
├─utils
└─views

Let's assume that we are writing a controller file that's going to import a module defined by the core/product/order.ts file.
Previously, using CommonJS-style require, we would write the following: const Order = require('../core/product/order'); Now, with the new ES import syntax, this would be like the following: import * as Order from '../core/product/order'; Wait, isn't this essentially the same? Sort of. However, you may have noticed several index.ts files that I've put into folders. Now, in the core/product/index.ts file, we could have the following: import * as Order from './order'; import * as Shipping from './shipping';   export { Order, Shipping } Or, we could also have the following: export * from './order'; export * from './shipping'; What's the difference? The idea behind these two approaches of re-exporting modules can vary. The first style works better when we treat Order and Shipping as namespaces, under which the identifier names may not be easy to distinguish from one another. With this style, the files are the natural boundaries of building these namespaces. The second style weakens the namespace property of the two files, and then uses them as tools to organize objects and classes under the same larger category. A good thing about using these files as namespaces is that multiple-level re-exporting is fine, while weakening namespaces makes it harder to understand different identifier names as the number of re-exporting levels grows. Summary In this article, we discussed some interesting ideas and an architecture formed by these ideas. Most of these topics focused on limited examples and did their own jobs. However, we also discussed ideas about putting a whole system together. Resources for Article: Further resources on this subject: Introducing Object Oriented Programming with TypeScript [article] Writing SOLID JavaScript code with TypeScript [article] Optimizing JavaScript for iOS Hybrid Apps [article]

Classifier Construction

Packt
01 Jun 2016
8 min read
In this article by Pratik Joshi, author of the book Python Machine Learning Cookbook, we will build a simple classifier using supervised learning, and then go onto build a logistic-regression classifier. Building a simple classifier In the field of machine learning, classification refers to the process of using the characteristics of data to separate it into a certain number of classes. A supervised learning classifier builds a model using labeled training data, and then uses this model to classify unknown data. Let's take a look at how to build a simple classifier. (For more resources related to this topic, see here.) How to do it… Before we begin, make sure thatyou have imported thenumpy and matplotlib.pyplot packages. After this, let's create some sample data: X = np.array([[3,1], [2,5], [1,8], [6,4], [5,2], [3,5], [4,7], [4,-1]]) Let's assign some labels to these points: y = [0, 1, 1, 0, 0, 1, 1, 0] As we have only two classes, the list y contains 0s and 1s. In general, if you have N classes, then the values in y will range from 0 to N-1. Let's separate the data into classes that are based on the labels: class_0 = np.array([X[i] for i in range(len(X)) if y[i]==0]) class_1 = np.array([X[i] for i in range(len(X)) if y[i]==1]) To get an idea about our data, let's plot this, as follows: plt.figure() plt.scatter(class_0[:,0], class_0[:,1], color='black', marker='s') plt.scatter(class_1[:,0], class_1[:,1], color='black', marker='x') This is a scatterplot where we use squares and crosses to plot the points. In this context,the marker parameter specifies the shape that you want to use. We usesquares to denote points in class_0 and crosses to denote points in class_1. If you run this code, you will see the following figure: In the preceding two lines, we just use the mapping between X and y to create two lists. If you were asked to inspect the datapoints visually and draw a separating line, what would you do? You would simply draw a line in between them. Let's go ahead and do this: line_x = range(10) line_y = line_x We just created a line with the mathematical equation,y = x. Let's plot this, as follows: plt.figure() plt.scatter(class_0[:,0], class_0[:,1], color='black', marker='s') plt.scatter(class_1[:,0], class_1[:,1], color='black', marker='x') plt.plot(line_x, line_y, color='black', linewidth=3) plt.show() If you run this code, you should see the following figure: There's more… We built a really simple classifier using the following rule: the input point (a, b) belongs to class_0 if a is greater than or equal tob;otherwise, it belongs to class_1. If you inspect the points one by one, you will see that this is true. This is it! You just built a linear classifier that can classify unknown data. It's a linear classifier because the separating line is a straight line. If it's a curve, then it becomes a nonlinear classifier. This formation worked fine because there were a limited number of points, and we could visually inspect them. What if there are thousands of points? How do we generalize this process? Let's discuss this in the next section. Building a logistic regression classifier Despite the word regression being present in the name, logistic regression is actually used for classification purposes. Given a set of datapoints, our goal is to build a model that can draw linear boundaries between our classes. It extracts these boundaries by solving a set of equations derived from the training data. 
Let's see how to do that in Python: We will use the logistic_regression.pyfile that is already provided to you as a reference. Assuming that you have imported the necessary packages, let's create some sample data along with training labels: X = np.array([[4, 7], [3.5, 8], [3.1, 6.2], [0.5, 1], [1, 2], [1.2, 1.9], [6, 2], [5.7, 1.5], [5.4, 2.2]]) y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2]) Here, we assume that we have three classes. Let's initialize the logistic regression classifier: classifier = linear_model.LogisticRegression(solver='liblinear', C=100) There are a number of input parameters that can be specified for the preceding function, but a couple of important ones are solver and C. The solverparameter specifies the type of solver that the algorithm will use to solve the system of equations. The C parameter controls the regularization strength. A lower value indicates higher regularization strength. Let's train the classifier: classifier.fit(X, y) Let's draw datapoints and boundaries: plot_classifier(classifier, X, y) We need to define this function: def plot_classifier(classifier, X, y):     # define ranges to plot the figure     x_min, x_max = min(X[:, 0]) - 1.0, max(X[:, 0]) + 1.0     y_min, y_max = min(X[:, 1]) - 1.0, max(X[:, 1]) + 1.0 The preceding values indicate the range of values that we want to use in our figure. These values usually range from the minimum value to the maximum value present in our data. We add some buffers, such as 1.0 in the preceding lines, for clarity. In order to plot the boundaries, we need to evaluate the function across a grid of points and plot it. Let's go ahead and define the grid: # denotes the step size that will be used in the mesh grid     step_size = 0.01       # define the mesh grid     x_values, y_values = np.meshgrid(np.arange(x_min, x_max, step_size), np.arange(y_min, y_max, step_size)) The x_values and y_valuesvariables contain the grid of points where the function will be evaluated. Let's compute the output of the classifier for all these points: # compute the classifier output     mesh_output = classifier.predict(np.c_[x_values.ravel(), y_values.ravel()])       # reshape the array     mesh_output = mesh_output.reshape(x_values.shape) Let's plot the boundaries using colored regions: # Plot the output using a colored plot     plt.figure()       # choose a color scheme     plt.pcolormesh(x_values, y_values, mesh_output, cmap=plt.cm.Set1) This is basically a 3D plotter that takes the 2D points and the associated values to draw different regions using a color scheme. You can find all the color scheme options athttp://matplotlib.org/examples/color/colormaps_reference.html. Let's overlay the training points on the plot: plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='black', linewidth=2, cmap=plt.cm.Paired)       # specify the boundaries of the figure     plt.xlim(x_values.min(), x_values.max())     plt.ylim(y_values.min(), y_values.max())       # specify the ticks on the X and Y axes     plt.xticks((np.arange(int(min(X[:, 0])-1), int(max(X[:, 0])+1), 1.0)))     plt.yticks((np.arange(int(min(X[:, 1])-1), int(max(X[:, 1])+1), 1.0)))       plt.show() Here, plt.scatter plots the points on the 2D graph. TheX[:, 0] specifies that we should take all the values along axis 0 (X-axis in our case), and X[:, 1] specifies axis 1 (Y-axis). The c=y parameter indicates the color sequence. We use the target labels to map to colors using cmap. We basically want different colors based on the target labels; hence, we use y as the mapping. 
The limits of the display figure are set using plt.xlim and plt.ylim. In order to mark the axes with values, we need to use plt.xticks and plt.yticks. These functions mark the axes with values so that it's easier for us to see where the points are located. In the preceding code, we want the ticks to lie between the minimum and maximum values with a buffer of 1 unit. We also want these ticks to be integers. So, we use theint() function to round off the values. If you run this code, you should see the following output: Let's see how the Cparameter affects our model. The C parameter indicates the penalty for misclassification. If we set this to 1.0, we will get the following figure: If we set C to 10000, we get the following figure: As we increase C, there is a higher penalty for misclassification. Hence, the boundaries get more optimal. Summary We successfully employed supervised learning to build a simple classifier. We subsequently went on to construct a logistic-regression classifier and saw different results of tweaking C—the regularization strength parameter. Resources for Article:   Further resources on this subject: Python Scripting Essentials [article] Web scraping with Python (Part 2) [article] Web Server Development [article]

Webhooks in Slack

Packt
01 Jun 2016
11 min read
In this article by Paul Asjes, the author of the book, Building Slack Bots, we'll have a look at webhooks in Slack. (For more resources related to this topic, see here.) Slack is a great way of communicating at your work environment—it's easy to use, intuitive, and highly extensible. Did you know that you can make Slack do even more for you and your team by developing your own bots? This article will teach you how to implement incoming and outgoing webhooks for Slack, supercharging your Slack team into even greater levels of productivity. The programming language we'll use here is JavaScript; however, webhooks can be programmed with any language capable of HTTP requests. Webhooks First let's talk basics: a webhook is a way of altering or augmenting a web application through HTTP methods. Webhooks allow us to post messages to and from Slack using regular HTTP requests with a JSON payloads. What makes a webhook a bot is its ability to post messages to Slack as if it were a bot user. These webhooks can be divided into incoming and outgoing webhooks, each with their own purposes and uses. Incoming webhooks An example of an incoming webhook is a service that relays information from an external source to a Slack channel without being explicitly requested, such as GitHub Slack integration: The GitHub integration posts messages about repositories we are interested in In the preceding screenshot, we see how a message was sent to Slack after a new branch was made on a repository this team was watching. This data wasn't explicitly requested by a team member but automatically sent to the channel as a result of the incoming webhook. Other popular examples include Jenkins integration, where infrastructure changes can be monitored in Slack (for example, if a server watched by Jenkins goes down, a warning message can be posted immediately to a relevant Slack channel). Let's start with setting up an incoming webhook that sends a simple "Hello world" message: First, navigate to the Custom Integration Slack team page, as shown in the following screenshot (https://my.slack.com/apps/build/custom-integration): The various flavors of custom integrations Select Incoming WebHooks from the list and then select which channel you'd like your webhook app to post messages to: Webhook apps will post to a channel of your choosing Once you've clicked on the Add Incoming WebHooks integration button, you will be presented with this options page, which allows you to customize your integration a little further: Names, descriptions, and icons can be set from this menu Set a customized icon for your integration (for this example, the wave emoji was used) and copy down the webhook URL, which has the following format:https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX This generated URL is unique to your team, meaning that any JSON payloads sent via this URL will only appear in your team's Slack channels. Now, let's throw together a quick test of our incoming webhook in Node. Start a new Node project (remember: you can use npm init to create your package.json file) and install the superagent AJAX library by running the following command in your terminal: npm install superagent –save Create a file named index.js and paste the following JavaScript code within it: const WEBHOOK_URL = [YOUR_WEBHOOK_URL]; const request = require('superagent'); request .post(WEBHOOK_URL) .send({ text: 'Hello! I am an incoming Webhook bot!' 
}) .end((err, res) => { console.log(res); }); Remember to replace [YOUR_WEBHOOK_URL] with your newly generated URL, and then run the program by executing the following command: nodemon index.js Two things should happen now: firstly, a long response should be logged in your terminal, and secondly, you should see a message like the following in the Slack client: The incoming webhook equivalent of "hello world" The res object we logged in our terminal is the response from the AJAX request. Taking the form of a large JavaScript object, it displays information about the HTTP POST request we made to our webhook URL. Looking at the message received in the Slack client, notice how the name and icon are the same ones we set in our integration setup on the team admin site. Remember that the default icon, name, and channel are used if none are provided, so let's see what happens when we change that. Replace your request AJAX call in index.js with the following: request .post(WEBHOOK_URL) .send({ username: "Incoming bot", channel: "#general", icon_emoji: ":+1:", text: 'Hello! I am different from the previous bot!' }) .end((err, res) => { console.log(res); }); Save the file, and nodemon will automatically restart the program. Switch over to the Slack client and you should see a message like the following pop up in your #general channel: New name, icon, and message In place of icon_emoji, you could also use icon_url to link to a specific image of your choosing. If you wish your message to be sent only to one user, you can supply a username as the value for the channel property: channel: "@paul" This will cause the message to be sent from within the Slackbot direct message. The message's icon and username will match either what you configured in the setup or set in the body of the POST request. Finally, let's look at sending links in our integration. Replace the text property with the following and save index.js: text: 'Hello! Here is a fun link: <http://www.github.com|Github is great!>' Slack will automatically parse any links it finds, whether it's in the http://www.example.com or www.example.com formats. By enclosing the URL in angled brackets and using the | character, we can specify what we would like the URL to be shown as: Formatted links are easier to read than long URLs For more information on message formatting, visit https://api.slack.com/docs/formatting. Note that as this is a custom webhook integration, we can change the name, icon, and channel of the integration. If we were to package the integration as a Slack app (an app installable by other teams), then it is not possible to override the default channel, username, and icon set. Incoming webhooks are triggered by external sources; an example would be when a new user signs up to your service or a product is sold. The goal of the incoming webhook is to provide information to your team that is easy to reach and comprehend. The opposite of this would be if you wanted users to get data out of Slack, which can be done via the medium of outgoing webhooks. Outgoing webhooks Outgoing webhooks differ from the incoming variety in that they send data out of Slack and to a service of your choosing, which in turn can respond with a message to the Slack channel. To set up an outgoing webhook, visit the custom integration page of your Slack team's admin page again—https://my.slack.com/apps/build/custom-integration—and this time, select the Outgoing WebHooks option. On the next screen, be sure to select a channel, name, and icon. 
Notice how there is a target URL field to be filled in; we will fill this out shortly. When an outgoing webhook is triggered in Slack, an HTTP POST request is made to the URL (or URLs, as you can specify multiple ones) you provide. So first, we need to build a server that can accept our webhook. In index.js, paste the following code: 'use strict'; const http = require('http'); // create a simple server with node's built in http module http.createServer((req, res) => { res.writeHead(200, {'Content-Type': 'text/plain'}); // get the data embedded in the POST request req.on('data', (chunk) => { // chunk is a buffer, so first convert it to // a string and split it to make it more legible as an array console.log('Body:', chunk.toString().split('&')); }); // create a response let response = JSON.stringify({ text: 'Outgoing webhook received!' }); // send the response to Slack as a message res.end(response); }).listen(8080, '0.0.0.0'); console.log('Server running at http://0.0.0.0:8080/'); Notice how we require the http module despite not installing it with NPM. This is because the http module is a core Node dependency and is automatically included with your installation of Node. In this block of code, we start a simple server on port 8080 and listen for incoming requests. In this example, we set our server to run at 0.0.0.0 rather than localhost. This is important as Slack is sending a request to our server, so it needs to be accessible from the Internet. Setting the IP of your server to 0.0.0.0 tells Node to use your computer's network-assigned IP address. Therefore, if you set the IP of your server to 0.0.0.0, Slack can reach your server by hitting your IP on port 8080 (for example, http://123.456.78.90:8080). If you are having trouble with Slack reaching your server, it is most likely because you are behind a router or firewall. To circumvent this issue, you can use a service such as ngrok (https://ngrok.com/). Alternatively, look at port forwarding settings for your router or firewall. Let's update our outgoing webhook settings accordingly: The outgoing webhook settings, with a destination URL Save your settings and run your Node app; test whether the outgoing webhook works by typing a message into the channel you specified in the webhook's settings. You should then see something like this in Slack: We built a spam bot Well, the good news is that our server is receiving requests and returning a message to send to Slack each time. The issue here is that we skipped over the Trigger Word(s) field in the webhook settings page. Without a trigger word, any message sent to the specified channel will trigger the outgoing webhook. This causes our webhook to be triggered by a message sent by the outgoing webhook in the first place, creating an infinite loop. To fix this, we could do one of two things: Refrain from returning a message to the channel when listening to all the channel's messages. Specify one or more trigger words to ensure we don't spam the channel. Returning a message is optional yet encouraged to ensure a better user experience. Even a confirmation message such as Message received! is better than no message as it confirms to the user that their message was received and is being processed. Let's therefore presume we prefer the second option, and add a trigger word: Trigger words keep our webhooks organized Let's try that again, this time sending a message with the trigger word at the beginning of the message. 
Restart your Node app and send a new message: Our outgoing webhook app now functions a lot like our bots from earlier Great, now switch over to your terminal and see what that message logged: Body: [ 'token=KJcfN8xakBegb5RReelRKJng', 'team_id=T000001', 'team_domain=buildingbots', 'service_id=34210109492', 'channel_id=C0J4E5SG6', 'channel_name=bot-test', 'timestamp=1460684994.000598', 'user_id=U0HKKH1TR', 'user_name=paul', 'text=webhook+hi+bot%21', 'trigger_word=webhook' ] This array contains the body of the HTTP POST request sent by Slack; in it, we have some useful data, such as the user's name, the message sent, and the team ID. We can use this data to customize the response or to perform some validation to make sure the user is authorized to use this webhook. In our response, we simply sent back a Message received string; however, like with incoming webhooks, we can set our own username and icon. The channel cannot be different from the channel specified in the webhook's settings, however. The same restrictions apply when the webhook is not a custom integration. This means that if the webhook was installed as a Slack app for another team, it can only post messages as the username and icon specified in the setup screen. An important thing to note is that webhooks, either incoming or outgoing, can only be set up in public channels. This is predominantly to discourage abuse and uphold privacy, as we've seen that it's simple to set up a webhook that can record all the activity on a channel. Summary In this article, you learned what webhooks are and how you can use them to get data in and out of Slack. You learned how to send messages as a bot user and how to interact with your users in the native Slack client. Resources for Article: Further resources on this subject: Keystone – OpenStack Identity Service[article] A Sample LEMP Stack[article] Implementing Stacks using JavaScript[article]
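As a closing aside to the outgoing webhook example above, the same kind of receiver can be sketched in Python using only the standard library, for readers who are not working in Node. This is an illustration rather than a drop-in replacement: it parses the form-encoded fields shown in the logged body, checks the token, and replies with a JSON message. The token value shown is simply the example one from the log output above, so substitute your own.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

EXPECTED_TOKEN = 'KJcfN8xakBegb5RReelRKJng'  # example token from the log above; use your own

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the form-encoded body sent by Slack
        length = int(self.headers.get('Content-Length', 0))
        fields = parse_qs(self.rfile.read(length).decode('utf-8'))
        token = fields.get('token', [''])[0]
        user = fields.get('user_name', [''])[0]
        text = fields.get('text', [''])[0]

        # Reject requests that do not carry the expected token
        if token != EXPECTED_TOKEN:
            self.send_response(403)
            self.end_headers()
            return

        # Reply with a JSON payload that Slack will post back to the channel
        body = json.dumps({'text': 'Hello {}, you said: {}'.format(user, text)}).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('0.0.0.0', 8080), WebhookHandler).serve_forever()

As with the Node version, binding to 0.0.0.0 on port 8080 keeps the server reachable from Slack, and combining the token check with a trigger word avoids the infinite-loop problem described earlier.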


Holistic View on Spark

Packt
31 May 2016
19 min read
In this article by Alex Liu, author of the book Apache Spark Machine Learning Blueprints, we look at a new stage of utilizing Apache Spark-based systems to turn data into insights. (For more resources related to this topic, see here.)

According to research done by Gartner and others, many organizations have lost a huge amount of value simply due to the lack of a holistic view of their business. In this article, we will review the machine learning methods and the processes of obtaining a holistic view of business. Then, we will discuss how Apache Spark fits in to make the related computing easy and fast, and at the same time, with one real-life example, illustrate this process of developing holistic views from data using Apache Spark computing, step by step, as follows:
Spark for a holistic view
Methods for a holistic view
Feature preparation
Model estimation
Model evaluation
Results explanation
Deployment

Spark for a holistic view
Spark is very suitable for machine-learning projects such as ours to obtain a holistic view of business, as it enables us to process huge amounts of data fast and to code complicated computation easily. In this section, we will first describe a real business case and then describe how to prepare the Spark computing for our project.

The use case
The company IFS sells and distributes thousands of IT products and has a lot of data on marketing, training, team management, promotion, and products. The company wants to understand how various kinds of actions, such as those in marketing and training, affect sales teams' success. In other words, IFS is interested in finding out how much impact marketing, training, or promotions have generated separately.

In the past, IFS has done a lot of analytical work, but all of it was completed by individual departments on siloed data sets. That is, they have analytical results about how marketing affects sales from using marketing data alone, and how training affects sales from analyzing training data alone. When the decision makers collected all the results together and prepared to make use of them, they found that some of the results were contradicting each other. For example, when they added all the effects together, the total impact was beyond what they had intuitively imagined.

This is a typical problem that every organization is facing. A siloed approach with siloed data will produce an incomplete view, an often biased view, or even conflicting views. To solve this problem, analytical teams need to take a holistic view of all the company data, gather all the data into one place, and then utilize new machine learning approaches to gain a holistic view of business. To do so, companies also need to care for the following:
The completeness of causes
Advanced analytics to account for the complexity of relationships
Computing the complexity related to subgroups and a big number of products or services

For this example, we have eight datasets that include one dataset for marketing with 48 features, one dataset for training with 56 features, and one dataset for team administration with 73 features, with the following table as a complete summary:

Category     Number of Features
Team         73
Marketing    48
Training     56
Staffing     103
Product      77
Promotion    43
Total        400

In this company, researchers understood that pooling all the data sets together and building a complete model was the solution, but they were not able to achieve it for several reasons.
Besides organizational issues inside the corporation, tech capability to store all the data, to process all the data quickly with the right methods, and to present all the results in the right ways with reasonable speed were other challenges. At the same time, the company has more than 100 products to offer for which data was pooled together to study impacts of company interventions. That is, calculated impacts are average impacts, but variations among products are too large to ignore. If we need to assess impacts for each product, parallel computing is preferred and needs to be implemented at good speed. Without utilizing a good computing platform such as Apache Spark meeting the requirements that were just described is a big challenge for this company. In the sections that follow, we will use modern machine learning on top of Apache Spark to attack this business use case and help the company to gain a holistic view of their business. In order to help readers learn machine learning on Spark effectively, discussions in the following sections are all based on work about this real business use case that was just described. But, we left some details out to protect the company’s privacy and also to keep everything brief. Distributed computing For our project, parallel computing is needed for which we should set up clusters and worker notes. Then, we can use the driver program and cluster manager to manage the computing that has to be done in each worker node. As an example, let's assume that we choose to work within Databricks’ environment: The users can go to the main menu, as shown in the preceding screenshot, click Clusters. A Window will open for users to name the cluster, select a version of Spark, and then specify number of workers. Once the clusters are created, we can go to the main menu, click the down arrow on the right-hand side of Tables. We then choose Create Tables to import our datasets that were cleaned and prepared. For the data source, the options include S3, DBFS, JDBC, and File (for local fields). Our data has been separated into two subsets, one to train and one to test each product, as we need to train a few models per product. In Apache Spark, we need to direct workers to complete computation on each note. We will use scheduler to get Notebook computation completed on Databricks, and collect the results back, which will be discussed in the Model Estimation section. Fast and easy computing One of the most important advantages of utilizing Apache Spark is to make coding easy for which several approaches are available. Here for this project, we will focus our effort on the notebook approach, and specifically, we will use the R notebooks to develop and organize codes. At the same time, with an effort to illustrate the Spark technology more thoroughly, we will also use MLlib directly to code some of our needed algorithms as MLlib has been seamlessly integrated with Spark. In the Databricks’ environment, setting up notebooks will take the following steps: As shown in the preceding screenshot, users can go to the Databricks main menu, click the down arrow on the right-hand side of Workspace, and choose Create -> Notebook to create a new notebook. A table will pop up for users to name the notebook and also select a language (R, Python, Scala, or SQL). In order to make our work repeatable and also easy to understand, we will adopt a workflow approach that is consistent with the RM4Es framework. 
We will also adopt Spark’s ML Pipeline tools to represent our workflows whenever possible. Specifically, for the training data set, we need to estimate models, evaluate models, then maybe to re-estimate the models again before we can finalize our models. So, we need to use Spark’s Transformer, Estimator, and Evaluator to organize an ML pipeline for this project. In practice, we can also organize these workflows within the R notebook environment. For more information about pipeline programming, please go to http://spark.apache.org/docs/latest/ml-guide.html#example-pipeline.  & http://spark.apache.org/docs/latest/ml-guide.html Once our computing platform is set up and our framework is cleared, everything becomes clear too. In the following sections, we will move forward step by step. We will use our RM4Es framework and related processes to identity equations or methods and then prepare features first. The second step is to complete model estimations, the third is to evaluate models, and the fourth is to explain our results. Finally, we will deploy the models. Methods for a holistic view In this section, we need to select our analytical methods or models (equations), which is to complete a task of mapping our business use case to machine learning methods. For our use case of assessing impacts of various factors on sales team success, there are many suitable models for us to use. As an exercise, we will select: regression models, structural equation models, and decision trees, mainly for their easiness to interpret as well as their implementation on Spark. Once we finalize our decision for analytical methods or models, we will need to prepare the dependent variable and also prepare to code, which we will discuss one by one. Regression modeling To get ready for regression modeling on Spark, there are three issues that you have to take care of, as follows: Linear regression or logistic regression: Regression is the most mature and also most widely-used model to represent the impacts of various factors on one dependent variable. Whether to use linear regression or logistic regression depends on whether the relationship is linear or not. We are not sure about this, so we will use adopt both and then compare their results to decide on which one to deploy. Preparing the dependent variable: In order to use logistic regression, we need to recode the target variable or dependent variable (the sales team success variable now with a rating from 0 to 100) to be 0 versus 1 by separating it with the medium value. Preparing coding: In MLlib, we can use the following codes for regression modeling as we will use Spark MLlib’s Linear Regression with Stochastic Gradient Descent (LinearRegressionWithSGD): val numIterations = 90 val model = LinearRegressionWithSGD.train(TrainingData, numIterations) For logistic regression, we use the following codes: val model = new LogisticRegressionWithSGD() .setNumClasses(2) .run(training) For more about using MLlib for regression modeling, please go to: http://spark.apache.org/docs/latest/mllib-linear-methods.html#linear-least-squares-lasso-and-ridge-regression In R, we can use the lm function for linear regression, and the glm function for logistic regression with family=binomial(). 
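For readers following along in a Python notebook rather than Scala or R, the equivalent calls look roughly like the following sketch. The file path, the column layout (label first, then features), and the RDD name trainingData are assumptions made purely for illustration, and sc is the SparkContext that a notebook environment provides.

from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD
from pyspark.mllib.classification import LogisticRegressionWithSGD

# Each CSV row becomes a LabeledPoint: the label (s1 or s2) followed by the features
def parse_line(line):
    values = [float(x) for x in line.split(',')]
    return LabeledPoint(values[0], values[1:])

trainingData = sc.textFile('/mnt/data/train.csv').map(parse_line)  # illustrative path

# Linear regression on the 0-100 rating (s1)
linear_model = LinearRegressionWithSGD.train(trainingData, iterations=90)

# Logistic regression on the binary success flag (s2)
logistic_model = LogisticRegressionWithSGD.train(trainingData, iterations=100)

Because both algorithms expect an RDD of LabeledPoint records, the same parsed dataset can be reused; only the label column changes between the linear and the logistic runs.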
SEM approach
To get ready for Structural Equation Modeling (SEM) on Spark, there are also three issues that we need to take care of, as follows:
SEM introduction and specification: SEM may be considered an extension of regression modeling, as it consists of several linear equations that are similar to regression equations. But this method estimates all the equations at the same time regarding their internal relations, so it is less biased than regression modeling. SEM consists of both structural modeling and latent variable modeling, but for us, we will only use structural modeling.
Preparing the dependent variable: We can just use the sales team success scale (rating of 0 to 100) as our target variable here.
Preparing coding: We will adopt the R notebook within the Databricks environment, for which we should use the R package sem. There are also other SEM packages, such as lavaan, that are available to use, but for this project, we will use the sem package for its easiness to learn. To load the sem package into the R notebook, we will use install.packages("sem", repos="http://R-Forge.R-project.org"). Then, we need to run the R code library(sem). After that, we need to use the specifyModel() function to write some code to specify models in our R notebook, for which the following code is needed:

mod.no1 <- specifyModel()
s1 <- x1, gam31
s1 <- x2, gam32

Decision trees
To get ready for decision tree modeling on Spark, there are also three issues that we need to take care of, as follows:
Decision tree selection: A decision tree models the classification of cases, which for our use case means classifying elements into success or not success. It is also one of the most mature and widely-used methods. For this exercise, we will only use the simple linear decision tree, and we will not venture into any more complicated trees, such as random forest.
Preparing the dependent variable: To use the decision tree model here, we will separate the sales team rating into two categories of SUCCESS or NOT, as we did for logistic regression.
Preparing coding: For MLlib, we can use the following code:

val numClasses = 2
val categoricalFeaturesInfo = Map[Int, Int]()
val impurity = "gini"
val maxDepth = 6
val maxBins = 32
val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)

For more information about using MLlib for decision trees, please go to http://spark.apache.org/docs/latest/mllib-decision-tree.html.
As for the R notebook on Spark, we need to use the R package rpart, and then use the rpart functions for all the calculation. For rpart, we need to specify the classifier and also all the features that have to be used.

Model estimation
Once the feature sets are finalized, what follows is to estimate the parameters of the selected models. We can use either MLlib or R here to do this, and we need to arrange distributed computing. To simplify this, we can utilize Databricks' Job feature. Specifically, within the Databricks environment, we can go to Jobs and then create jobs, as shown in the following screenshot.
Then, users can select which notebooks to run, specify clusters, and then schedule jobs. Once scheduled, users can also monitor the running notebooks and then collect the results back.
In Section II, we prepared code for each of the three models that were selected. Now, we need to modify it with the final set of features selected in the last section to create our final notebooks.
In other words, we have one dependent variable prepared and 17 features selected from our PCA and feature selection work. So, we need to insert all of them into the code that was developed in Section II to finalize our notebooks. Then, we will use the Spark Job feature to get these notebooks implemented in a distributed way.

MLlib implementation
First, we need to prepare our data with the s1 dependent variable for linear regression, and the s2 dependent variable for logistic regression or the decision tree. Then, we need to add the selected 17 features into them to form datasets that are ready for our use.
For linear regression, we will use the following code:

val numIterations = 90
val model = LinearRegressionWithSGD.train(TrainingData, numIterations)

For logistic regression, we will use the following code:

val model = new LogisticRegressionWithSGD()
  .setNumClasses(2)

For the decision tree, we will use the following code:

val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)

R notebooks implementation
For better comparison, it is a good idea to write linear regression and SEM into the same R notebook and also write logistic regression and the decision tree into the same R notebook. Then, the main task left here is to schedule the estimation for each worker, and then collect the results, using the previously mentioned Job feature in the Databricks environment as follows.
The code for linear regression and SEM is as follows:

lm.est1 <- lm(s1 ~ T1+T2+M1+ M2+ M3+ Tr1+ Tr2+ Tr3+ S1+ S2+ P1+ P2+ P3+ P4+ Pr1+ Pr2+ Pr3)
mod.no1 <- specifyModel()
s1 <- x1, gam31
s1 <- x2, gam32

The code for logistic regression and the decision tree is as follows:

logit.est1 <- glm(s2~ T1+T2+M1+ M2+ M3+ Tr1+ Tr2+ Tr3+ S1+ S2+ P1+ P2+ P3+ P4+ Pr1+ Pr2+ Pr3,family=binomial())
dt.est1 <- rpart(s2~ T1+T2+M1+ M2+ M3+ Tr1+ Tr2+ Tr3+ S1+ S2+ P1+ P2+ P3+ P4+ Pr1+ Pr2+ Pr3, method="class")

After we get all the models estimated per product, for simplicity, we will focus on one product to complete our discussion on model evaluation and model deployment.

Model evaluation
In the previous section, we completed our model estimation task. Now, it is time for us to evaluate the estimated models to see whether they meet our model quality criteria, so that we can either move to our next stage of results explanation or go back to some previous stages to refine our models. To perform our model evaluation, in this section, we will focus our effort on utilizing RMSE (root-mean-square error) and ROC curves (receiver operating characteristic) to assess whether our models are a good fit. To calculate RMSEs and ROC curves, we need to use our test data rather than the training data that was used to estimate our models.

Quick evaluations
Many packages already include algorithms for users to assess models quickly. For example, both MLlib and R have algorithms to return a confusion matrix for logistic regression models, and they even get false positive numbers calculated. Specifically, MLlib has the confusionMatrix and numFalseNegatives() functions for us to use, and even some algorithms to calculate MSE quickly, as follows:

MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).mean()
print("Mean Squared Error = " + str(MSE))

Also, R has a confusion.matrix function for us to use. In R, there are even many tools to produce quick graphical plots that can be used to gain a quick evaluation of models.
For example, we can perform plots of predicted versus actual values, and also of residuals on predicted values. Intuitively, the method of comparing predicted versus actual values is the easiest to understand and gives us a quick model evaluation. The following table is a calculated confusion matrix for one of the company's products, which shows a reasonable fit of our model:

                   Predicted as Success    Predicted as NOT
Actual Success     83%                     17%
Actual NOT         9%                      91%

RMSE
In MLlib, we can use the following code to calculate RMSE:

val valuesAndPreds = test.map { point =>
  val prediction = new_model.predict(point.features)
  val r = (point.label, prediction)
  r
}
val residuals = valuesAndPreds.map {case (v, p) => math.pow((v - p), 2)}
val MSE = residuals.mean();
val RMSE = math.pow(MSE, 0.5)

Besides the above, MLlib also has some functions in the RegressionMetrics and RankingMetrics classes for us to use for RMSE calculation.
In R, we can compute RMSE as follows:

RMSE <- sqrt(mean((y-y_pred)^2))

Before this, we need to obtain the predicted values with the following commands:

> # build a model
> RMSElinreg <- lm(s1 ~ . ,data= data1)
> #score the model
> score <- predict(RMSElinreg, data2)

After we have obtained RMSE values for all the estimated models, we will compare them to evaluate the linear regression model versus the logistic regression model versus the decision tree model. For our case, the linear regression models turned out to be almost the best. Then, we also compare RMSE values across products, and send some product models back for refinement.
For another example of obtaining RMSE, please go to http://www.cakesolutions.net/teamblogs/spark-mllib-linear-regression-example-and-vocabulary.

ROC curves
As an example, we calculate ROC curves to assess our logistic models. In MLlib, we can use the metrics.areaUnderROC() function to calculate ROC once we apply our estimated model to our test data and get labels for the testing cases.
For more on using MLlib to obtain ROC, please go to http://web.cs.ucla.edu/~mtgarip/linear.html.
In R, using the package pROC, we can perform the following to calculate and plot ROC curves:

mylogit <- glm(s2 ~ ., family = "binomial")
summary(mylogit)
prob=predict(mylogit,type=c("response"))
testdata1$prob=prob
library(pROC)
g <- roc(s2 ~ prob, data = testdata1)
plot(g)

As discussed, once ROC curves are calculated, we can use them to compare our logistic models against the decision tree models, or compare models across products. In our case, the logistic models perform better than the decision tree models.
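To tie the two measures together, here is a hedged PySpark sketch of the RMSE and area-under-ROC calculations for readers working in Python. It assumes that test is an RDD of LabeledPoint records and that linear_model and logistic_model are the Python models sketched earlier; all of these names are illustrative.

import math
from pyspark.mllib.evaluation import BinaryClassificationMetrics

# RMSE of the linear regression model on the held-out test data
valuesAndPreds = test.map(lambda p: (p.label, linear_model.predict(p.features)))
RMSE = math.sqrt(valuesAndPreds.map(lambda vp: (vp[0] - vp[1]) ** 2).mean())

# Area under the ROC curve for the logistic regression model
logistic_model.clearThreshold()  # make predict() return raw scores instead of 0/1 labels
scoreAndLabels = test.map(lambda p: (float(logistic_model.predict(p.features)), p.label))
metrics = BinaryClassificationMetrics(scoreAndLabels)

print("RMSE = %s, area under ROC = %s" % (RMSE, metrics.areaUnderROC))

Running the same cell against each product's test set gives the per-product comparison described above.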
Results explanation
Once we pass our model evaluation and decide to select the estimated model as our final model, we need to interpret the results to the company executives and also their technicians. Next, we discuss some commonly used ways of interpreting our results, one using tables and another using graphs, with our focus on impact assessments. Some users may prefer to interpret our results in terms of ROI, for which cost and benefit data is needed. Once we have the cost and benefit data, our results here can be easily expanded to cover the ROI issues. Also, some optimization may need to be applied for real decision making.

Impact assessments
As discussed in Section 1, the main purpose of this project is to gain a holistic view of the sales team success. For example, the company wishes to understand the impact of marketing on sales success in comparison to training and other factors. As we have our linear regression model estimated, one easy way of comparing impacts is to summarize the variance explained by each feature group, as shown by the following table.

Tables for Impact Assessment:

Feature Group    %
Team             8.5
Marketing        7.6
Training         5.7
Staffing         12.9
Product          8.9
Promotion        14.6
Total            58.2

The following figure is another example of using graphs to display the results that were discussed.

Summary
In this article, we went through a step-by-step process from data to a holistic view of businesses. From this, we processed a large amount of data on Spark and then built a model to produce a holistic view of the sales team success for the company IFS. Specifically, we first selected models per business needs after we prepared Spark computing and loaded in preprocessed data. Secondly, we estimated model coefficients. Third, we evaluated the estimated models. Then, we finally interpreted the analytical results.
This process is similar to the process of working with small data. But in dealing with big data, we will need parallel computing, for which Apache Spark is utilized. During this process, Apache Spark makes things easy and fast.
After this article, readers will have gained a full understanding about how Apache Spark can be utilized to make our work easier and faster in obtaining a holistic view of businesses. At the same time, readers should become familiar with the RM4Es modeling processes of processing large amounts of data and developing predictive models, and they should especially become capable of producing their own holistic view of businesses.
Resources for Article:
Further resources on this subject:
Getting Started with Apache Hadoop and Apache Spark [article]
Getting Started with Apache Spark DataFrames [article]
Sabermetrics with Apache Spark [article]

Splunk's Input Methods and Data Feeds

Packt
30 May 2016
13 min read
This article being crafted by Ashish Kumar Yadav has been picked from Advanced Splunk book. This book helps you to get in touch with a great data science tool named Splunk. The big data world is an ever expanding forte and it is easy to get lost in the enormousness of machine data available at your bay. The Advanced Splunk book will definitely provide you with the necessary resources and the trail to get you at the other end of the machine data. While the book emphasizes on Splunk, it also discusses its close association with Python language and tools like R and Tableau that are needed for better analytics and visualization purpose. (For more resources related to this topic, see here.) Splunk supports numerous ways to ingest data on its server. Any data generated from a human-readable machine from various sources can be uploaded using data input methods such as files, directories, TCP/UDP scripts can be indexed on the Splunk Enterprise server and analytics and insights can be derived from them. Data sources Uploading data on Splunk is one of the most important parts of analytics and visualizations of data. If data is not properly parsed, timestamped, or broken into events, then it can be difficult to analyze and get proper insight on the data. Splunk can be used to analyze and visualize data ranging from various domains, such as IT security, networking, mobile devices, telecom infrastructure, media and entertainment devices, storage devices, and many more. The machine generated data from different sources can be of different formats and types, and hence, it is very important to parse data in the best format to get the required insight from it. Splunk supports machine-generated data of various types and structures, and the following screenshot shows the common types of data that comes with an inbuilt support in Splunk Enterprise. The most important point of these sources is that if the data source is from the following list, then the preconfigured settings and configurations already stored in Splunk Enterprise are applied. This helps in getting the data parsed in the best and most suitable formats of events and timestamps to enable faster searching, analytics, and better visualization. The following screenshot enlists common data sources supported by Splunk Enterprise: Structured data Machine-generated data is generally structured, and in some cases, it can be semistructured. Some of the types of structured data are EXtensible Markup Language (XML), JavaScript Object Notation (JSON), comma-separated values (CSV), tab-separated values (TSV), and pipe-separated values (PSV). Any format of structured data can be uploaded on Splunk. However, if the data is from any of the preceding formats, then predefined settings and configuration can be applied directly by choosing the respective source type while uploading the data or by configuring it in the inputs.conf file. The preconfigured settings for any of the preceding structured data is very generic. Many times, it happens that the machine logs are customized structured logs; in that case, additional settings will be required to parse the data. For example, there are various types of XML. We have listed two types here. In the first type, there is the <note> tag at the start and </note> at the end, and in between, there are parameters are their values. In the second type, there are two levels of hierarchies. XML has the <library> tag along with the <book> tag. Between the <book> and </book> tags, we have parameters and their values. 
The first type is as follows: <note> <to>Jack</to> <from>Micheal</from> <heading>Test XML Format</heading> <body>This is one of the format of XML!</body> </note> The second type is shown in the following code snippet: <Library> <book category="Technical"> <title lang="en">Splunk Basic</title> <author>Jack Thomas</author> <year>2007</year> <price>520.00</price> </book> <book category="Story"> <title lang="en">Jungle Book</title> <author>Rudyard Kiplin</author> <year>1984</year> <price>50.50</price> </book> </Library > Similarly, there can be many types of customized XML scripts generated by machines. To parse different types of structured data, Splunk Enterprise comes with inbuilt settings and configuration defined for the source it comes from. Let's say, for example, that the data received from a web server's logs are also structured logs and it can be in either a JSON, CSV, or simple text format. So, depending on the specific sources, Splunk tries to make the job of the user easier by providing the best settings and configuration for many common sources of data. Some of the most common sources of data are data from web servers, databases, operation systems, network security, and various other applications and services. Web and cloud services The most commonly used web servers are Apache and Microsoft IIS. All Linux-based web services are hosted on Apache servers, and all Windows-based web services on IIS. The logs generated from Linux web servers are simple plain text files, whereas the log files of Microsoft IIS can be in a W3C-extended log file format or it can be stored in a database in the ODBC log file format as well. Cloud services such as Amazon AWS, S3, and Microsoft Azure can be directly connected and configured according to the forwarded data on Splunk Enterprise. The Splunk app store has many technology add-ons that can be used to create data inputs to send data from cloud services to Splunk Enterprise. So, when uploading log files from web services, such as Apache, Splunk provides a preconfigured source type that parses data in the best format for it to be available for visualization. Suppose that the user wants to upload apache error logs on the Splunk server, and then the user chooses apache_error from the Web category of Source type, as shown in the following screenshot: On choosing this option, the following set of configuration is applied on the data to be uploaded: The event break is configured to be on the regular expression pattern ^[ The events in the log files will be broken into a single event on occurrence of [ at every start of a line (^) The timestamp is to be identified in the [%A %B %d %T %Y] format, where: %A is the day of week; for example, Monday %B is the month; for example, January %d is the day of the month; for example, 1 %T is the time that has to be in the %H : %M : %S format %Y is the year; for example, 2016 Various other settings such as maxDist that allows the amount of variance of logs can vary from the one specified in the source type and other settings such as category, descriptions, and others. Any new settings required as per our needs can be added using the New Settings option available in the section below Settings. After making the changes, either the settings can be saved as a new source type or the existing source type can be updated with the new settings. IT operations and network security Splunk Enterprise has many applications on the Splunk app store that specifically target IT operations and network security. 
Splunk is a widely accepted tool for intrusion detection, network and information security, fraud and theft detection, and user behaviour analytics and compliance. A Splunk Enterprise application provides inbuilt support for the Cisco Adaptive Security Appliance (ASA) firewall, Cisco SYSLOG, Call Detail Records (CDR) logs, and one of the most popular intrusion detection application, Snort. The Splunk app store has many technology add-ons to get data from various security devices such as firewall, routers, DMZ, and others. The app store also has the Splunk application that shows graphical insights and analytics over the data uploaded from various IT and security devices. Databases The Splunk Enterprise application has inbuilt support for databases such as MySQL, Oracle Syslog, and IBM DB2. Apart from this, there are technology add-ons on the Splunk app store to fetch data from the Oracle database and the MySQL database. These technology add-ons can be used to fetch, parse, and upload data from the respective database to the Splunk Enterprise server. There can be various types of data available from one source; let's take MySQL as an example. There can be error log data, query logging data, MySQL server health and status log data, or MySQL data stored in the form of databases and tables. This concludes that there can be a huge variety of data generated from the same source. Hence, Splunk provides support for all types of data generated from a source. We have inbuilt configuration for MySQL error logs, MySQL slow queries, and MySQL database logs that have been already defined for easier input configuration of data generated from respective sources. Application and operating system data The Splunk input source type has inbuilt configuration available for Linux dmesg, syslog, security logs, and various other logs available from the Linux operating system. Apart from the Linux OS, Splunk also provides configuration settings for data input of logs from Windows and iOS systems. It also provides default settings for Log4j-based logging for Java, PHP, and .NET enterprise applications. Splunk also supports lots of other applications' data such as Ruby on Rails, Catalina, WebSphere, and others. Splunk Enterprise provides predefined configuration for various applications, databases, OSes, and cloud and virtual environments to enrich the respective data with better parsing and breaking into events, thus deriving at better insight from the available data. The applications' source whose settings are not available in Splunk Enterprise can alternatively have apps or add-ons on the app store. Data input methods Splunk Enterprise supports data input through numerous methods. Data can be sent on Splunk via files and directories, TCP, UDP, scripts or using universal forwarders. Files and directories Splunk Enterprise provides an easy interface to the uploaded data via files and directories. Files can be directly uploaded from the Splunk web interface manually or it can be configured to monitor the file for changes in content, and the new data will be uploaded on Splunk whenever it is written in the file. Splunk can also be configured to upload multiple files by either uploading all the files in one shot or the directory can be monitored for any new files, and the data will get indexed on Splunk whenever it arrives in the directory. Any data format from any sources that are in a human-readable format, that is, no propriety tools are needed to read the data, can be uploaded on Splunk. 
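As a concrete illustration of the file and directory inputs just described, a monitored input is usually declared in inputs.conf on the indexer or forwarder. The following is a hedged sketch only; the paths, sourcetypes, and index names are illustrative and should be replaced with values that match your own environment:

[monitor:///var/log/httpd/error_log]
disabled = false
sourcetype = apache_error
index = main

[monitor:///var/log/myapp/*.log]
sourcetype = myapp:logs
index = main

Any new file that appears in a monitored directory is picked up automatically, which matches the behavior described above.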
Splunk Enterprise even supports uploading in a compressed file format such as (.zip and .tar.gz), which has multiple log files in a compressed format. Network sources Splunk supports both TCP and UDP to get data on Splunk from network sources. It can monitor any network port for incoming data and then can index it on Splunk. Generally, in case of data from network sources, it is recommended that you use a Universal forwarder to send data on Splunk, as Universal forwarder buffers the data in case of any issues on the Splunk server to avoid data loss. Windows data Splunk Enterprise provides direct configuration to access data from a Windows system. It supports both local as well as remote collections of various types and sources from a Windows system. Splunk has predefined input methods and settings to parse event log, performance monitoring report, registry information, hosts, networks and print monitoring of a local as well as remote Windows system. So, data from different sources of different formats can be sent to Splunk using various input methods as per the requirement and suitability of the data and source. New data inputs can also be created using Splunk apps or technology add-ons available on the Splunk app store. Adding data to Splunk—new interfaces Splunk Enterprises introduced new interfaces to accept data that is compatible with constrained resources and lightweight devices for Internet of Things. Splunk Enterprise version 6.3 supports HTTP Event Collector and REST and JSON APIs for data collection on Splunk. HTTP Event Collector is a very useful interface that can be used to send data without using any forwarder from your existing application to the Splunk Enterprise server. HTTP APIs are available in .NET, Java, Python, and almost all the programming languages. So, forwarding data from your existing application that is based on a specific programming language becomes a cake walk. Let's take an example, say, you are a developer of an Android application, and you want to know what all features the user uses that are the pain areas or problem-causing screens. You also want to know the usage pattern of your application. So, in the code of your Android application, you can use REST APIs to forward the logging data on the Splunk Enterprise server. The only important point to note here is that the data needs to be sent in a JSON payload envelope. The advantage of using HTTP Event Collector is that without using any third-party tools or any configuration, the data can be sent on Splunk and we can easily derive insights, analytics, and visualizations from it. HTTP Event Collector and configuration HTTP Event Collector can be used when you configure it from the Splunk Web console, and the event data from HTTP can be indexed in Splunk using the REST API. HTTP Event Collector HTTP Event Collector (EC) provides an API with an endpoint that can be used to send log data from applications into Splunk Enterprise. Splunk HTTP Event Collector supports both HTTP and HTTPS for secure connections. The following are the features of HTTP Event Collector, which make's adding data on Splunk Enterprise easier: It is very lightweight is terms of memory and resource usage, and thus can be used in resources constrained to lightweight devices as well. Events can be sent directly from anywhere such as web servers, mobile devices, and IoT without any need of configuration or installation of forwarders. 
It is a token-based JSON API that doesn't require you to save user credentials in the code or in the application settings. The authentication is handled by tokens used in the API. It is easy to configure EC from the Splunk Web console, enable HTTP EC, and define the token. After this, you are ready to accept data on Splunk Enterprise. It supports both HTTP and HTTPS, and hence it is very secure. It supports GZIP compression and batch processing. HTTP EC is highly scalable as it can be used in a distributed environment as well as with a load balancer to crunch and index millions of events per second. Summary In this article, we walked through various data input methods along with various data sources supported by Splunk. We also looked at HTTP Event Collector, which is a new feature added in Splunk 6.3 for data collection via REST to encourage the usage of Splunk for IoT. The data sources and input methods for Splunk are unlike any generic tool and the HTTP Event Collector is the added advantage compare to other data analytics tools. Resources for Article: Further resources on this subject: The Splunk Interface [article] The Splunk Web Framework [article] Introducing Splunk [article]
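To make the JSON payload envelope expected by HTTP Event Collector concrete, here is a hedged Python sketch of sending a single event using the requests library. The host name and token are placeholders, and port 8088 is assumed to be the HEC default; adjust all three to match your own configuration:

import requests

HEC_URL = 'https://splunk.example.com:8088/services/collector'   # placeholder host
HEC_TOKEN = '00000000-0000-0000-0000-000000000000'                # placeholder token

event = {
    'event': {'action': 'user_signup', 'screen': 'onboarding'},   # illustrative payload
    'sourcetype': 'myapp:events',
    'index': 'main'
}

response = requests.post(
    HEC_URL,
    headers={'Authorization': 'Splunk ' + HEC_TOKEN},
    json=event,
    verify=False)   # only for local testing with self-signed certificates
print(response.status_code, response.text)

Because the token travels in the Authorization header, no user credentials need to be stored in the application, which is the main appeal of the collector described above.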


Running Your Spark Job Executors In Docker Containers

Bernardo Gomez
27 May 2016
12 min read
The following post showcases a Dockerized Apache Spark application running in a Mesos cluster. In our example, the Spark Driver as well as the Spark Executors will be running in a Docker image based on Ubuntu with the addition of the SciPy Python packages. If you are already familiar with the reasons for using Docker as well as Apache Mesos, feel free to skip the next section and jump right to the post, but if not, please carry on. Rational Today, it’s pretty common to find engineers and data scientist who need to run big data workloads in a shared infrastructure. In addition, the infrastructure can potentially be used not only for such workloads, but also for other important services required for business operations. A very common way to solve such problems is to virtualize the infrastructure and statically partition it in such a way that each development or business group in the company has its own resources to deploy and run their applications on. Hopefully, the maintainers of such infrastructure and services have a DevOps mentality and have automated, and continuously work on automating, the configuration and software provisioning tasks on such infrastructure. The problem is, as Benjamin Hindman backed by the studies done at the University of California, Berkeley, points out, static partitioning can be highly inefficient on the utilization of such infrastructure. This has prompted the development of Resource Schedulers that abstract CPU, memory, storage, and other computer resources away from machines, either physically or virtually, to enable the execution of applications across the infrastructure to achieve a higher utilization factor, among other things. The concept of sharing infrastructure resources is not new for applications that entail the analysis of large datasets, in most cases, through algorithms that favor parallelization of workloads. Today, the most common frameworks to develop such applications are Hadoop Map Reduce and Apache Spark. In the case of Apache Spark, it can be deployed in clusters managed by Resource Schedulers such as Hadoop YARN or Apache Mesos. Now, since different applications are running inside a shared infrastructure, it’s common to find applications that have different sets of requirements across the software packages and versions of such packages they depend on to function. As an operation engineer, or infrastructure manager, you can force your users to a predefine a set of software libraries, along with their versions, that the infrastructure supports. Hopefully, if you follow that path you also establish a procedure to upgrade such software libraries and add new ones. This tends to require an investment in time and might be frustrating to engineers and data scientist that are constantly installing new packages and libraries to facilitate their work. When you decide to upgrade, you might as well have to refactor some applications that might have been running for a long time but have heavy dependencies on previous versions of the packages that are part of the upgrade. All in all, it’s not simple. Linux Containers, and specially Docker, offer an abstraction such that software can be packaged into lightweight images that can be executed as containers. The containers are executed with some level of isolation, and such isolation is mainly provided by cgroups. Each image can define the type of operating system that it requires along with the software packages. 
This provides a fantastic mechanism to pass the burden of maintaining the software packages and libraries out of infrastructure management and operations to the owners of the applications. With this, the infrastructure and operations teams can run multiple isolated applications that can potentially have conflicting software libraries within the same infrastructure. Apache Spark can leverage this as long as it’s deployed with an Apache Mesos cluster that supports Docker. In the next sections, we will review how we can run Apache Spark applications within Docker containers. Tutorial For this post, we will use a CentOS 7.2 minimal image running on VirtualBox. However, in this tutorial, we will not include the instructions to obtain such a CentOS image, make it available in VirtualBox, or configure its network interfaces. Additionally, we will be using a single node to keep this exercise as simple as possible. We can later explore deploying a similar setup in a set of nodes in the cloud; but for the sake of simplicity and time, our single node will be running the following services: A Mesos master A Mesos slave A Zookeeper instance A Docker daemon Step 1: The Mesos Cluster To install Apache Mesos in your cluster, I suggest you follow the Mesosphere getting started guidelines. Since we are using CentOS 7.2, we first installed the Mesosphere YUM repository as follows: # Add the repository sudo rpm -Uvh http://repos.mesosphere.com/el/7/noarch/RPMS/ mesosphere-el-repo-7-1.noarch.rpm We then install Apache Mesos and the Apache Zookeeper packages. sudo yum -y install mesos mesosphere-zookeeper Once the packages are installed, we need to configure Zookeeper as well as the Mesos master and slave. Zookeeper For Zookeeper, we need to create a Zookeeper Node Identity. We do this by setting the numerical identifying inside the /var/lib/zookeeper/myid file. echo "1" > /var/lib/zookeeper/myid Since by default Zookeeper binds to all interfaces and exposes its services through port 2181, we do not need to change the /etc/zookeeper/conf/zoo.cfg file. Refer to the Mesosphere getting started guidelines if you have a Zookeeper ensemble, more than one node running Zookeeper. After that, we can start the Zookeeper service: sudo service zookeeper restart Mesos master and slave Before we start to describe the Mesos configuration, we must note that the location of the Mesos configuration files that we will talk about now is specific to Mesosphere's Mesos package. If you don't have a strong reason to build your own Mesos packages, I suggest you use the ones that Mesosphere kindly provides. We need to tell the Mesos master and slave about the connection string they can use to reach Zookeeper, including their namespace. By default, Zookeeper will bind to all interfaces; you might want to change this behavior. In our case, we will make sure that the IP address that we want to use to connect to Zookeeper can be resolved within the containers. The nodes public interface IP 192.168.99.100, and to do this, we do the following: echo "zk://192.168.99.100:2181/mesos" > /etc/mesos/zk Now, since in our setup we have several network interfaces associated with the node that will be running the Mesos master, we will pick an interface that will be reachable within the Docker containers that will eventually be running the Spark Driver and Spark Executors. Knowing that the IP address that we want to bind to is 192.168.99.100, we do the following: echo "192.168.99.100" > /etc/mesos-master/ip We do a similar thing for the Mesos slave. 
Again, consider that in our example the Mesos slave is running on the same node as the Mesos master and we will bind it to the same network interface. echo "192.168.99.100" > /etc/mesos-slave/ip echo "192.168.99.100" > /etc/mesos-slave/hostname The IP defines the IP address that the Mesos slave will bind to and the hostname defines the hostname that the slave will use to report its availability, and therefore, it is the value that the Mesos frameworks, in our case Apache Spark, will use to connect to it. Let’s start the services: systemctl start mesos-master systemctl start mesos-slave By default, the Mesos master will bind to port 5050 and the Mesos slave to port 5051. Let’s confirm this, assuming that you have installed the net-utils package: netstat -pleno | grep -E "5050|5051" tcp 0 0 192.168.99.100:5050 0.0.0.0:* LISTEN 0 127336 22205/mesos-master off (0.00/0/0) tcp 0 0 192.168.99.100:5051 0.0.0.0:* LISTEN 0 127453 22242/mesos-slave off (0.00/0/0) Let’s run a test: MASTER=$(mesos-resolve cat /etc/mesos/zk) LIBPROCESSIP=192.168.99.100 mesos-execute --master=$MASTER --name="cluster-test" --command="echo 'Hello World' && sleep 5 && echo 'Good Bye'" Step 2: Installing Docker We followed the Docker documentation on installing Docker in CentOS. I suggest that you do the same. In a nutshell, we executed the following: sudo yum update sudo tee /etc/yum.repos.d/docker.repo <<-'EOF' [dockerrepo] name=Docker Repository baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/ enabled=1 gpgcheck=1 gpgkey=https://yum.dockerproject.org/gpg EOF sudo yum install docker-engine sudo service docker start If the preceding code succeeded, you should be able to do a docker ps as well as a docker search ipython/scipystack successfully. Step 3: Creating a Spark image Let’s create the Dockerfile that will be used by the Spark Driver and Spark Executor. For our example, we will consider that the Docker image should provide the SciPy stack along with additional Python libraries. So, in a nutshell, the Docker image must have the following features: The version of libmesos should be compatible with the version of the Mesos master and slave. For example, /usr/lib/libmesos-0.26.0.so It should have a valid JDK It should have the SciPy stack as well as Python packages that we want It should have a version of Spark, we will choose 1.6.0. The Dockerfile below will provide the requirements that we mention above. Note that installing Mesos through the Mesosphere RPMs will install Open JDK, in this case version 1.7. Dockerfile: # Version 0.1 FROM ipython/scipystack MAINTAINER Bernardo Gomez Palacio "bernardo.gomezpalacio@gmail.com" ENV REFRESHEDAT 2015-03-19 ENV DEBIANFRONTEND noninteractive RUN apt-get update RUN apt-get dist-upgrade -y # Setup RUN sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF RUN export OSDISTRO=$(lsbrelease -is | tr '[:upper:]' '[:lower:]') && export OSCODENAME=$(lsbrelease -cs) && echo "deb http://repos.mesosphere.io/${OSDISTRO} ${OSCODENAME} main" | tee /etc/apt/sources.list.d/mesosphere.list && apt-get -y update RUN apt-get -y install mesos RUN apt-get install -y python libnss3 curl RUN curl http://d3kbcqa49mib13.cloudfront.net/spark-1.6.0-bin-hadoop2.6.tgz | tar -xzC /opt && mv /opt/spark* /opt/spark RUN apt-get clean # Fix pypspark six error. 
RUN pip2 install -U six
RUN pip2 install msgpack-python
RUN pip2 install avro
COPY spark-conf/* /opt/spark/conf/
COPY scripts /scripts
ENV SPARK_HOME /opt/spark
ENTRYPOINT ["/scripts/run.sh"]

Let's explain some very important files that will be available in the Docker image according to the Dockerfile mentioned earlier:

The spark-conf/spark-env.sh file, as mentioned in the Spark docs, is used to set the location of the Mesos libmesos.so:

export MESOS_NATIVE_JAVA_LIBRARY=${MESOS_NATIVE_JAVA_LIBRARY:-/usr/lib/libmesos.so}
export SPARK_LOCAL_IP=${SPARK_LOCAL_IP:-"127.0.0.1"}
export SPARK_PUBLIC_DNS=${SPARK_PUBLIC_DNS:-"127.0.0.1"}

The spark-conf/spark-defaults.conf file serves as the definition of the default configuration for our Spark jobs within the container; its contents are as follows:

spark.master SPARK_MASTER
spark.mesos.mesosExecutor.cores MESOS_EXECUTOR_CORE
spark.mesos.executor.docker.image SPARK_IMAGE
spark.mesos.executor.home /opt/spark
spark.driver.host CURRENT_IP
spark.executor.extraClassPath /opt/spark/custom/lib/*
spark.driver.extraClassPath /opt/spark/custom/lib/*

Note that the use of environment variables such as SPARK_MASTER and SPARK_IMAGE is critical, since they allow us to customize how the Spark application interacts with the Mesos Docker integration. Docker's entry point script, shown below, populates the spark-defaults.conf file and lets us define some basic options that get passed to the Spark command, for example, spark-shell, spark-submit, or pyspark:

#!/bin/bash
SPARK_MASTER=${SPARK_MASTER:-local}
MESOS_EXECUTOR_CORE=${MESOS_EXECUTOR_CORE:-0.1}
SPARK_IMAGE=${SPARK_IMAGE:-sparkmesos:latest}
CURRENT_IP=$(hostname -i)
sed -i 's;SPARK_MASTER;'$SPARK_MASTER';g' /opt/spark/conf/spark-defaults.conf
sed -i 's;MESOS_EXECUTOR_CORE;'$MESOS_EXECUTOR_CORE';g' /opt/spark/conf/spark-defaults.conf
sed -i 's;SPARK_IMAGE;'$SPARK_IMAGE';g' /opt/spark/conf/spark-defaults.conf
sed -i 's;CURRENT_IP;'$CURRENT_IP';g' /opt/spark/conf/spark-defaults.conf
export SPARK_LOCAL_IP=${SPARK_LOCAL_IP:-${CURRENT_IP:-"127.0.0.1"}}
export SPARK_PUBLIC_DNS=${SPARK_PUBLIC_DNS:-${CURRENT_IP:-"127.0.0.1"}}
if [ $ADDITIONAL_VOLUMES ]; then
  echo "spark.mesos.executor.docker.volumes: $ADDITIONAL_VOLUMES" >> /opt/spark/conf/spark-defaults.conf
fi
exec "$@"

Let's build the image so we can start using it:

docker build -t sparkmesos . && docker tag -f sparkmesos:latest sparkmesos:latest

Step 4: Running a Spark application with Docker

Now that the image is built, we just need to run it. We will start the PySpark shell:

docker run -it --rm -e SPARK_MASTER="mesos://zk://192.168.99.100:2181/mesos" -e SPARK_IMAGE="sparkmesos:latest" -e PYSPARK_DRIVER_PYTHON=ipython2 sparkmesos:latest /opt/spark/bin/pyspark

To make sure that SciPy is working, let's type the following into the PySpark shell:

from scipy import special, optimize
import numpy as np
f = lambda x: -special.jv(3, x)
sol = optimize.minimize(f, 1.0)
x = np.linspace(0, 10, 5000)
x

Now, let's try to calculate Pi as an example:

docker run -it --rm -e SPARK_MASTER="mesos://zk://192.168.99.100:2181/mesos" -e SPARK_IMAGE="sparkmesos:latest" -e PYSPARK_DRIVER_PYTHON=ipython2 sparkmesos:latest /opt/spark/bin/spark-submit --driver-memory 500M --executor-memory 500M /opt/spark/examples/src/main/python/pi.py 10

As an additional smoke test, you can submit a tiny job that reports which hosts its tasks actually ran on; a sketch follows.
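The following is a minimal, hypothetical PySpark job; the file name check_executors.py and the idea of copying or mounting it into the container are assumptions for illustration and are not part of the original article. Submitted with the same docker run and spark-submit invocation used for pi.py, it simply counts how many tasks each executor host ran, which confirms that executors are being launched through the Mesos Docker integration.

# check_executors.py -- hypothetical smoke-test job: count tasks per executor host.
import socket
from pyspark import SparkContext

sc = SparkContext(appName="mesos-docker-smoke-test")
# Spread 100 trivial tasks over 20 partitions and record the hostname that ran each one.
hosts = (sc.parallelize(range(100), 20)
           .map(lambda _: socket.gethostname())
           .countByValue())
print(dict(hosts))
sc.stop()

On a single-node setup such as this one you will see just one hostname, but the same script becomes more informative once additional Mesos slaves are added.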
Conclusion and further notes

Although we were able to run a Spark application within a Docker container leveraging Apache Mesos, there is more work to do. We need to explore containerized Spark applications that are spread across multiple nodes, along with a mechanism that enables network port mapping.

References

Apache Mesos, The Apache Software Foundation, 2015. Web. 27 Jan. 2016.
Apache Spark, The Apache Software Foundation, 2015. Web. 27 Jan. 2016.
Benjamin Hindman, "Apache Mesos NYC Meetup", August 20, 2013. Web. 27 Jan. 2016.
Docker, Docker Inc., 2015. Web. 27 Jan. 2016.
Hindman, Konwinski, Zaharia, Ghodsi, D. Joseph, Katz, Shenker, Stoica. "Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center". Web. 27 Jan. 2016.
Mesosphere, Mesosphere Inc., 2015. Web. 27 Jan. 2016.
SciPy, SciPy developers, 2015. Web. 28 Jan. 2016.
VirtualBox, Oracle Inc., 2015. Web. 28 Jan. 2016.
Wang Qiang, "Docker Spark Mesos". Web. 28 Jan. 2016.

About the Author

Bernardo Gomez Palacio is a consulting member of technical staff, Big Data Cloud Services, at Oracle Cloud. He is an electronic systems engineer who has spent more than 12 years developing software and more than 6 years working on DevOps. He currently develops infrastructure that aids the creation and deployment of big data applications. He is a supporter of open source software and has a particular interest in Apache Mesos, Apache Spark, distributed file systems, and Docker containerization and networking. His opinions are his own and do not reflect the opinions of his employer.
Wrappers

Packt
27 May 2016
13 min read
In this article by Erik Westra, author of the book Modular Programming with Python, we learn the concepts of wrappers. A wrapper is essentially a group of functions that call other functions to do the work. Wrappers are used to simplify an interface, to make a confusing or badly designed API easier to use, to convert data formats into something more convenient, and to implement cross-language compatibility. Wrappers are also sometimes used to add testing and error-checking code to an existing API. Let's take a look at a real-world application of a wrapper module. Imagine that you work for a large bank and have been asked to write a program to analyze fund transfers to help identify possible fraud. Your program receives information, in real time, about every inter-bank funds transfer that takes place. For each transfer, you are given: The amount of the transfer The ID of the branch in which the transfer took place The identification code for the bank the funds are being sent to Your task is to analyze the transfers over time to identify unusual patterns of activity. To do this, you need to calculate, for each of the last eight days, the total value of all transfers for each branch and destination bank. You can then compare the current day's totals against the average for the previous seven days, and flag any daily totals that are more than 50% above the average. You start by deciding how to represent the total transfers for a day. Because you need to keep track of this for each branch and destination bank, it makes sense to store these totals in a two-dimensional array: In Python, this type of two-dimensional array is represented as a list of lists: totals = [[0, 307512, 1612, 0, 43902, 5602918], [79400, 3416710, 75, 23508, 60912, 5806], ... ] You can then keep a separate list of the branch ID for each row and another list holding the destination bank code for each column: branch_ids = [125000249, 125000252, 125000371, ...] bank_codes = ["AMERUS33", "CERYUS33", "EQTYUS44", ...] Using these lists, you can calculate the totals for a given day by processing the transfers that took place on that particular day: totals = [] for branch in branch_ids: branch_totals = [] for bank in bank_codes: branch_totals.append(0) totals.append(branch_totals) for transfer in transfers_for_day: branch_index = branch_ids.index(transfer['branch']) bank_index = bank_codes.index(transfer['dest_bank']) totals[branch_index][bank_index] += transfer['amount'] So far so good. Once you have these totals for each day, you can then calculate the average and compare it against the current day's totals to identify the entries that are higher than 150% of the average. Let's imagine that you've written this program and managed to get it working. When you start using it, though, you immediately discover a problem: your bank has over 5,000 branches, and there are more than 15,000 banks worldwide that your bank can transfer funds to—that's a total of 75 million combinations that you need to keep totals for, and as a result, your program is taking far too long to calculate the totals. To make your program faster, you need to find a better way of handling large arrays of numbers. Fortunately, there's a library designed to do just this: NumPy. NumPy is an excellent array-handling library. You can create huge arrays and perform sophisticated operations on an array with a single function call. Unfortunately, NumPy is also a dense and impenetrable library. It was designed and written for people with a deep understanding of mathematics. 
While there are many tutorials available and you can generally figure out how to use it, the code that uses NumPy is often hard to comprehend. For example, to calculate the average across multiple matrices would involve the following: daily_totals = [] for totals in totals_to_average: daily_totals.append(totals) average = numpy.mean(numpy.array(daily_totals), axis=0) Figuring out what that last line does would require a trip to the NumPy documentation. Because of the complexity of the code that uses NumPy, this is a perfect example of a situation where a wrapper module can be used: the wrapper module can provide an easier-to-use interface to NumPy, so your code can use it without being cluttered with complex and confusing function calls. To work through this example, we'll start by installing the NumPy library. NumPy (http://www.numpy.org) runs on Mac OS X, Windows, and Linux machines. How you install it depends on which operating system you are using: For Mac OS X, you can download an installer from http://www.kyngchaos.com/software/python. For MS Windows, you can download a Python "wheel" file for NumPy from http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy. Choose the pre-built version of NumPy that matches your operating system and the desired version of Python. To use the wheel file, use the pip install command, for example, pip install numpy-1.10.4+mkl-cp34-none-win32.whl. For more information about installing Python wheels, refer to https://pip.pypa.io/en/latest/user_guide/#installing-from-wheels. If your computer runs Linux, you can use your Linux package manager to install NumPy. Alternatively, you can download and build NumPy in source code form. To ensure that NumPy is working, fire up your Python interpreter and enter the following: import numpy a = numpy.array([[1, 2], [3, 4]]) print(a) All going well, you should see a 2 x 2 matrix displayed: [[1 2] [3 4]] Now that we have NumPy installed, let's start working on our wrapper module. Create a new Python source file, named numpy_wrapper.py, and enter the following into this file: import numpy That's all for now; we'll add functions to this wrapper module as we need them. Next, create another Python source file, named detect_unusual_transfers.py, and enter the following into this file: import random import numpy_wrapper as npw BANK_CODES = ["AMERUS33", "CERYUS33", "EQTYUS44", "LOYDUS33", "SYNEUS44", "WFBIUS6S"] BRANCH_IDS = ["125000249", "125000252", "125000371", "125000402", "125000596", "125001067"] As you can see, we are hardwiring the bank and branch codes for our example; in a real program, these values would be loaded from somewhere, such as a file or a database. Since we don't have any available data, we will use the random module to create some. We are also changing the name of the numpy_wrapper module to make it easier to access from our code. Let's now create some funds transfer data to process, using the random module: days = [1, 2, 3, 4, 5, 6, 7, 8] transfers = [] for i in range(10000): day = random.choice(days) bank_code = random.choice(BANK_CODES) branch_id = random.choice(BRANCH_IDS) amount = random.randint(1000, 1000000) transfers.append((day, bank_code, branch_id, amount)) Here, we randomly select a day, a bank code, a branch ID, and an amount, storing these values in the transfers list. Our next task is to collate this information into a series of arrays. This allows us to calculate the total value of the transfers for each day, grouped by the branch ID and destination bank. 
To do this, we'll create a NumPy array for each day, where the rows in each array represent branches and the columns represent destination banks. We'll then go through the list of transfers, processing them one by one. The following illustration summarizes how we process each transfer in turn: First, we select the array for the day on which the transfer occurred, and then we select the appropriate row and column based on the destination bank and the branch ID. Finally, we add the amount of the transfer to that item within the day's array. Let's implement this logic. Our first task is to create a series of NumPy arrays, one for each day. Here, we immediately hit a snag: NumPy has many different options for creating arrays; in this case, we want to create an array that holds integer values and has its contents initialized to zero. If we used NumPy directly, our code would look like the following: array = numpy.zeros((num_rows, num_cols), dtype=numpy.int32) This is not exactly easy to understand, so we're going to move this logic into our NumPy wrapper module. Edit the numpy_wrapper.py file, and add the following to the end of this module: def new(num_rows, num_cols): return numpy.zeros((num_rows, num_cols), dtype=numpy.int32) Now, we can create a new array by calling our wrapper function (npw.new()) and not have to worry about the details of how NumPy works at all. We have simplified the interface to this particular aspect of NumPy: Let's now use our wrapper function to create the eight arrays that we will need, one for each day. Add the following to the end of the detect_unusual_transfers.py file: transfers_by_day = {} for day in days: transfers_by_day[day] = npw.new(num_rows=len(BANK_CODES), num_cols=len(BRANCH_IDS)) Now that we have our NumPy arrays, we can use them as if they were nested Python lists. For example: array[row][col] = array[row][col] + amount We just need to choose the appropriate array, and calculate the row and column numbers to use. Here is the necessary code, which you should add to the end of your detect_unusual_transfers.py script: for day,bank_code,branch_id,amount in transfers: array = transfers_by_day[day] row = BRANCH_IDS.index(branch_id) col = BANK_CODES.index(bank_code) array[row][col] = array[row][col] + amount Now that we've collated the transfers into eight NumPy arrays, we want to use all this data to detect any unusual activity. For each combination of branch ID and destination bank code, we will need to do the following: Calculate the average of the first seven days' activity. Multiply the calculated average by 1.5. If the activity on the eighth day is greater than the average multiplied by 1.5, then we consider this activity to be unusual. Of course, we need to do this for every row and column in our arrays, which would be very slow; this is why we're using NumPy. So, we need to calculate the average for multiple arrays of numbers, then multiply the array of averages by 1.5, and finally, compare the values within the multiplied array against the array for the eighth day of data. Fortunately, these are all things that NumPy can do for us. We'll start by collecting together the seven arrays we need to average, as well as the array for the eighth day. 
To do this, add the following to the end of your program: latest_day = max(days) transfers_to_average = [] for day in days: if day != latest_day: transfers_to_average.append(transfers_by_day[day]) current = transfers_by_day[latest_day] To calculate the average of a list of arrays, NumPy requires us to use the following function call: average = numpy.mean(numpy.array(arrays_to_average), axis=0) Since this is confusing, we will move this function into our wrapper. Add the following code to the end of the numpy_wrapper.py module: def average(arrays_to_average): return numpy.mean(numpy.array(arrays_to_average), axis=0) This lets us calculate the average of the seven day's activity using a single call to our wrapper function. To do this, add the following to the end of your detect_unusual_transfers.py script: average = npw.average(transfers_to_average) As you can see, using the wrapper makes our code much easier to understand. Our next task is to multiply the array of calculated averages by 1.5, and compare the result against the current day's totals. Fortunately, NumPy makes this easy: unusual_transfers = current > average * 1.5 Because this code is so clear, there's no advantage in creating a wrapper function for it. The resulting array, unusual_transfers, will be the same size as our current and average arrays, where each entry in the array is either True or False: We're almost done; our final task is to identify the array entries with a value of True, and tell the user about the unusual activity. While we could scan through every row and column to find the True entries, using NumPy is much faster. The following NumPy code will give us a list containing the row and column numbers for the True entries in the array: indices = numpy.transpose(array.nonzero()) True to form, though, this code is hard to understand, so it's a perfect candidate for another wrapper function. Go back to your numpy_wrapper.py module, and add the following to the end of the file: def get_indices(array): return numpy.transpose(array.nonzero()) This function returns a list (actually an array) of (row,col) values for all the True entries in the array. Back in our detect_unusual_activity.py file, we can use this function to quickly identify the unusual activity: for row,col in npw.get_indices(unusual_transfers): branch_id = BRANCH_IDS[row] bank_code = BANK_CODES[col] average_amt = int(average[row][col]) current_amt = current[row][col] print("Branch {} transferred ${:,d}".format(branch_id, current_amt) + " to bank {}, average = ${:,d}".format(bank_code, average_amt)) As you can see, we use the BRANCH_IDS and BANK_CODES lists to convert from the row and column number back to the relevant branch ID and bank code. We also retrieve the average and current amounts for the suspicious activity. Finally, we print out this information to warn the user about the unusual activity. If you run your program, you should see an output that looks something like this: Branch 125000371 transferred $24,729,847 to bank WFBIUS6S, average = $14,954,617 Branch 125000402 transferred $26,818,710 to bank CERYUS33, average = $16,338,043 Branch 125001067 transferred $27,081,511 to bank EQTYUS44, average = $17,763,644 Because we are using random numbers for our financial data, the output will be random too. Try running the program a few times; you may not get any output at all if none of the randomly-generated values are suspicious. 
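If you want a quick way to convince yourself that the wrapper functions behave as expected, the following stand-alone check is one possibility; it is a hypothetical addition, not part of the book's example, and assumes the numpy_wrapper.py module developed above is importable from the same directory.

# test_numpy_wrapper.py -- a hypothetical sanity check for the wrapper functions
# developed above: new(), average(), and get_indices().
import numpy_wrapper as npw

def test_wrapper():
    a = npw.new(2, 2)                 # 2 x 2 integer array of zeros
    assert a.sum() == 0
    b = npw.new(2, 2)
    b[0][0] = 4
    avg = npw.average([a, b])         # element-wise mean of the two arrays
    assert avg[0][0] == 2.0
    flags = b > avg * 1.5             # the same comparison used by the detector
    assert [tuple(idx) for idx in npw.get_indices(flags)] == [(0, 0)]

test_wrapper()
print("numpy_wrapper functions behave as expected")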
Of course, we are not really interested in detecting suspicious financial activity—this example is just an excuse for working with NumPy. What is far more interesting is the wrapper module that we created, hiding the complexity of the NumPy interface so that the rest of our program can concentrate on the job to be done. If we were to continue developing our unusual activity detector, we would no doubt add more functionality to our numpy_wrapper.py module as we found more NumPy functions that we wanted to wrap. Summary This is just one example of a wrapper module. As we mentioned earlier, simplifying a complex and confusing API is just one use for a wrapper module; they can also be used to convert data from one format to another, add testing and error-checking code to an existing API, and call functions that are written in a different language. Note that, by definition, a wrapper is always thin—while there might be code in a wrapper (for example, to convert a parameter from an object into a dictionary), the wrapper function always ends up calling another function to do the actual work.
Security Considerations in Multitenant Environment

Packt
24 May 2016
8 min read
In this article by Zoran Pavlović and Maja Veselica, authors of the book Oracle Database 12c Security Cookbook, we will be introduced to common privileges and learn how to grant privileges and roles commonly. We'll also study the effects of plugging and unplugging operations on users, roles, and privileges. (For more resources related to this topic, see here.)

Granting privileges and roles commonly

A common privilege is a privilege that can be exercised across all containers in a container database. Depending only on the way it is granted, a privilege becomes common or local. When you grant a privilege commonly (across all containers), it becomes a common privilege. Only common users or roles can have common privileges, and only a common role can be granted commonly.

Getting ready

For this recipe, you will need to connect to the root container as an existing common user who is able to grant a specific privilege or an existing role (in our case: create session, select any table, c##role1, c##role2) to another existing common user (c##john). If you want to try out the examples in the How it works section given ahead, you should open pdb1 and pdb2. You will use:

Common users c##maja and c##zoran with the dba role granted commonly
Common user c##john
Common roles c##role1 and c##role2

How to do it...

You should connect to the root container as a common user who can grant these privileges and roles (for example, c##maja or the system user):
SQL> connect c##maja@cdb1
Grant a privilege (for example, create session) to a common user (for example, c##john) commonly:
c##maja@CDB1> grant create session to c##john container=all;
Grant a privilege (for example, select any table) to a common role (for example, c##role1) commonly:
c##maja@CDB1> grant select any table to c##role1 container=all;
Grant a common role (for example, c##role1) to a common role (for example, c##role2) commonly:
c##maja@CDB1> grant c##role1 to c##role2 container=all;
Grant a common role (for example, c##role2) to a common user (for example, c##john) commonly:
c##maja@CDB1> grant c##role2 to c##john container=all;

How it works...

You can grant privileges or common roles commonly only to a common user, and you need to connect to the root container as a common user who is able to grant the specific privilege or role.

In step 2, the create session system privilege is granted to the common user c##john commonly by adding the container=all clause to the grant statement. This means that user c##john can connect (create a session) to the root or any pluggable database in this container database, including all pluggable databases that will be plugged in in the future. Note that the container=all clause is NOT optional, even though you are connected to the root. This is unlike the creation of common users and roles, where omitting container=all still creates the user or role in all containers (commonly); if you omit this clause when granting a privilege or role, the privilege or role is granted locally and can be exercised only in the root container.

SQL> connect c##john/oracle@cdb1
c##john@CDB1> connect c##john/oracle@pdb1
c##john@PDB1> connect c##john/oracle@pdb2
c##john@PDB2>

In step 3, the select any table system privilege is granted to the common role c##role1 commonly. This means that role c##role1 contains the select any table privilege in all containers (the root and the pluggable databases).
c##zoran@CDB1> select * from role_sys_privs where role='C##ROLE1'; ROLE PRIVILEGE ADM COM ------------- ----------------- --- --- C##ROLE1 SELECT ANY TABLE NO YES c##zoran@CDB1> connect c##zoran/oracle@pdb1 c##zoran@PDB1> select * from role_sys_privs where role='C##ROLE1'; ROLE PRIVILEGE ADM COM -------------- ------------------ --- --- C##ROLE1 SELECT ANY TABLE NO YES c##zoran@PDB1> connect c##zoran/oracle@pdb2 c##zoran@PDB2> select * from role_sys_privs where role='C##ROLE1'; ROLE PRIVILEGE ADM COM -------------- ---------------- --- --- C##ROLE1 SELECT ANY TABLE NO YES In step 4, common role c##role1, is granted to another common role c##role2 commonly. This means that role c##role2 has granted role c##role1 in all containers. c##zoran@CDB1> select * from role_role_privs where role='C##ROLE2'; ROLE GRANTED ROLE ADM COM --------------- --------------- --- --- C##ROLE2 C##ROLE1 NO YES c##zoran@CDB1> connect c##zoran/oracle@pdb1 c##zoran@PDB1> select * from role_role_privs where role='C##ROLE2'; ROLE GRANTED_ROLE ADM COM ------------- ----------------- --- --- C##ROLE2 C##ROLE1 NO YES c##zoran@PDB1> connect c##zoran/oracle@pdb2 c##zoran@PDB2> select * from role_role_privs where role='C##ROLE2'; ROLE GRANTED_ROLE ADM COM ------------- ------------- --- --- C##ROLE2 C##ROLE1 NO YES In step 5, common role c##role2, is granted to common user c##john commonly. This means that user c##john has c##role2 in all containers. Consequently, user c##john can use select any table privilege in all containers in this container database. c##john@CDB1> select count(*) from c##zoran.t1; COUNT(*) ---------- 4 c##john@CDB1> connect c##john/oracle@pdb1 c##john@PDB1> select count(*) from hr.employees; COUNT(*) ---------- 107 c##john@PDB1> connect c##john/oracle@pdb2 c##john@PDB2> select count(*) from sh.sales; COUNT(*) ---------- 918843 Effects of plugging/unplugging operations on users, roles, and privileges Purpose of this recipe is to show what is going to happen to users, roles, and privileges when you unplug a pluggable database from one container database (cdb1) and plug it into some other container database (cdb2). Getting ready To complete this recipe, you will need: Two container databases (cdb1 and cdb2) One pluggable database (pdb1) in container database cdb1 Local user mike in pluggable database pdb1 with local create session privilege Common user c##john with create session common privilege and create synonym local privilege on pluggable database pdb1 How to do it... Connect to the root container of cdb1 as user sys: SQL> connect sys@cdb1 as sysdba Unplug pdb1 by creating XML metadata file: SQL> alter pluggable database pdb1 unplug into '/u02/oradata/pdb1.xml'; Drop pdb1 and keep datafiles: SQL> drop pluggable database pdb1 keep datafiles; Connect to the root container of cdb2 as user sys: SQL> connect sys@cdb2 as sysdba Create (plug) pdb1 to cdb2 by using previously created metadata file: SQL> create pluggable database pdb1 using '/u02/oradata/pdb1.xml' nocopy; How it works... By completing previous steps, you unplugged pdb1 from cdb1 and plugged it into cdb2. After this operation, all local users and roles (in pdb1) are migrated with pdb1 database. If you try to connect to pdb1 as a local user: SQL> connect mike@pdb1 It will succeed. All local privileges are migrated, even if they are granted to common users/roles. 
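As a quick way to double-check the effect of these grants from a client program, the following sketch uses the cx_Oracle driver; the host, port, listener service names, and password are assumptions made purely for illustration and do not appear in the original recipe.

# Hypothetical connectivity check: the commonly granted CREATE SESSION privilege
# should let c##john open a session in the root and in every pluggable database.
import cx_Oracle  # assumes the Oracle client libraries are installed

for service in ("cdb1", "pdb1", "pdb2"):          # assumed service names
    dsn = cx_Oracle.makedsn("localhost", 1521, service_name=service)
    conn = cx_Oracle.connect("c##john", "oracle", dsn)
    cur = conn.cursor()
    cur.execute("select sys_context('USERENV', 'CON_NAME') from dual")
    print(service, "->", cur.fetchone()[0])       # prints the container name
    conn.close()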
However, if you try to connect to pdb1 as a previously created common user c##john, you'll get an error SQL> connect c##john@pdb1 ERROR: ORA-28000: the account is locked Warning: You are no longer connected to ORACLE. This happened because after migration, common users are migrated in a pluggable database as locked accounts. You can continue to use objects in these users' schemas, or you can create these users in root container of a new CDB. To do this, we first need to close pdb1: sys@CDB2> alter pluggable database pdb1 close; Pluggable database altered. sys@CDB2> create user c##john identified by oracle container=all; User created. sys@CDB2> alter pluggable database pdb1 open; Pluggable database altered. If we try to connect to pdb1 as user c##john, we will get an error: SQL> conn c##john/oracle@pdb1 ERROR: ORA-01045: user C##JOHN lacks CREATE SESSION privilege; logon denied Warning: You are no longer connected to ORACLE. Even though c##john had create session common privilege in cdb1, he cannot connect to the migrated PDB. This is because common privileges are not migrated! So we need to give create session privilege (either common or local) to user c##john. sys@CDB2> grant create session to c##john container=all; Grant succeeded. Let's try granting a create synonym local privilege to the migrated pdb2: c##john@PDB1> create synonym emp for hr.employees; Synonym created. This proves that local privileges are always migrated. Summary In this article, we learned about common privileges and the methods to grant common privileges and roles to users. We also studied what happens to users, roles, and privileges when you unplug a pluggable database from one container database and plug it into some other container database. Resources for Article: Further resources on this subject: Oracle 12c SQL and PL/SQL New Features[article] Oracle GoldenGate 12c — An Overview[article] Backup and Recovery for Oracle SOA Suite 12C[article]
Mobile Forensics

Packt
24 May 2016
15 min read
In this article by Soufiane Tahiri, the author of Mastering Mobile Forensics, we will look at the basics of smartphone forensics. Smartphone forensics is a relatively new and quickly emerging field of interest within the digital forensic community and law enforcement, as today's mobile devices are getting smarter, cheaper, and more easily available for common daily use. (For more resources related to this topic, see here.)

To investigate the growing number of digital crimes and complaints, researchers have put a lot of effort into designing the most affordable investigative models; in this article, we will emphasize the importance of paying real attention to the growing smartphone market and to the efforts made in this field from a digital forensic point of view, in order to design the most comprehensive investigation process.

Smartphone forensics models

Given the pace at which mobile technology grows and the variety of complexities produced by today's mobile data, forensic examiners face serious adaptation problems, so developing and adopting standards makes sense. The reliability of evidence depends directly on the investigative process adopted; skipping a step, whether deliberately or accidentally, may (and almost certainly will) lead to incomplete evidence and increase the risk of rejection in a court of law.

Today, there is no standard or unified model adapted to acquiring evidence from smartphones. The dramatic development of smart devices suggests that any forensic examiner will have to apply as many independent models as necessary in order to collect and preserve data. As in any forensic investigation, several approaches and techniques can be used to acquire, examine, and analyze data from a mobile device. This section proposes a process that summarizes guidelines from different standards and models (SWGDE Best Practices for Mobile Phone Forensics, NIST Guidelines on Mobile Device Forensics, and Developing Process for Mobile Device Forensics by Det. Cynthia A. Murphy). The overall process is as follows:

Evidence intake: This triggers the examination process. This step should be documented.
Identification: The examiner needs to identify the device's capabilities and specifications, and should document everything that takes place during the identification process.
Preparation: The examiner should prepare the tools and methods to use, and must document them.
Securing and preserving evidence: The examiner should protect the evidence and secure the scene, as well as isolate the device from all networks. The examiner needs to be vigilant when documenting the scene.
Processing: At this stage, the examiner performs the actual (and technical) data acquisition and analysis, and documents the steps and tools used and all findings.
Verification and validation: The examiner should be sure of the integrity of the findings and must validate the acquired data and evidence in this step. This step should be documented as well.
Reporting: The examiner produces a final report documenting the process and findings.
Presentation: This stage is meant to exhibit and present the findings.
Archiving: At the end of the forensic process, the examiner should preserve the data, report, tools, and all findings in common formats for eventual later use.

Low-level techniques

Digital forensic examiners can neither always nor exclusively rely on commercially available tools; handling low-level techniques is a must. This section will also cover techniques for extracting strings from different objects (for example, smartphone memory images); a minimal illustration of such a string dump is sketched below.
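The following sketch is not taken from the book; it assumes a raw image file at a hypothetical path and simply extracts runs of printable ASCII characters, much like the Unix strings utility, as a first pass over a memory or partition dump.

# Minimal string dump over a raw image file (hypothetical path), in the spirit
# of the Unix `strings` utility: extract runs of printable ASCII characters.
import re

def dump_strings(image_path, min_len=6):
    with open(image_path, "rb") as f:
        data = f.read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len   # printable ASCII runs of at least min_len bytes
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

for line in dump_strings("userdata_partition.dd")[:20]:   # assumed dump file name
    print(line)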
Any digital examiner should be familiar with concepts and techniques such as the following:

File carving: The process of extracting a collection of data from a larger data set, applied here to a digital investigation. File carving extracts data from unallocated filesystem space using the inner structure of file types rather than the filesystem structure, which means the extraction process relies principally on file headers and trailers.

Extracting metadata: Loosely, metadata is data that describes data, or information about information. In general, metadata is hidden, extra information that is generated and embedded automatically in a digital file. The definition of metadata differs depending on the context in which it is used and the community that refers to it; metadata can be considered machine-understandable information, or a record that describes digital records. Metadata can be subdivided into three important types: descriptive (elements such as author, title, abstract, and keywords), structural (describing how an object is constituted and how its elements are arranged), and administrative (elements such as the date and time of creation, the data type, and other technical details).

String dump and analysis: Most digital investigations rely on textual evidence, simply because most stored digital data is linguistic, for instance, logged conversations. A lot of important text-based evidence can be gathered by dumping strings from images (smartphone memory dumps), including emails, instant messages, address books, browsing history, and so on. Most currently available digital forensic tools rely on matching and indexing algorithms to search textual evidence at the physical level, so they examine every byte to locate specific text strings.

Encryption versus encoding versus hashing: The important thing to keep in mind is that encoding, encrypting, and hashing do not mean the same thing at all:
Encoding is meant for data usability; it can be reversed using the same algorithm and requires no key.
Encrypting is meant for confidentiality; it is reversible and, depending on the algorithm, relies on one or more keys to encrypt and decrypt.
Hashing is meant for data integrity; it cannot, in theory, be reversed and depends on no keys.

Decompiling and disassembling: These are reverse engineering processes that do the opposite of what a compiler and an assembler do.
A decompiler translates a compiled binary's low-level, machine-readable code into human-readable, high-level code. The accuracy of decompilers depends on many factors, such as the amount of metadata present in the code being decompiled and the complexity of the code (in terms of the sophistication of the high-level code used, not of its algorithms).
A disassembler's output is to some extent processor dependent. It maps processor instructions to mnemonics, in contrast to a decompiler's output, which is far more complicated to understand and edit.

iDevices forensics

Similar to all Apple operating systems, iOS is derived from Mac OS X; thus, iOS uses Hierarchical File System Plus (HFS+) as its primary file system.
HFS+ replaces the first developed filesystem HFS and is considered to be an enhanced version of HFS, but they are still architecturally very similar. The main improvements seen in HFS+ are: A decrease in disk space usage on large volumes (efficient use of disk space) Internationally-friendly file names (by the use of UNICODE instead of MacRoman) Allows future systems to use and extend files/folder's metadata HFS+ divides the total space on a volume (file that contains data and structure to access this data) into allocation blocks and uses 32-bit fields to identify them, meaning that this allows up to 2^32 blocks on a given volume which "simply" means that a volume can hold more files. All HFS+ volumes respect a well-defined structure and each volume contains a volume header, a catalog file, extents overflow file, attributes file, allocation file, and startup file. In addition, all Apple' iDevices have a combined built-in hardware/software advanced security and can be categorized according to Apple's official iOS Security Guide as: System security: Integrated software and hardware platform Encryption and data protection: Mechanisms implemented to protect data from unauthorized use Application security: Application sandboxing Network security: Secure data transmission Apple Pay: Implementation of secure payments Internet services: Apple's network of messaging, synchronizing, and backuping Device controls: Remotely wiping the device if it is lost or stolen Privacy control: Capabilities of control access to geolocation and user data When dealing with seizure, it's important to turn on Airplane mode and if the device is unlocked, set auto-lock to never and check whether passcode was set or not (Settings | Passcode). If you are dealing with a passcode, try to keep the phone charged if you cannot acquire its content immediately; if no passcode was set, turn off the device. There are four different acquisition methods when talking about iDevices: Normal or Direct, this is the most perfect case where you can deal directly with a powered on device; Logical Acquisition, when acquisition is done using iTunes backup or a forensic tool that uses AFC protocol and is in general not complete when emails, geolocation database, apps cache folder, and executables are missed; Advanced Logical Acquisition, a technique introduced by Jonathan Zdziarski (http://www.zdziarski.com/blog/) but no longer possible due to the introduction of iOS 8; and Physical Acquisition that generates a forensic bit-by-bit image of both system and data partitions. Before selecting (or not, because the method to choose depends on some parameters) one method, the examiner should answer three important questions: What is the device model? What is the iOS version installed? Is the device passcode protected? Is it a simple passcode? Is it a complex passcode? Android forensics Android is an open source Linux based operating system, it was first developed by Android Inc. in 2003; then in 2005 it was acquired by Google and was unveiled in 2007. The Android operating system is like most of operating systems; it consists of a stack of software components roughly divided into four main layers and five main sections, as shown on the image from https://upload.wikimedia.org/wikipedia/commons/a/af/Android-System-Architecture.svg) and each layer provides different services to the layer above. 
Understanding every smartphone's OS security model is a big deal in a forensic context, all vendors and smartphones manufacturers care about securing their user's data and in most of the cases the security model implemented can cause a real headache to every forensic examiner and Android is no exception to the rule. Android, as you know, is an open source OS built on the Linux Kernel and provides an environment offering the ability to run multiple applications simultaneously, each application is digitally signed and isolated in its very own sandbox. Each application sandbox defines the application's privileges. Above the Kernel all activities have constrained access to the system. Android OS implements many security components and has many considerations of its various layers; the following figure summarizes Android security architecture on ARM with TrustZone support: Without any doubt, lock screens represent the very first starting point in every mobile forensic examination. As for all smartphone's OS, Android offers a way to control access to a given device by requiring user authentication. The problem with recent implementations of lock screen in modern operating systems in general, and in Android since it is the point of interest of this section, is that beyond controlling access to the system user interface and applications, the lock screens have now been extended with more "fancy" features (showing widgets, switching users in multi-users devices, and so on) and more forensically challenging features, such as unlocking the system keystore to derive the key-encryption key (used among the disk encryption key) as well as the credential storage encryption key. The problem with bypassing lock screens (also called keyguards) is that techniques that can be used are very version/device dependent, thus there is neither a generalized method nor all-time working techniques. Android keyguard is basically an Android application whose window lives on a high window layer with the possibility of intercepting navigation buttons, in order to produce the lock effect. Each unlock method (PIN, password, pattern and face unlock) is a view component implementation hosted by the KeyguardHostView view container class. All of the methods/modes, used to secure an android device, are activated by setting the current selected mode in the enumerable SecurityMode of the class KeyguardSecurityModel. The following is the KeyguardSecurityModel.SecurityModeimplementation, as seen from Android open source project:     enum SecurityMode {         Invalid, // NULL state         None, // No security enabled         Pattern, // Unlock by drawing a pattern.         Password, // Unlock by entering an alphanumeric password         PIN, // Strictly numeric password         Biometric, // Unlock with a biometric key (e.g. finger print or face unlock)         Account, // Unlock by entering an account's login and password.         SimPin, // Unlock by entering a sim pin.         
        SimPuk // Unlock by entering a sim puk
    }

Before starting our bypass and lock-cracking techniques, note that dealing with system files or "system protected files" assumes that the device you are handling meets some requirements:

Using Android Debug Bridge (ADB)
The device must be rooted
USB debugging should be enabled on the device
Booting into a custom recovery mode
JTAG/chip-off to acquire a physical bit-by-bit copy

Windows Phone forensics

Based on the Windows NT kernel, Windows Phone 8.x uses the Core System to boot, manage hardware, authenticate, and communicate on networks. The Core System is a minimal Windows system that contains low-level security features and is supplemented by a set of Windows Phone specific binaries from the Mobile Core to handle phone-specific tasks, which makes it the only architectural entity distinct from desktop-based Windows in Windows Phone. Windows and Windows Phone are completely aligned at the Windows Core System and run exactly the same code at this level. The shared core actually consists of the Windows Core System and the Mobile Core, where the APIs are the same but the code behind them is tuned to mobile needs.

Similar to most mobile operating systems, Windows Phone has a fairly layered architecture. The kernel and OS layers are mainly provided and supported by Microsoft, but some layers are provided by Microsoft's partners depending on the hardware: the board support package (BSP), which usually consists of a set of drivers and support libraries that handle low-level hardware interaction and the boot process, is created by the CPU supplier; then come the original equipment manufacturers (OEMs) and independent hardware vendors (IHVs), which write the drivers required to support the phone hardware and specific components. (The original article includes a high-level diagram describing the Windows Phone architecture organized by layer and ownership.)

There are three main partitions on a Windows Phone that are forensically interesting: the MainOS, Data, and Removable User Data partitions (the last is not visible in the original article's screenshot, which was taken on a Lumia 920 that does not support SD cards). As their respective names suggest, the MainOS partition contains all the Windows Phone operating system components, and the Data partition stores all the user's data, third-party applications, and all application states. The Removable User Data partition is considered by Windows Phone to be a separate volume and refers to all data stored on the SD card (on devices that support SD cards). Each of the previously named partitions respects a folder layout and can be mapped to its root folder with predefined Access Control Lists (ACLs). Each ACL is in the form of a list of access control entries (ACEs), and each ACE identifies the user account to which it applies (the trustee) and specifies the access rights allowed, denied, or audited for that trustee.

Windows Phone 8.1 is extremely challenging and different; specific forensic tools and techniques should be used in order to gather evidence. One of the interesting techniques is side loading, in which an agent is deployed to extract contacts and appointments from a WP8.1 device. To extract phonebook and appointment entries we will use WP Logical, a contacts and appointments acquisition tool designed to run under Windows Phone 8.1; once deployed and executed, it will create a folder with the name WPLogical_MDY__HMMSS_PM/AM under the public folder PhonePictures, where M=month, D=day, Y=year, H=hour, MM=minutes, and SS=seconds of the extraction date.
Inside the created folder you can find appointments__MDY__HMMSS_PM/AM.html and contacts_MDY__HMMSS_PM/AM.html. WP Logical will extract the following information (if found) regarding each appointment starting from 01/01/CurrentYear at 00:00:00 to 31/12/CurrentYear at 00:00:00: Subject Location Organizer Invitees Start time (UTC) Original start time Duration (in hours) Sensitivity Replay time Is organized by user? Is canceled? More details And the following information about each found contact: Display name First name Middle name Last name Phones (types: personal, office, home, and numbers) Important dates Emails (types: personal, office, home, and numbers) Websites Job info Addresses Notes Thumbnail WP Logical also allows the extraction of some device related information, such as Phone time zone, device's friendly name, Store Keeping Unit (SKU), and so on. Windows Phone 8.1 is relatively strict regarding application deployment; WP Logical can be deployed in two ways: Upload the compiled agent to Windows Store and get it signed by Microsoft, after that it will be available in the store for download. Deploy the agent directly to a developer unlocked device using Windows Phone Application Deployment utility. Summary In this article, we looked at forensics for iOS and Android devices. We also looked at some low-level forensic techniques. Resources for Article: Further resources on this subject: Mobile Forensics and Its Challanges [article] Introduction to Mobile Forensics [article] Forensics Recovery [article]