Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials

7019 Articles
article-image-gradient-descent-work
Packt
03 Feb 2016
11 min read
Save for later

Gradient Descent at Work

Packt
03 Feb 2016
11 min read
In this article by Alberto Boschetti and Luca Massaron authors of book Regression Analysis with Python, we will learn about gradient descent, its feature scaling and a simple implementation. (For more resources related to this topic, see here.) As an alternative from the usual classical optimization algorithms, the gradient descent technique is able to minimize the cost function of a linear regression analysis using much less computations. In terms of complexity, gradient descent ranks in the order O(n*p), thus making learning regression coefficients feasible even in the occurrence of a large n (that stands for the number of observations) and large p (number of variables). The method works by leveraging a simple heuristic that gradually converges to the optimal solution starting from a random one. Explaining it using simple words, it resembles walking blind in the mountains. If you want to descend to the lowest valley, even if you don't know and can't see the path, you can proceed approximately by going downhill for a while, then stopping, then directing downhill again, and so on, always directing at each stage where the surface descends until you arrive at a point when you cannot descend anymore. Hopefully, at that point, you will have reached your destination. In such a situation, your only risk is to pass by an intermediate valley (where there is a wood or a lake for instance) and mistake it for your desired arrival because the land stops descending there. In an optimization process, such a situation is defined as a local minimum (whereas your target is a global minimum, instead of the best minimum possible) and it is a possible outcome of your journey downhill depending on the function you are working on minimizing. The good news, in any case, is that the error function of the linear model family is a bowl-shaped one (technically, our cost function is a concave one) and it is unlikely that you can get stuck anywhere if you properly descend. The necessary steps to work out a gradient-descent-based solution are hereby described. Given our cost function for a set of coefficients (the vector w): We first start by choosing a random initialization for w by choosing some random numbers (taken from a standardized normal curve, for instance, having zero mean and unit variance). Then, we start reiterating an update of the values of w (opportunely using the gradient descent computations) until the marginal improvement from the previous J(w) is small enough to let us figure out that we have finally reached an optimum minimum. We can opportunely update our coefficients, separately one by one, by subtracting from each of them a portion alpha (α, the learning rate) of the partial derivative of the cost function: Here, in our formula, wj is to be intended as a single coefficient (we are iterating over them). After resolving the partial derivative, the final resolution form is: Simplifying everything, our gradient for the coefficient of x is just the average of our predicted values multiplied by their respective x value. We have to notice that by introducing more parameters to be estimated during the optimization procedure, we are actually introducing more dimensions to our line of fit (turning it into a hyperplane, a multidimensional surface) and such dimensions have certain communalities and differences to be taken into account. Alpha, called the learning rate, is very important in the process, because if it is too large, it may cause the optimization to detour and fail. 
You have to think of each gradient as a jump or as a run in a direction. If you fully take it, you may happen to pass over the optimum minimum and end up in another rising slope. Too many consecutive long steps may even force you to climb up the cost slope, worsening your initial position (given by a cost function that is its summed square, the loss of an overall score of fitness). Using a small alpha, the gradient descent won't jump beyond the solution, but it may take much longer to reach the desired minimum. How to choose the right alpha is a matter of trial and error. Anyway, starting from an alpha, such as 0.01, is never a bad choice based on our experience in many optimization problems. Naturally, the gradient, given the same alpha, will in any case produce shorter steps as you approach the solution. Visualizing the steps in a graph can really give you a hint about whether the gradient descent is working out a solution or not. Though quite conceptually simple (it is based on an intuition that we surely applied ourselves to move step by step where we can optimizing our result), gradient descent is very effective and indeed scalable when working with real data. Such interesting characteristics elevated it to be the core optimization algorithm in machine learning, not being limited to just the linear model's family, but also, for instance, extended to neural networks for the process of back propagation that updates all the weights of the neural net in order to minimize the training errors. Surprisingly, the gradient descent is also at the core of another complex machine learning algorithm, the gradient boosting tree ensembles, where we have an iterative process minimizing the errors using a simpler learning algorithm (a so-called weak learner because it is limited by an high bias) for progressing toward the optimization. Scikit-learn linear_regression and other linear models present in the linear methods module are actually powered by gradient descent, making Scikit-learn our favorite choice while working on data science projects with large and big data. Feature scaling While using the classical statistical approach, not the machine learning one, working with multiple features requires attention while estimating the coefficients because of their similarities that can cause a variance inflection of the estimates. Moreover, multicollinearity between variables also bears other drawbacks because it can render very difficult, if not impossible to achieve, matrix inversions, the matrix operation at the core of the normal equation coefficient estimation (and such a problem is due to the mathematical limitation of the algorithm). Gradient descent, instead, is not affected at all by reciprocal correlation, allowing the estimation of reliable coefficients even in the presence of perfect collinearity. Anyway, though being quite resistant to the problems that affect other approaches, gradient descent's simplicity renders it vulnerable to other common problems, such as the different scale present in each feature. In fact, some features in your data may be represented by the measurements in units, some others in decimals, and others in thousands, depending on what aspect of reality each feature represents. 
For instance, in the dataset we decide to take as an example, the Boston houses dataset (http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html), a feature is the average number of rooms (a float ranging from about 5 to over 8), others are the percentage of certain pollutants in the air (float between 0 and 1), and so on, mixing very different measurements. When it is the case that the features have a different scale, though the algorithm will be processing each of them separately, the optimization will be dominated by the variables with the more extensive scale. Working in a space of dissimilar dimensions will require more iterations before convergence to a solution (and sometimes, there could be no convergence at all). The remedy is very easy; it is just necessary to put all the features on the same scale. Such an operation is called feature scaling. Feature scaling can be achieved through standardization or normalization. Normalization rescales all the values in the interval between zero and one (usually, but different ranges are also possible), whereas standardization operates removing the mean and dividing by the standard deviation to obtain a unit variance. In our case, standardization is preferable both because it easily permits retuning the obtained standardized coefficients into their original scale and because, centering all the features at the zero mean, it makes the error surface more tractable by many machine learning algorithms, in a much more effective way than just rescaling the maximum and minimum of a variable. An important reminder while applying feature scaling is that changing the scale of the features implies that you will have to use rescaled features also for predictions. A simple implementation Let's try the algorithm first using the standardization based on the Scikit-learn preprocessing module: import numpy as np import random from sklearn.datasets import load_boston from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression   boston = load_boston() standardization = StandardScaler() y = boston.target X = np.column_stack((standardization.fit_transform(boston.data), np.ones(len(y)))) In the preceding code, we just standardized the variables using the StandardScaler class from Scikit-learn. This class can fit a data matrix, record its column means and standard deviations, and operate a transformation on itself as well as on any other similar matrixes, standardizing the column data. By means of this method, after fitting, we keep a track of the means and standard deviations that have been used because they will come handy if afterwards we will have to recalculate the coefficients using the original scale. 
Now, we just record a few functions for the following computations: def random_w(p):     return np.array([np.random.normal() for j in range(p)])   def hypothesis(X, w):     return np.dot(X,w)   def loss(X, w, y):     return hypothesis(X, w) - y   def squared_loss(X, w, y):     return loss(X, w, y)**2   def gradient(X, w, y):     gradients = list()     n = float(len(y))     for j in range(len(w)):         gradients.append(np.sum(loss(X, w, y) * X[:,j]) / n)     return gradients   def update(X, w, y, alpha=0.01):     return [t - alpha*g for t, g in zip(w, gradient(X, w, y))]   def optimize(X, y, alpha=0.01, eta = 10**-12, iterations = 1000):     w = random_w(X.shape[1])     for k in range(iterations):         SSL = np.sum(squared_loss(X,w,y))         new_w = update(X,w,y, alpha=alpha)         new_SSL = np.sum(squared_loss(X,new_w,y))         w = new_w         if k>=5 and (new_SSL - SSL <= eta and new_SSL - SSL >= -eta):             return w     return w We can now calculate our regression coefficients: w = optimize(X, y, alpha = 0.02, eta = 10**-12, iterations = 20000) print ("Our standardized coefficients: " +   ', '.join(map(lambda x: "%0.4f" % x, w))) Our standardized coefficients: -0.9204, 1.0810, 0.1430, 0.6822, -2.0601, 2.6706, 0.0211, -3.1044, 2.6588, -2.0759, -2.0622, 0.8566, -3.7487, 22.5328 A simple comparison with Scikit-learn's solution can prove if our code worked fine: sk=LinearRegression().fit(X[:,:-1],y) w_sk = list(sk.coef_) + [sk.intercept_] print ("Scikit-learn's standardized coefficients: " + ', '.join(map(lambda x: "%0.4f" % x, w_sk))) Scikit-learn's standardized coefficients: -0.9204, 1.0810, 0.1430, 0.6822, -2.0601, 2.6706, 0.0211, -3.1044, 2.6588, -2.0759, -2.0622, 0.8566, -3.7487, 22.5328 A noticeable particular to mention is our choice of alpha. After some tests, the value of 0.02 has been chosen for its good performance on this very specific problem. Alpha is the learning rate and, during optimization, it can be fixed or changed according to a line search method, modifying its value in order to minimize the cost function at each single step of the optimization process. In our example, we opted for a fixed learning rate and we had to look for its best value by trying a few optimization values and deciding on which minimized the cost in the minor number of iterations. Summary In this article we learned about gradient descent, its feature scaling and a simple implementation using an algorithm based on Scikit-learn preprocessing module. Resources for Article:   Further resources on this subject: Optimization Techniques [article] Saving Time and Memory [article] Making Your Data Everything It Can Be [article]
Read more
  • 0
  • 0
  • 3884

article-image-customizing-ipython
Packt
03 Feb 2016
9 min read
Save for later

Customizing IPython

Packt
03 Feb 2016
9 min read
In this article written by Cyrille Rossant, author of Learning IPython for Interactive Computing and Data Visualization - Second edition, we look at how the Jupyter Notebook is a highly-customizable platform. You can configure many aspects of the software and can extend the backend (kernels) and the frontend (HTML-based Notebook). This allows you to create highly-personalized user experiences based on the Notebook. In this article, we will cover the following topics: Creating a custom magic command in an IPython extension Writing a new Jupyter kernel Customizing the Notebook interface with JavaScript Creating a custom magic command in an IPython extension IPython comes with a rich set of magic commands. You can get the complete list with the %lsmagic command. IPython also allows you to create your own magic commands. In this section, we will create a new cell magic that compiles and executes C++ code in the Notebook. We first import the register_cell_magic function: In [1]: from IPython.core.magic import register_cell_magic To create a new cell magic, we create a function that takes a line (containing possible options) and a cell's contents as its arguments, and we decorate it with @register_cell_magic, as shown here: In [2]: @register_cell_magic def cpp(line, cell): """Compile, execute C++ code, and return the standard output.""" # We first retrieve the current IPython interpreter # instance. ip = get_ipython() # We define the source and executable filenames. source_filename = '_temp.cpp' program_filename = '_temp' # We write the code to the C++ file. with open(source_filename, 'w') as f: f.write(cell) # We compile the C++ code into an executable. compile = ip.getoutput("g++ {0:s} -o {1:s}".format( source_filename, program_filename)) # We execute the executable and return the output. output = ip.getoutput('./{0:s}'.format(program_filename)) print('n'.join(output)) C++ compiler This recipe requires the gcc C++ compiler. On Ubuntu, type sudo apt-get install build-essential in a terminal. On OS X, install Xcode. On Windows, install MinGW (http://www.mingw.org) and make sure that g++ is in your system path. This magic command uses the getoutput() method of the IPython InteractiveShell instance. This object represents the current interactive session. It defines many methods for interacting with the session. You will find the comprehensive list at http://ipython.org/ipython-doc/dev/api/generated/IPython.core.interactiveshell.html#IPython.core.interactiveshell.InteractiveShell. Let's now try this new cell magic. In [3]: %%cpp #include<iostream> int main() { std::cout << "Hello world!"; } Out[3]: Hello world! This cell magic is currently only available in your interactive session. To distribute it, you need to create an IPython extension. This is a regular Python module or package that extends IPython. To create an IPython extension, copy the definition of the cpp() function (without the decorator) to a Python module, named cpp_ext.py for example. Then, add the following at the end of the file: def load_ipython_extension(ipython): """This function is called when the extension is loaded. It accepts an IPython InteractiveShell instance. We can register the magic with the `register_magic_function` method of the shell instance.""" ipython.register_magic_function(cpp, 'cell') Then, you can load the extension with %load_ext cpp_ext. The cpp_ext.py file needs to be in the PYTHONPATH, for example in the current directory. 
Writing a new Jupyter kernel Jupyter supports a wide variety of kernels written in many languages, including the most-frequently used IPython. The Notebook interface lets you choose the kernel for every notebook. This information is stored within each notebook file. The jupyter kernelspec command allows you to get information about the kernels. For example, jupyter kernelspec list lists the installed kernels. Type jupyter kernelspec --help for more information. At the end of this section, you will find references with instructions to install various kernels including IR, IJulia, and IHaskell. Here, we will detail how to create a custom kernel. There are two methods to create a new kernel: Writing a kernel from scratch for a new language by re-implementing the whole Jupyter messaging protocol. Writing a wrapper kernel for a language that can be accessed from Python. We will use the second, easier method in this section. Specifically, we will reuse the example from the last section to write a C++ wrapper kernel. We need to slightly refactor the last section's code because we won't have access to the InteractiveShell instance. Since we're creating a kernel, we need to put the code in a Python script in a new folder named cpp: In [1]: %mkdir cpp The %%writefile cell magic lets us create a cpp_kernel.py Python script from the Notebook: In [2]: %%writefile cpp/cpp_kernel.py import os import os.path as op import tempfile # We import the `getoutput()` function provided by IPython. # It allows us to do system calls from Python. from IPython.utils.process import getoutput def exec_cpp(code): """Compile, execute C++ code, and return the standard output.""" # We create a temporary directory. This directory will # be deleted at the end of the 'with' context. # All created files will be in this directory. with tempfile.TemporaryDirectory() as tmpdir: # We define the source and executable filenames. source_path = op.join(tmpdir, 'temp.cpp') program_path = op.join(tmpdir, 'temp') # We write the code to the C++ file. with open(source_path, 'w') as f: f.write(code) # We compile the C++ code into an executable. os.system("g++ {0:s} -o {1:s}".format( source_path, program_path)) # We execute the program and return the output. return getoutput(program_path) Out[2]: Writing cpp/cpp_kernel.py Now we create our wrapper kernel by appending some code to the cpp_kernel.py file created above (that's what the -a option in the %%writefile cell magic is for): In [3]: %%writefile -a cpp/cpp_kernel.py """C++ wrapper kernel.""" from ipykernel.kernelbase import Kernel class CppKernel(Kernel): # Kernel information. implementation = 'C++' implementation_version = '1.0' language = 'c++' language_version = '1.0' language_info = {'name': 'c++', 'mimetype': 'text/plain'} banner = "C++ kernel" def do_execute(self, code, silent, store_history=True, user_expressions=None, allow_stdin=False): """This function is called when a code cell is executed.""" if not silent: # We run the C++ code and get the output. output = exec_cpp(code) # We send back the result to the frontend. 
stream_content = {'name': 'stdout', 'text': output} self.send_response(self.iopub_socket, 'stream', stream_content) return {'status': 'ok', # The base class increments the execution # count 'execution_count': self.execution_count, 'payload': [], 'user_expressions': {}, } if __name__ == '__main__': from ipykernel.kernelapp import IPKernelApp IPKernelApp.launch_instance(kernel_class=CppKernel) Out[3]: Appending to cpp/cpp_kernel.py In production code, it would be best to test the compilation and execution, and to fail gracefully by showing an error. See the references at the end of this section for more information. Our wrapper kernel is now implemented in cpp/cpp_kernel.py. The next step is to create a cpp/kernel.json file describing our kernel: In [4]: %%writefile cpp/kernel.json { "argv": ["python", "cpp/cpp_kernel.py", "-f", "{connection_file}" ], "display_name": "C++" } Out[4]: Writing cpp/kernel.json The argv field describes the command that is used to launch a C++ kernel. More information can be found in the references below. Finally, let's install this kernel with the following command: In [5]: !jupyter kernelspec install --replace --user cpp Out[5]: [InstallKernelSpec] Installed kernelspec cpp in /Users/cyrille/Library/Jupyter/kernels/cpp The --replace option forces the installation even if the kernel already exists. The --user option serves to install the kernel in the user directory. We can test the installation of the kernel with the following command: In [6]: !jupyter kernelspec list Out[6]: Available kernels: cpp python3 Now, C++ notebooks can be created in the Notebook, as shown in the following screenshot: C++ kernel in the Notebook Finally, wrapper kernels can also be used in the IPython terminal or the Qt console, using the --kernel option, for example ipython console --kernel cpp. Here are a few references: Kernel documentation at http://jupyter-client.readthedocs.org/en/latest/kernels.html Wrapper kernels at http://jupyter-client.readthedocs.org/en/latest/wrapperkernels.html List of kernels at https://github.com/ipython/ipython/wiki/IPython%20kernels%20for%20other%20languages bash kernel at https://github.com/takluyver/bash_kernel R kernel at https://github.com/takluyver/IRkernel Julia kernel at https://github.com/JuliaLang/IJulia.jl Haskell kernel at https://github.com/gibiansky/IHaskell Customizing the Notebook interface with JavaScript The Notebook application exposes a JavaScript API that allows for a high level of customization. In this section, we will create a new button in the Notebook toolbar to renumber the cells. The JavaScript API is not stable and not well-documented. Although the example in this section has been tested with IPython 4.0, nothing guarantees that it will work in future versions without changes. The commented JavaScript code below adds a new Renumber button. In [1]: %%javascript // This function allows us to add buttons // to the Notebook toolbar. IPython.toolbar.add_buttons_group([ { // The button's label. 'label': 'Renumber all code cells', // The button's icon. // See a list of Font-Awesome icons here: // http://fortawesome.github.io/Font-Awesome/icons/ 'icon': 'fa-list-ol', // The callback function called when the button is // pressed. 'callback': function () { // We retrieve the lists of all cells. var cells = IPython.notebook.get_cells(); // We only keep the code cells. cells = cells.filter(function(c) { return c instanceof IPython.CodeCell; }) // We set the input prompt of all code cells. 
for (var i = 0; i < cells.length; i++) { cells[i].set_input_prompt(i + 1); } } }]); Executing this cell displays a new button in the Notebook toolbar, as shown in the following screenshot: Adding a new button in the Notebook toolbar You can use the jupyter nbextension command to install notebook extensions (use the --help option to see the list of possible commands). Here are a few repositories with custom JavaScript extensions contributed by the community: https://github.com/minrk/ipython_extensions https://github.com/ipython-contrib/IPython-notebook-extensions So, we have covered several customization options of IPython and the Jupyter Notebook, but there’s so much more that can be done. Take a look at the IPython Interactive Computing and Visualization Cookbook to learn how to create your own custom widgets in the Notebook.
Read more
  • 0
  • 0
  • 4994

article-image-coreos-networking-and-flannel-internals
Packt
03 Feb 2016
8 min read
Save for later

CoreOS Networking and Flannel Internals

Packt
03 Feb 2016
8 min read
In this article by Sreenivas Makam, author of the book Mastering CoreOS explains how microservices has increased the need to have lots of containers and also connectivity between containers across hosts. It is necessary to have a robust Container networking scheme to achieve this goal. This article will cover the basics of Container networking with a focus on how CoreOS does Container networking with Flannel. (For more resources related to this topic, see here.) Container networking basics The following are the reasons why we need Container networking: Containers need to talk to the external world. Containers should be reachable from the external world so that the external world can use the services that Containers provide. Containers need to talk to the host machine. An example can be sharing volumes. There should be inter-container connectivity in the same host and across hosts. An example is a WordPress container in one host talking to a MySQL container in another host. Multiple solutions are currently available to interconnect Containers. These solutions are pretty new and actively under development. Docker, until release 1.8, did not have a native solution to interconnect Containers across hosts. Docker release 1.9 introduced a Libnetwork-based solution to interconnect containers across hosts as well as do service discovery. CoreOS is using Flannel for container networking in CoreOS clusters. There are projects such as Weave and Calico that are developing Container networking solutions, and they plan to be a networking container plugin for any Container runtime such as Docker or Rkt. Flannel Flannel is an open source project that provides a Container networking solution for CoreOS clusters. Flannel can also be used for non-CoreOS clusters. Kubernetes uses Flannel to set up networking between the Kubernetes pods. Flannel allocates a separate subnet for every host where a Container runs, and the Containers in this host get allocated an individual IP address from the host subnet. An overlay network is set up between each host that allows Containers on different hosts to talk to each other. In Chapter 1, CoreOS Overview covered an overview of the Flannel control and data path. This section will delve into the Flannel internals. Manual installation Flannel can be installed manually or using the systemd unit, flanneld.service. The following command will install flannel in the CoreOS node using a container to build the flannel binary. The flanneld Flannel binary will be available in /home/core/flannel/bin after executing the following commands: git clone https://github.com/coreos/flannel.git docker run -v /home/core/flannel:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build" The following is the Flannel version after we build flannel in our CoreOS node: Installation using flanneld.service Flannel is not installed by default in CoreOS. This is done to keep the CoreOS image size to a minimum. Docker requires flannel to configure the network and flannel requires docker to download the flannel container. To avoid this chicken-and-egg problem, early-docker.service is started by default in CoreOS, whose primary purpose is to download the flannel container and start it. A regular docker.service starts the Docker daemon with the flannel network. 
The following image shows you the sequence in flanneld.service, where early Docker daemon starts the flannel container, which, in turn starts docker.service with the subnet created by flannel: The following is the relevant section of flanneld.service that downloads the flannel container from the Quay repository: The following output shows the early docker's running containers. Early-docker will manage Flannel only: The following is the relevant section of flanneld.service that updates the docker options to use the subnet created by flannel: The following is the content of flannel_docker_opts.env—in my case—after flannel was started. The address, 10.1.60.1/24, is chosen by this CoreOS node for its containers: Docker will be started as part of docker.service, as shown in the following image, with the preceding environment file: Control path There is no central controller in flannel, and it uses etcd for internode communication. Each node in the CoreOS cluster runs a flannel agent and they communicate with each other using etcd. As part of starting the Flannel service, we specify the Flannel subnet that can be used by the individual nodes in the network. This subnet is registered with etcd so that every CoreOS node in the cluster can see it. Each node in the network picks a particular subnet range and registers atomically with etcd. The following is the relevant section of cloud-config that starts flanneld.service along with specifying the configuration for Flannel. Here, we have specified the subnet to be used for flannel as 10.1.0.0/16 along with the encapsulation type as vxlan: The preceding configuration will create the following etcd key as seen in the node. This shows that 10.1.0.0/16 is allocated for flannel to be used across the CoreOS cluster and that the encapsulation type is vxlan: Once each node gets a subnet, containers started in this node will get an IP address from the IP address pool allocated to the node. The following is the etcd subnet allocation per node. As we can see, all the subnets are in the 10.1.0.0/16 range that was configured earlier with etcd and with a 24-bit mask. The subnet length per host can also be controlled as a flannel configuration option: Let's look at ifconfig of the Flannel interface created in this node. The IP address is in the address range of 10.1.0.0/16: Data path Flannel uses the Linux bridge to encapsulate the packets using an overlay protocol specified in the Flannel configuration. This allows for connectivity between containers in the same host as well as across hosts. The following are the major backends currently supported by Flannel and specified in the JSON configuration file. The JSON configuration file can be specified in the Flannel section of cloud-config: UDP: In UDP encapsulation, packets from containers are encapsulated in UDP with the default port number 8285. We can change the port number if needed. VXLAN: From an encapsulation overhead perspective, VXLAN is efficient when compared to UDP. By default, port 8472 is used for VXLAN encapsulation. If we want to use an IANA-allocated VXLAN port, we need to specify the port field as 4789. AWS-VPC: This is applicable to using Flannel in the AWS VPC cloud. Instead of encapsulating the packets using an overlay, this approach uses a VPC route table to communicate across containers. AWS limits each VPC route table entry to 50, so this can become a problem with bigger clusters. 
The following is an example of specifying the AWS type in the flannel configuration: GCE: This is applicable to using Flannel in the GCE cloud. Instead of encapsulating the packets using an overlay, this approach uses the GCE route table to communicate across containers. GCE limits each VPC route table entry to 100, so this can become a problem with bigger clusters. The following is an example of specifying the GCE type in the Flannel configuration: Let's create containers in two different hosts with a VXLAN encapsulation and check whether the connectivity is fine. The following example uses a Vagrant CoreOS cluster with the Flannel service enabled. Host 1: Let's start a busybox container: Let's check the IP address allotted to the container. This IP address comes from the IP pool allocated to this CoreOS node by the flannel agent. 10.1.19.0/24 was allocated to host 1 and this container got the 10.1.19.2 address: Host 2: Let's start a busybox container: Let's check the IP address allotted to this container. This IP address comes from the IP pool allocated to this CoreOS node by the flannel agent. 10.1.1.0/24 was allocated to host 2 and this container got the 10.1.1.2 address: The following output shows you the ping being successful between container 1 and container 2. This ping packet is travelling across the two CoreOS nodes and is encapsulated using VXLAN: Flannel as a CNI plugin As explained in Chapter 1, CoreOS Overview, APPC defines a Container specification that any Container runtime can use. For Container networking, APPC defines a Container Network Interface (CNI) specification. With CNI, the Container networking functionality can be implemented as a plugin. CNI expects plugins to support APIs with a set of parameters and the implementation is left to the plugin. Example APIs add a container to a network and remove the container from the network with a defined parameter list. This allows the implementation of network plugins by different vendors and also the reuse of plugins across different Container runtimes. The following image shows the relationship between the RKT container runtime, CNI layer, and Plugin like Flannel. The IPAM Plugin is used to allocate an IP address to the containers and this is nested inside the initial networking plugin: Summary In this chapter, we covered different Container networking technologies with a focus on Container networking in CoreOS. There are many companies trying to solve this Container networking problem. Resources for Article: Further resources on this subject: Network and Data Management for Containers [article] Deploying a Play application on CoreOS and Docker [article] CoreOS – Overview and Installation [article]
Read more
  • 0
  • 0
  • 3537

article-image-going-mobile-first
Packt
03 Feb 2016
16 min read
Save for later

Going Mobile First

Packt
03 Feb 2016
16 min read
In this article by Silvio Moreto Pererira, author of the book, Bootstrap By Example, we will focus on mobile design and how to change the page layout for different viewports, change the content, and more. In this article, you will learn the following: Mobile first development Debugging for any device The Bootstrap grid system for different resolutions (For more resources related to this topic, see here.) Make it greater Maybe you have asked yourself (or even searched for) the reason of the mobile first paradigm movement. It is simple and makes complete sense to speed up your development pace. The main argument for the mobile first paradigm is that it is easier to make it greater than to shrink it. In other words, if you first make a desktop version of the web page (known as responsive design or mobile last) and then go to adjust the website for a mobile, it has a probability of 99% of breaking the layout at some point, and you will have to fix a lot of things in both the mobile and desktop. On the other hand, if you first create the mobile version, naturally the website will use (or show) less content than the desktop version. So, it will be easier to just add the content, place the things in the right places, and create the full responsiveness stack. The following image tries to illustrate this concept. Going mobile last, you will get a degraded, warped, and crappy layout, and going mobile first, you will get a progressively enhanced, future-friendly awesome web page. See what happens to the poor elephant in this metaphor: Bootstrap and themobile first design At the beginning of Bootstrap, there was no concept of mobile first, so it was made to work for designing responsive web pages. However, with the version 3 of the framework, the concept of mobile first was very solid in the community. For doing this, the whole code of the scaffolding system was rewritten to become mobile first from the start. They decided to reformulate how to set up the grid instead of just adding mobile styles. This made a great impact on compatibility between versions older than 3, but was crucial for making the framework even more popular. To ensure the proper rendering of the page, set the correct viewport at the <head> tag: <meta name="viewport" content="width=device-width,   initial-scale=1"> How to debug different viewports in the browser Here, you will learn how to debug different viewports using the Google Chrome web browser. If you already know this, you can skip this section, although it might be important to refresh the steps for this. In the Google Chrome browser, open the Developer tools option. There are many ways to open this menu: Right-click anywhere on the page and click on the last option called Inspect. Go in the settings (the sandwich button on the right-hand side of the address bar), click on More tools, and finally on Developer tools. The shortcut to open it is Ctrl (cmd for OS X users) + Shift + I. F12 in Windows also works (Internet Explorer legacy…). With Developer tools, click on the mobile phone to the left of a magnifier, as showed in the following image: It will change the display of the viewport to a certain device, and you can also set a specific network usage to limit the data bandwidth. Chrome will show a message telling you that for proper visualization, you may need to reload the page to get the correct rendering: For the next image, we have activated the Device mode for an iPhone 5 device. 
When we set this viewport, the problems start to appear because we did not make the web page with the mobile first methodology. Bootstrap scaffolding for different devices Now that we know more about mobile first development and its important role in Bootstrap starting from version 3, we will cover Bootstrap usage for different devices and viewports. For doing this, we must apply the column class for the specific viewport, for example, for medium displays, we use the .col-md-*class. The following table presents the different classes and resolutions that will be applied for each viewport class:   Extra small devices (Phones < 544 px / 34 em) Small devices (Tablets ≥ 544 px / 34 em and < 768 px / 48 em) Medium devices (Desktops ≥ 768 px /48 em < 900px / 62 em) Large devices (Desktops ≥ 900 px / 62 em < 1200px 75 em) Extra large devices (Desktops ≥ 1200 px / 75 em) Grid behavior Horizontal lines at all times Collapse at start and fit column grid Container fixed width Auto 544px or 34rem 750px or 45rem 970px or 60rem 1170px or 72.25rem Class prefix .col-xs-* .col-sm-* .col-md-* .col-lg-* .col-xl-* Number of columns 12 columns Column fixed width Auto ~ 44px or 2.75 rem ~ 62px or 3.86 rem ~ 81px or 5.06 rem ~ 97px or 6.06 rem Mobile and extra small devices To exemplify the usage of Bootstrap scaffolding in mobile devices, we will have a predefined web page and want to adapt it to mobile devices. We will be using the Chrome mobile debug tool with the device, iPhone 5. You may have noticed that for small devices, Bootstrap just stacks each column without referring for different rows. In the layout, some of the Bootstrap rows may seem fine in this visualization, although the one in the following image is a bit strange as the portion of code and image are not in the same line, as it supposed to be: To fix this, we need to add the class column's prefix for extra small devices, which is .col-xs-*, where * is the size of the column from 1 to 12. Add the .col-xs-5 class and .col-xs-7 for the columns of this respective row. Refresh the page and you will see now how the columns are put side by side: <div class="row">   <!-- row 3 -->   <div class="col-md-3 col-xs-5">     <pre>&lt;p&gt;I love programming!&lt;/p&gt;       &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;       &lt;br/&gt;       &lt;br/&gt;       &lt;p&gt;Bootstrap by example&lt;/p&gt;     </pre>   </div>   <div class="col-md-9 col-xs-7">     <img src="imgs/center.png" class="img-responsive">   </div> </div> Although the image of the web browser is too small on the right, it would be better if it was a vertical stretched image, such as a mobile phone. (What a coincidence!) To make this, we need to hide the browser image in extra small devices and display an image of the mobile phone. Add the new mobile image below the existing one as follows. You will see both images stacked up vertically in the right column: <img src="imgs/center.png" class="img-responsive"> <img src="imgs/mobile.png" class="img-responsive"> Then, we need to use the new concept of availability classes present in Bootstrap. We need to hide the browser image and display the mobile image just for this kind of viewport, which is extra small. 
For this, add the .hidden-xs class to the browser image and add the .visible-xs class to the mobile image: <div class="row">   <!-- row 3 -->   <div class="col-md-3 col-xs-5">     <pre>&lt;p&gt;I love programming!&lt;/p&gt;       &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;       &lt;br/&gt;       &lt;br/&gt;       &lt;p&gt;Bootstrap by example&lt;/p&gt;     </pre>   </div>   <div class="col-md-9 col-xs-7">     <img src="imgs/center.png" class="img-responsive hidden-xs">     <img src="imgs/mobile.png" class="img-responsive visible-xs">   </div> </div> Now this row seems nice! With this, the browser image was hidden in extra small devices and the mobile image was shown for this viewport in question. The following image shows the final display of this row: Moving on, the next Bootstrap .row contains a testimonial surrounded by two images. It would be nicer if the testimonial appeared first and both images were displayed after it, splitting the same row, as shown in the following image. For this, we will repeat almost the same techniques presented in the last example: The first change is to hide the Bootstrap image using the .hidden-xs class. After this, create another image tag with the Bootstrap image in the same column of the PACKT image. The final code of the row should be as follows: <div class="row">   <div class="col-md-3 hidden-xs">     <img src="imgs/bs.png" class="img-responsive">   </div>   <div class="col-md-6 col-xs-offset-1 col-xs-11">     <blockquote>       <p>Lorem ipsum dolor sit amet, consectetur         adipiscing elit. Integer posuere erat a ante.</p>       <footer>Testimonial from someone at         <cite title="Source Title">Source Title</cite></footer>     </blockquote>   </div>   <div class="col-md-3 col-xs-7">     <img src="imgs/packt.png" class="img-responsive">   </div>   <div class="col-xs-5 visible-xs">     <img src="imgs/bs.png" class="img-responsive">   </div> </div> We did plenty of things now; all the changes are highlighted. The first is the .hidden-xs class in the first column of the Bootstrap image, which hid the column for this viewport. Afterward, in the testimonial, we changed the grid for the mobile, adding a column offset with size 1 and making the testimonial fill the rest of the row with the .col-xs-11 class. Lastly, like we said, we want to split both images from PACKT and Bootstrap in the same row. For this, make the first image column fill seven columns with the .col-xs-7 class. The other image column is a little more complicated. As it is visible just for mobile devices, we add the .col-xs-5 class, which will make the element span five columns in extra small devices. Moreover, we hide the column for other viewports with the .visible-xs class. As you can see, this row has more than 12 columns (one offset, 11 testimonials, seven PACKT images, and five Bootstrap images). This process is called column wrapping and happens when you put more than 12 columns in the same row, so the groups of extra columns will wrap to the next lines. Availability classes Just like .hidden-*, there are the .visible-*-*classes for each variation of the display and column from 1 to 12. There is also a way to change the display of the CSS property using the .visible-*-* class, where the last * means block, inline, or inline-block. Use this to set the proper visualization for different visualizations. The following image shows you the final result of the changes. 
Note that we made the testimonial appear first, with one column of offset, and both images appear below it: Tablets and small devices Completing the mobile visualization devices, we move on to tablets and small devices, which are devices from 544 px (34 em) to 768 px (48 em). Most of this kind of devices are tablets or old desktops monitors. To work with this example, we are using the iPad mini in the portrait position. For this resolution, Bootstrap handles the rows just as in extra small devices by stacking up each one of the columns and making them fill the total width of the page. So, if we do not want this to happen, we have to set the column fill for each element with the .col-sm-* class manually. If you see now how our example is presented, there are two main problems. The first one is that the heading is in separate lines, whereas they could be in the same line. For this, we just need to apply the grid classes for small devices with the .col-sm-6 class for each column, splitting them into equal sizes: <div class="row">   <div class="col-md-offset-4 col-md-4 col-sm-6">     <h3>       Some text with <small>secondary text</small>     </h3>   </div>   <div class="col-md-4 col-sm-6">     <h3>       Add to your favorites       <small>         <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>       </small>     </h3>   </div> </div> The result should be as follows: The second problem in this viewport is again the testimonial row! Due to the classes that we have added for the mobile viewport, the testimonial now has an offset column and different column span. We must add the classes for small devices and make this row with the Bootstrap image on the left, the testimonial in the middle, and the PACKT image on the right: <div class="row">   <div class="col-md-3 hidden-xs col-sm-3">     <img src="imgs/bs.png" class="img-responsive">   </div>   <div class="col-md-6 col-xs-offset-1 col-xs-11     col-sm-6 col-sm-offset-0">     <blockquote>       <p>Lorem ipsum dolor sit amet, consectetur         adipiscing elit. Integer posuere erat a ante.</p>       <footer>Testimonial from someone at         <cite title="Source Title">Source Title</cite></footer>     </blockquote>   </div>   <div class="col-md-3 col-xs-7 col-sm-3">     <img src="imgs/packt.png" class="img-responsive">   </div>   <div class="col-xs-5 hidden-sm hidden-md hidden-lg">     <img src="imgs/bs.png" class="img-responsive">   </div> </div> As you can see, we had to reset the column offset in the testimonial column. It happened because it kept the offset that we had added for extra small devices. Moreover, we are just ensuring that the image columns had to fill just three columns with the .col-sm-3 classes in both the images. The result of the row is as follows: Everything else seems fine! These viewports were easier to set up. See how Bootstrap helps us a lot? Let's move on to the final viewport, which is a desktop or large devices. Desktop and large devices Last but not least, we enter the grid layout for a desktop and large devices. We skipped medium devices because we coded first for this viewport. Deactivate the Device mode in Chrome and put your page in a viewport with a width larger or equal to 1200 pixels. The grid prefix that we will be using is .col-lg-*, and if you take a look at the page, you will see that everything is well placed and we don't need to make changes! (Although we would like to make some tweaks to make our layout fancier and learn some stuff about the Bootstrap grid.) 
We want to talk about a thing called column ordering. It is possible to change the order of the columns in the same row by applying the.col-lg-push-* and .col-lg-pull-* classes. (Note that we are using the large devices prefix, but any other grid class prefix can be used.) The .col-lg-push-* class means that the column will be pushed to the right by the * columns, where * is the number of columns pushed. On the other hand, .col-lg-pull-* will pull the column in the left direction by *, where * is the number of columns as well. Let's test this trick in the second row by changing the order of both the columns: <div class="row">   <div class="col-md-offset-4 col-md-4 col-sm-6 col-lg-push-4">     <h3>       Some text with <small>secondary text</small>     </h3>   </div>   <div class="col-md-4 col-sm-6 col-lg-pull-4">     <h3>       Add to your favorites       <small>         <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>       </small>     </h3>   </div> </div> We just added the .col-lg-push-4 class to the first column and .col-lg-pull-4 to the other one to get this result. By doing this, we have changed the order of both the columns in the second row, as shown in the following image: Summary In this article, you learned a little about the mobile first development and how Bootstrap can help us in this task. We started from an existing Bootstrap template, which was not ready for mobile visualization, and fixed that. While fixing, we used a lot of Bootstrap scaffolding properties and Bootstrap helpers, making it much easier to fix anything. We did all of this without a single line of CSS or JavaScript; we used only Bootstrap with its inner powers! Resources for Article:   Further resources on this subject: Bootstrap in a Box [article] The Bootstrap grid system [article] Deep Customization of Bootstrap [article]
Read more
  • 0
  • 0
  • 8945

article-image-android-and-ios-apps-testing-glance
Packt
02 Feb 2016
21 min read
Save for later

Android and iOS Apps Testing at a Glance

Packt
02 Feb 2016
21 min read
In this article by Vijay Velu, the author of Mobile Application Penetration Testing, we will discuss the current state of mobile application security and the approach to testing for vulnerabilities in mobile devices. We will see the major players in the smartphone OS market and how attackers target users through apps. We will deep-dive into the architecture of Android and iOS to understand the platforms and its current security state, focusing specifically on the various vulnerabilities that affect apps. We will have a look at the Open Web Application Security Project (OWASP) standard to classify these vulnerabilities. The readers will also get an opportunity to practice the security testing of these vulnerabilities via the means of readily available vulnerable mobile applications. The article will have a look at the step-by-step setup of the environment that's required to carry out security testing of mobile applications for Android and iOS. We will also explore the threats that may arise due to potential vulnerabilities and learn how to classify them according to their risks. (For more resources related to this topic, see here.) Smartphones' market share Understanding smartphones' market share will give us a clear picture about what cyber criminals are after and also what could be potentially targeted. The mobile application developers can propose and publish their applications on the stores and be rewarded by a revenue share of the selling price. The following screenshot that was taken from www.idc.com provides us with the overall smartphone OS market in 2015: Since mobile applications are platform-specific, majority of the software vendors are forced to develop applications for all the available operating systems. Android operating system Android is an open source, Linux-based operating system for mobile devices (smartphones and tablet computers). It was developed by the Open Handset Alliance, which was led by Google and other companies. Android OS is Linux-based. It can be programmed in C/C++, but most of the application development is done in Java (Java accesses C libraries via JNI, which is short for Java Native Interface). iPhone operating system (iOS) It was developed by Apple Inc. It was originally released in 2007 for iPhone, iPod Touch, and Apple TV. Apple's mobile version of the OS X operating system that's used in Apple computers is iOS. Berkeley Software Distribution (BSD) is UNIX-based and can be programmed in Objective C. Public Android and iOS vulnerabilities Before we proceed with different types of vulnerabilities on Android and iOS, this section introduces you to Android and iOS as an operating system and covers various fundamental concepts that need to be understood to gain experience in mobile application security. 
The following table comprises year-wise operating system releases: Year Android iOS 2007/2008 1.0 iPhone OS 1 iPhone OS 2 2009 1.1 iPhone OS 3 1.5 (Cupcake) 2.0 (Eclair) 2.0.1(Eclair) 2010 2.1 (Eclair) iOS 4 2.2 (Froyo) 2.3-2.3.2(Gingerbread) 2011 2.3.4-2.3.7 (Gingerbread) iOS 5 3.0 (HoneyComb) 3.1 (HoneyComb) 3.2 (HoneyComb) 4.0-4.0.2 (Ice Cream Sandwich) 4.0.3-4.0.4 (Ice Cream Sandwich) 2012 4.1 (Jelly Bean) iOS 6 4.2 (Jelly Bean) 2013 4.3 (Jelly bean) iOS 7 4.4 (KitKat) 2014 5.0 (Lollipop) iOS 8 5.1 (Lollipop) 2015   iOS 9 (beta) An interesting research conducted by Hewlett Packard (HP), a software giant that tested more than 2,000 mobile applications from more than 600 companies, has reported the following statistics (for more information, visit http://www8.hp.com/h20195/V2/GetPDF.aspx/4AA5-1057ENW.pdf): 97% of the applications that were tested access at least one private information source of these applications 86% of the applications failed to use simple binary-hardening protections against modern-day attacks 75% of the applications do not use proper encryption techniques when storing data on a mobile device 71% of the vulnerabilities resided on the web server 18% of the applications sent usernames and password over HTTP (of the remaining 85%, 18% implemented SSL/HTTPS incorrectly) So, the key vulnerabilities to mobile applications arise due to a lack of security awareness, "usability versus security trade-off" by developers, excessive application permissions, and a lack of privacy concerns. Coupling this with a lack of sufficient application documentation leads to vulnerabilities that developers are not aware of. Usability versus security trade-off For every developer, it would not be possible to provide users with an application with high security and usability. Making any application secure and usable takes a lot of effort and analytical thinking. Mobile application vulnerabilities are broadly categorized as follows: Insecure transmission of data: Either an application does not enforce any kind of encryption for data in transit on a transport layer, or the implemented encryption is insecure. Insecure data storage: Apps may store data either in a cleartext or obfuscated format, or hard-coded keys in the mobile device. An example e-mail exchange server configuration on Android device that uses an e-mail client stores the username and password in cleartext format, which is easy to reverse by any attacker if the device is rooted. Lack of binary protection: Apps do not enforce any anti-reversing, debugging techniques. Client-side vulnerabilities: Apps do not sanitize data provided from the client side, leading to multiple client-side injection attacks such as cross-site scripting, JavaScript injection, and so on. Hard-coded passwords/keys: Apps may be designed in such a way that hard-coded passwords or private keys are stored on the device storage. Leakage of private information: Apps may unintentionally leak private information. This could be due to the use of a particular framework and obscurity assumptions of developers. Android vulnerabilities In July 2015, a security company called Zimperium announced that it discovered a high-risk vulnerability named Stagefright inside the Android operating system. They deemed it as a unicorn in the world of Android risk, and it was practically demonstrated in one of the hacking conferences in the US on August 5, 2015. 
More information can be found at https://blog.zimperium.com/stagefright-vulnerability-details-stagefright-detector-tool-released/; a public exploit is available at https://www.exploit-db/exploits/38124/. This has made Google release security patches for all Android operating systems, which is believed to be 95% of the Android devices, which is an estimated 950 million users. The vulnerability is exploited through a particular library, which can let attackers take control of an Android device by sending a specifically crafted multimedia services like Multimedia Messaging Service (MMS). If we take a look at the superuser application downloads from the Play Store, there are around 1 million to 5 million downloads. It can be assumed that a major portion of Android smartphones are rooted. The following graphs show the Android vulnerabilities from 2009 until September 2015. There are currently 54 reported vulnerabilities for the Android Google operating system (for more information, visit http://www.cvedetails.com/product/19997/Google-Android.html?vendor_id=1224). More features that are introduced to the operating system in the form of applications act as additional entry points that allow cyber attackers or security researchers to circumvent and bypass the controls that were put in place. iOS vulnerabilities On June 18, 2015, password stealing vulnerability, also known as Cross Application Reference Attack (XARA), was outlined for iOS and OS X. It cracked the keychain services on jailbroken and non-jailbroken devices. The vulnerability is similar to cross-site request forgery attack in web applications. In spite of Apple's isolation protection and its App Store's security vetting, it was possible to circumvent the security controls mechanism. It clearly provided the need to protect the cross-app mechanism between the operating system and the app developer. Apple rolled a security update week after the XARA research. More information can be found at http://www.theregister.co.uk/2015/06/17/apple_hosed_boffins_drop_0day_mac_ios_research_blitzkrieg/ The following graphs show the vulnerabilities in iOS from 2007 until September 2015. There are around 605 reported vulnerabilities for Apple iPhone OS (for more information, visit http://www.cvedetails.com/product/15556/Apple-Iphone-Os.html?vendor_id=49). As you can see, the vulnerabilities kept on increasing year after year. A majority of the vulnerabilities reported are denial-of-service attacks. This vulnerability makes the application unresponsive. Primarily, the vulnerabilities arise due to insecure libraries or overwriting with plenty of buffer in the stacks. Rooting/jailbreaking Rooting/jailbreaking refers to the process of removing the limitations imposed by the operating system on devices through the use of exploit tools. Rooting/jailbreaking enables users to gain complete control over the operating system of a device. OWASP's top ten mobile risks In 2013, OWASP polled the industry for new vulnerability statistics in the field of mobile applications. The following risks were finalized in 2014 as the top ten dangerous risks as per the result of the poll data and mobile application threat landscape: M1: Weak server-side controls: Internet usage via mobiles has surpassed fixed Internet access. This is largely due to the emergence of hybrid and HTML5 mobile applications. Application servers that form the backbone of these applications must be secured on their own. 
The OWASP top 10 web application project defines the most prevalent vulnerabilities in this realm. Vulnerabilities such as injections, insecure direct object reference, insecure communication, and so on may lead to the complete compromise of an application server. Adversaries who have gained control over the compromised servers can push malicious content to all the application users and compromise user devices as well. M2: Insecure data storage: Mobile applications are being used for all kinds of tasks such as playing games, fitness monitors, online banking, stock trading, and so on, and most of the data used by these applications are either stored in the device itself inside SQLite files, XML data stores, log files, and so on, or they are pushed on to Cloud storage. The types of sensitive data stored by these applications may range from location information to bank account details. The application programing interfaces (API) that handle the storage of this data must securely implement encryption/hashing techniques so that an adversary with direct access to these data stores via theft or malware will not be able to decipher the sensitive information that's stored in them. M3: Insufficient transport layer protection: "Insecure Data Storage", as the name says, is about the protection of data in storage. But as all the hybrid and HTML 5 apps work on client-server architecture, emphasis on data in motion is a must, as the data will have to traverse through various channels and will be susceptible to eavesdropping and tampering by adversaries. Controls such as SSL/TLS, which enforce confidentiality and integrity of data, must be verified for correct implementations on the communication channel from the mobile application and its server. M4: Unintended data leakage: Certain functionalities of mobile applications may place users' sensitive data in locations where it can be accessed by other applications or even by malware. These functionalities may be there in order to enhance the usability or user experience but may pose adverse effects in the long run. Actions such as OS data caching, key press logging, copy/paste buffer caching, and implementations of web beacons or analytics cookies for advertisement delivery can be misused by adversaries to gain information about users. M5: Poor authorization and authentication: As mobile devices are the most "personal" devices, developers utilize this to store important data such as credentials locally in the device itself and come up with specific mechanisms to authenticate and authorize users locally for the services that users request via the application. If these mechanisms are poorly developed, adversaries may circumvent these controls and unauthorized actions can be performed. As the code is available to adversaries, they can perform binary attacks and recompile the code to directly access authorized content. M6: Broken cryptography: This is related to the weak controls that are used to protect data. Using weak cryptographic algorithms such as RC2, MD5, and so on, which can be cracked by adversaries, will lead to encryption failure. Improper encryption key management when a key is stored in locations accessible to other applications or the use of a predictable key generation technique will also break the implemented cryptography techniques. M7: Client-side injection: Injection vulnerabilities are the most common web vulnerabilities according to OWASP web top 10 dangerous risks. 
These are due to malformed inputs, which cause unintended action such as an alteration of database queries, command execution, and so on. In case of mobile applications, malformed inputs can be a serious threat at the local application level and server side as well (refer to M1: Weak server-side controls). Injections at a local application level, which mainly target data stores, may result in conditions such as access to paid content that's locked for trial users or file inclusions that may lead to an abuse of functionalities such as SMSes. M8: Security decisions via untrusted inputs: An implementation of certain functionalities such as the use of hidden variables to check authorization status can be bypassed by tampering them during the transit via web service calls or inter-process communication calls. This may lead to privilege escalations and unintended behavior of mobile applications. M9: Improper session handling: The application server sends back a session token on successful authentication with the mobile application. These session tokens are used by the mobile application to request for services. If these session tokens remain active for a longer duration and adversaries obtain them via malware or theft, the user account can be hijacked. M10: Lack of binary protection: A mobile application's source code is available to all. An attacker can reverse engineer the application and insert malicious code components and recompile them. If these tampered applications are installed by a user, they will be susceptible to data theft and may be the victims of unintended actions. Most applications do not ship with mechanisms such as checksum controls, which help in deducing whether the application is tampered or not. In 2015, there was another poll under the OWASP Mobile security group named the "umbrella project". This leads us to have M10 to M2, the trends look at binary protection to take over weak server-side controls. However, we will have wait until the final list for 2015. More details can be found at https://www.owasp.org/images/9/96/OWASP_Mobile_Top_Ten_2015_-_Final_Synthesis.pdf. Vulnerable applications to practice The open source community has been proactively designing plenty of mobile applications that can be utilized for practical tests. These are specifically designed to understand the OWASP top ten risks. Some of these applications are as follows: iMAS: iMAS is a collaborative research project initiated by the MITRE corporation (http://www.mitre.org/). This is for application developers and security researchers who would like to learn more about attack and defense techniques in iOS. More information about iMAS can be found at https://github.com/project-imas/about. GoatDroid: A simple functional mobile banking application for training with location tracking developed by Jack and Ken for Android application security is a great starting point for beginners. More information about GoatDroid can be found at https://github.com/jackMannino/OWASP-GoatDroid-Project. iGoat: The OWASP's iGOAT project is similar to the WebGoat web application framework. It's designed to improve the iOS assessment techniques for developers. More information on iGoat can be found at https://code.google.com/p/owasp-igoat/. Damn Vulnerable iOS Application (DVIA): DVIA is an iOS application that provides a platform for developers, testers, and security researchers to test their penetration testing skills. 
This application covers all the OWASP's top 10 mobile risks and also contains several challenges that one can solve and come up with custom solutions. More information on the Damn Vulnerable iOS Application can be found at http://damnvulnerableiosapp.com/. MobiSec: MobiSec is a live environment for the penetration testing of mobile environments. This framework provides devices, applications, and supporting infrastructure. It provides a great exercise for testers to view vulnerabilities from different points of view. More information on MobiSec can be found at http://sourceforge.net/p/mobisec/wiki/Home/. Android application sandboxing Android utilizes the well-established Linux protection ring model to isolate applications from each other. In Linux OS, assigning unique ID segregates every user. This ensures that there is no cross account data access. Similarly in Android OS, every app is assigned with its own unique ID and is run as a separate process. As a result, an application sandbox is formed at the kernel level, and the application will only be able to access the resources for which it is permitted to access. This subsequently ensures that the app does not breach its work boundaries and initiate any malicious activity. For example, the following screenshot provides an illustration of the sandbox mechanism: From the preceding Android Sandbox illustration, we can see how the unique Linux user ID created per application is validated every time a resource mapped to the app is accessed, thus ensuring a form of access control. Android Studio and SDK On May 16, 2013 at the Google I/O conference, an Integrated Development Environment (IDE) was released by Katherine Chou under Apache license 2.0; it was called Android Studio and it's used to develop apps on the Android platform. It entered the beta stage in 2014, and the first stable release was on December 2014 from Version 1.0 and it has been announced the official IDE on September 15, 2015. Information on Android Studio and SDK is available at http://developer.android.com/tools/studio/index.html#build-system. Android Studio and SDK heavily depends on the Java SE Development Kit. Java SE Development Kit can be downloaded at http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Some developers prefer different IDEs such as eclipse. For them, Google only offers SDK downloads (http://dl.google.com/android/installer_r24.4.1-windows.exe). There are minimum system requirements that need to be fulfilled in order to install and use the Android Studio effectively. The following procedure is used to install the Android Studio on Windows 7 Professional 64-bit Operating System with 4 GB RAM, 500 Gig Hard Disk Space, and Java Development Kit 7 installed: Install the IDE available for Linux, Windows, and Mac OS X. Android Studio can be downloaded by visiting http://developer.android.com/sdk/index.html. Once the Android Studio is downloaded, run the installer file. By default, an installation window will be shown, as shown in the following screenshot. Click on Next: This setup will automatically check whether the system meets the requirements. Choose all the components that are required and click on Next. It is recommended to read and accept the license and click on Next. It is always recommended to create a new folder to install the tools that will help us track all the evidence in a single place. 
In this case, we have created a folder called Hackbox in C:, as shown in the following screenshot: Now, we can allocate the space required for the Android-accelerated environment, which will provide better performance. So, it is recommended to allocate a minimum of 2GB for this space. All the necessary files will be extracted to C:Hackbox. Once the installation is complete, you will be able to launch Android Studio, as shown in the following screenshot: Android SDK Android SDK provides developers with the ability to completely build, test, and debug apps that run on the Android platform. It has all the relevant software libraries, APIs, system images of the emulators, documentations, and other tools that help create an Android app. We have installed Android Studio with Android SDK. It is crucial to understand how to utilize the in-built SDK tools as much as possible. This section provides an overview of some of the critical tools that we will be using when attacking an Android app during the penetration testing activity. Emulator, simulators, and real devices Sometimes, we tend to believe that all virtual emulations work in exactly the same way in real devices, which is not really the case. Especially for Android, we have multiple OEMs manufacturing multiple devices, with different chipsets running different versions of Android. It would be challenge for developers to ensure that all the functionalities for the app reflect in the same way in all devices. It is very crucial to understand the difference between an emulator, simulator, and real devices. Simulators An objective of a simulator is to simulate the state of an object, which is exactly the same state as that of an object. It is preferable that the testing happens when a mobile interacts with some of the natural behavior of the available resources. These are reimplementations of the original software applications that are written, and they are difficult to debug and are mostly writing in high-level languages. Emulators Emulators predominantly aim at replicating the closest possible behavior of mobile devices. These are typically used to test a mobile's behavior internally, such as hardware, software, and firmware updates. These are typically written in machine-level languages and are easy to debug. This is again the reimplementation of the real software. Pros Fast, simple, and little or no price associated Emulators/simulators are quickly available to test the majority of the functionality of the app that is being developed It is very easy to find the defects using emulators and fix issues Cons The risk of false positives is increased; some of the functions or protection may actually not work on a real device. Differences in software and hardware will arise. Some of the emulators might be able to mimic the hardware. However, it may or may not work when it is actually installed on that particular hardware in reality. There's a lack of network interoperability. Since emulators are not really connected to a Wi-Fi or cellular network, it may not be possible to test network-based risks/functions. Real devices Real devices are physical devices that a user will be interacting with. There are pros and cons of real devices too. 
Pros

Fewer false positives: Results are accurate
Interoperability: All the test cases are run in a live environment
User experience: Real user experience when it comes to CPU utilization, memory, and so on for a given device
Performance: Performance issues can be found quickly with real handsets

Cons

Costs: There are plenty of OEMs, and buying all the devices is not viable.
A slowdown in development: It may not be possible to connect an IDE to a real device as easily as to an emulator. This will significantly slow down the development process.
Other issues: Devices connected locally to the workstation require open USB ports, thus creating an additional entry point.

Threats

A threat is something that can harm an asset that we are trying to protect. In mobile device security, a threat is a possible danger that might exploit a vulnerability to compromise and cause potential harm to a device. A threat can be defined by its motive; it can be any of the following:

Intentional: An individual or a group with an aim to break an application and steal information
Accidental: The malfunctioning of a device or an application may lead to a potential disclosure of sensitive information
Others: Capabilities, circumstantial factors, and so on

Threat agents

A threat agent is used to indicate an individual or a group that can manifest a threat. Threat agents are able to perform the following actions:

Access
Misuse
Disclose
Modify
Deny access

Vulnerability

The security weakness within a system that might allow attackers to exploit it and break the security of the device is called a vulnerability. For example, if a mobile device is stolen and does not have a PIN or passcode enabled, the phone is vulnerable to data theft.

Risk

The intersection between asset (A), threat (T), and vulnerability (V) is a risk. However, the probability (P) of the threat occurring can be included to provide more value to the business:

Risk = A x T x V x P

These terms will help us understand the real risk to a given asset. The business benefits only if these risks are accurately assessed. Understanding threat, vulnerability, and risk is the first step in threat modeling. For a given application, no vulnerabilities, or a vulnerability with no threats, is considered to be a low risk.

Summary

In this article, we saw that mobile devices are susceptible to attacks through various threats, which exist due to the lack of sufficient security measures that can be implemented at various stages of the development of a mobile application. It is necessary to understand how these threats are manifested and learn how to test and mitigate them effectively. Proper knowledge of the underlying architecture and the tools available for the testing of mobile applications will help developers and security testers alike to protect end users from attackers who may be attempting to leverage these vulnerabilities.
Protocol Extensions

Packt
02 Feb 2016
7 min read
In this article by John Hoffman, the author of Protocol Oriented Programming with Swift, you will study the types of protocols that can be extended. Protocol extensions can be used to provide common functionality to all the types that conform to a particular protocol. This gives us the ability to add functionality to any type that conforms to a protocol, rather than adding the functionality to each individual type or though a global function. Protocol extensions, like regular extensions, also give us the ability to add functionality to types that we do not have the source code for. (For more resources related to this topic, see here.) Protocol-Oriented programming would not be possible without protocol extensions. Without protocol extensions, if we wanted to add specific functionality to a group of types that conformed to a protocol, we would have to add the functionality to each of the types. If we were using reference types (classes), we could create a class hierarchy, but this is not possible for value types. Apple has stated that we should prefer value types to reference types, and with protocol-extensions, we have the ability to add common functionality to a group of value and/or reference types that conform to a specific protocol without having to implement that functionality in all the types. Let's take a look at what protocol extensions can do for us. The Swift standard library provides a protocol named CollectionType (documentation here). This protocol inherits from the Indexable and SequenceType protocols and is adopted by all of Swift's standard collection types such as Dictionary and Array. Let's say that we want to add the functionality to types that conform to CollectionType. These would shuffle the items in a collection or return only the items whose index number is an even number. We could very easily add this functionality by extending the CollectionType protocol, as shown in the following code: extension CollectionType { func evenElements() -> [Generator.Element] { var index = self.startIndex var result: [Generator.Element] = [] var i = 0 repeat { if i % 2 == 0 { result.append(self[index]) } index = index.successor() i++ } while (index != self.endIndex) return result } func shuffle() -> [Self.Generator.Element] { return sort(){ left, right in return arc4random() < arc4random() } } } Notice that when we extend a protocol, we use the same syntax and format that we use when we extend other types. We use the extension keyword, followed by the name of the protocol that we extend. We then put the functionality that we add to the protocol between curly brackets. Now, every type that conforms to the CollectionType protocol will receive both the evenElements() and shuffle() functions. The following code shows how we can use these functions with an array: var origArray = [1,2,3,4,5,6,7,8,9,10] var newArray = origArray.evenElements() var ranArray = origArray.shuffle() In the previous code, the newArray array would contain the elements 1, 3, 5, 7, and 9 because these elements have even index numbers (we are looking at the index number, not the value of the element). The ranArray array would contain the same elements as origArray, but the order will be shuffled. Protocol extensions are great to add functionality to a group of types without the need to add the code to each of the individual types; however, it is important to know what types conform to the protocol we extend. 
In the previous example, we extended the CollectionType protocol by adding the evenElements() and shuffle() methods to all the types that conform to the protocol. One of the types that conform to this protocol is the Dictionary type; however, the Dictionary type is an unordered collection. Therefore, the evenElements() method will not work as expected. The following example illustrates this: var origDict = [1:"One",2:"Two",3:"Three",4:"Four"] var returnElements = origDict.evenElements() for item in returnElements { print(item) } Since the Dictionary type does not promise to store the items in the dictionary in any particular order, any of the two items could be printed to the screen in this example. The following shows one possible output from this code: (2, "two") (1, "One") Another problem is that anyone who is not familiar with how the evenElements() method is implemented may expect the returnElements instance to be a dictionary. This is because the original collection is a dictionary type; however, it is actually an instance of the Array type. This can cause some confusion; therefore, we need to be carful when we extend a protocol to make sure that the functionality we add works as expected for the types that conform to the protocol. In the case of the shuffle() and evenElements() methods, we may have been better served if we added the functionality as an extension directly to the Array type, rather than the CollectionType protocol; however, there is another way. We can add constraints to our extension that will limit the types that receive the functionality defined in an extension. In order for a type to receive the functionality defined in a protocol extension, it must satisfy all the constraints defined within the protocol extension. A constraint is added after the name of the protocol that we extend using the where keyword. The following code shows how we could add a constraint to our CollectionType extension: extension CollectionType where Self: ArrayLiteralConvertible { //Extension code here } In the CollectionType protocol extensions, as shown in the previous example, only types that also conform to the ArrayLiteralConvertible protocol will receive the functionality defined in the extension. Since the Dictionary type does not conform to the ArrayLiteralConvertible protocol, it will not receive the functionality defined within the protocol. We could also use constraints to define that our CollectionType protocol extensions only apply to a collection whose elements conform to a specific protocol. In the next example, we use constraints to make sure that the elements in the collection conform to the Comparable protocol. This may be necessary if the functionality that we add relies on the ability to compare two or more elements in the collection. We could add the constraint like this: extension CollectionType where Generator.Element: Comparable { // Add functionality here } Constraints give us the ability to limit which types receive the functionality defined in the extension. One thing that we need to be careful of is using protocol extensions when we should actually be extending an individual type. Protocol extensions should be used when we want to add a functionality to a group of types. If we try to add the functionality to a single type, we should look at extending this individual type. We created a series of protocols that defined the Tae Kwon Do testing areas. 
Let's take a look at how we can extend the TKDRank protocol from this example to add the ability to store which testing areas the student passed and the areas in which they failed. The following code is for the original TKDRank protocol:

protocol TKDRank {
   var color: TKDBeltColors {get}
   var rank: TKDColorRank {get}
}

We will begin by adding an instance of the Dictionary type to our protocol. This dictionary will store the results of our tests. The following example shows what the new TKDRank protocol will look like:

protocol TKDRank {
   var color: TKDBeltColors {get}
   var rank: TKDColorRank {get}
   var passFailTests: [String:Bool] {get set}
}

We can now extend the TKDRank protocol to add a method that we can use to record whether the student passes or fails individual tests. The following code shows how we can do this:

extension TKDRank {
   mutating func setPassFail(testName: String, pass: Bool) {
      passFailTests[testName] = pass
   }
}

Now, any type that conforms to the TKDRank protocol will have the setPassFail() method automatically. Since we have seen how to use extensions and protocol extensions, let's take a look at a real-world example. In this example, we will explore ways in which we can create a text validation framework.

Summary

In this article, we looked at extensions. In the original version of Swift, we were able to use extensions to extend structures, classes, and enumerations, but starting with Swift 2, we are able to use extensions to extend protocols as well. Without protocol extensions, protocol-oriented programming would not be possible, but we need to make sure that we use protocol extensions where appropriate, and do not try to use them in place of regular extensions.

Resources for Article:

Further resources on this subject:

The Swift Programming Language [article]
Your First Swift App [article]
Using Protocols and Protocol Extensions [article]
Getting started with the Jupyter notebook (part 2)

Marin Gilles
02 Feb 2016
5 min read
As seen in the first part of this introduction, you can do a lot of things with the basic capabilities of the Jupyter notebook. But it offers even more possibilities and options, allowing users to create beautiful, interactive documents.

Cells manipulation

When writing your notebook, you will want to use more advanced cell manipulation. Thankfully, the notebook allows you to perform a variety of operations on your cells. You can delete a cell by selecting it and going to Edit -> Delete cell, you can move cells by going to Edit -> Move cell [up | down], or you can cut a cell and paste it by going to Edit -> Cut Cell and then Edit -> Paste Cell ..., selecting the pasting style you need. You can also merge cells by going to Edit -> Merge cell [above|below], if you find that you have many cells that you execute only once, or if you want a big chunk of code to be executed in a single sweep. Keep these commands in mind when writing your notebook -- they will save you a lot of time.

Markdown cells advanced usage

Let's start by exploring the markdown cell type a little more. Even though it says markdown, this type of cell also accepts HTML code. Using this, you can create more advanced styling within your cell, add images, and so on. For example, if you want to add the Jupyter logo to your notebook, with a size of 100px by 100px, on the left of the cell:

<img src="http://blog.jupyter.org/content/images/2015/02/jupyter-sq-text.png" style="width:100px;height:100px;float:left">

This provides the following after the cell evaluation:

To finish with the capabilities of markdown cells: they also support LaTeX syntax. Write your equations in a markdown cell, evaluate the cell, and look at the result. By evaluating this equation:

$$\int_0^{+\infty} x^2 dx$$

You get the LaTeX equation:

Export capabilities

Another powerful feature of the notebook is the export capability. Indeed, you can write your notebook (an illustrated coding course, for example) and export it in multiple formats such as:

HTML
Markdown
ReST
PDF (through LaTeX)
Raw Python

By exporting to PDF, you can create a beautiful document using LaTeX without even writing LaTeX! Or you can publish your notebook as a page on your personal website. You can even write documentation for libraries by exporting to ReST.

Matplotlib integration

If you have ever done plotting using Python, then you probably know about matplotlib. Matplotlib is a Python library used to create beautiful plots, and it really shines when used with the Jupyter notebook. Let's start exploring what can be done with it. To get started using matplotlib in the Jupyter notebook, you need to tell Jupyter to take all images generated by matplotlib and include them in the notebook. To do that, you just evaluate:

%matplotlib inline

It might take a few seconds to run, but you only need to do this once when you start your notebook. Let's make a plot and see how this integration works:

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(20)
y = x**2

plt.plot(x, y)

This simple code will just plot the equation y=x^2. When you evaluate the cell, you get:

As you can see here, the plot was added directly into the notebook, just after the code. We can then change our code, reevaluate, and the image will be updated on the fly. This is a nice feature for every data scientist who wants to have their code and images in the same file, letting them know which code does what exactly. Being able to add some more text to the document is also a great help.
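If you want to push the example a little further, the following cell is a small sketch of our own (it is not part of the original example) showing that standard matplotlib calls such as xlabel(), title(), and legend() render inline just as well; re-evaluate the cell after any change and the figure is redrawn in place:

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(20)
y = x**2

plt.plot(x, y, label='y = x^2')        # the same curve as before
plt.plot(x, x**3, label='y = x^3')     # a second curve for comparison
plt.xlabel('x')                        # label the axes
plt.ylabel('y')
plt.title('Inline matplotlib figure')  # give the figure a title
plt.legend()                           # show which curve is which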
Non local kernel The Jupyter notebook is built in such a way that it is very easy to start Jupyter from a computer, and allow multiple people to connect to the same Jupyter instance through network. Did you notice, in the previous part of this introduction, the following sentence during Jupyter startup: The IPython Notebook is running at: http://localhost:8888/ This means that your notebook is running locally, on your computer, and that you can access it through a browser at the address http://localhost:8888/. It is possible to make the notebook publicly available by changing the configuration. This will allow anyone with the address to connect to this notebook and make modifications to the notebooks remotely, through their Internet browser. The end word As we have seen in these two parts, the Jupyter notebook is a very powerful tool, allowing users to create beautiful documents for data exploration, education, documentation, and, actually, everything you can think of. Don't hesitate to explore more of its possibilities, and give feedback to the developers if you ever run into trouble, or even if you just want to thank them. About the author Marin Gilles is a PhD student in Physics in Dijon, France. A large part of his work is dedicated to physical simulations for which he developed his own simulation framework using Python, and contributed to open-source libraries such as Matplotlib or IPython.
Getting started with the Jupyter notebook (part 1)

Marin Gilles
02 Feb 2016
5 min read
The Jupyter notebook (previously known as IPython notebooks) is an interactive notebook in which you can run code from more than 40 programming languages. In this introduction, we will explore the main features of the Jupyter notebook and see why it can be such a powerful tool for anyone wanting to create beautiful interactive documents and educational resources.

To start working with the notebook, you will need to install it. You can find the full procedure on the Jupyter website. Once it is installed, start it with the following command:

jupyter notebook

You will see something similar to the following displayed:

[I 20:06:36.367 NotebookApp] Writing notebook server cookie secret to /run/user/1000/jupyter/notebook_cookie_secret
[I 20:06:36.813 NotebookApp] Serving notebooks from local directory: /home/your_username
[I 20:06:36.813 NotebookApp] 0 active kernels
[I 20:06:36.813 NotebookApp] The IPython Notebook is running at: http://localhost:8888/
[I 20:06:36.813 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

And the main Jupyter window should open in the folder you started the notebook from (usually your user folder). The main window looks like this:

To create a new notebook, simply click on New and choose the kind of notebook you wish to start under the notebooks section. I only have a Python kernel installed locally, so I will start a Python notebook to work with. A new tab opens, and I get the notebook interface, completely empty. You can see different parts of the notebook:

Name of your notebook
Main toolbar, with options to save your notebook, export, reload, run the notebook, restart the kernel, etc.
Shortcuts
The main part of the notebook, containing the contents of your notebook

Take the time to explore the menus and see your options. If you need help on a very specific subject, concerning the notebook or some libraries, you can try the help menu at the right end of the menu bar.

In the main area, you can see what is called a cell. Each notebook is composed of multiple cells, and each cell will be used for a different purpose. The first cell we have here, starting with In [ ], is a code cell. In this type of cell, you can type any code and execute it. For example, try typing 1 + 2 then hit Shift + Enter. When you hit Shift + Enter, the code in the cell is evaluated, you are placed in a new cell, and you get the following:

You can easily identify the current working cell thanks to the green outline. Let's type something else in the second cell, for example:

for i in range(5):
    print(i)

When evaluating this cell, you then get:

As previously, the code is evaluated and the results are displayed properly. You may notice there is no Out[2] this time. This is because we printed the results, and no value was returned.

One very interesting feature of the notebook is that you can go back to a cell, change it, and reevaluate it, thus updating your whole document. Try this by going back to the first cell, changing 1 + 2 to 2 + 3, and reevaluating the cell by pressing Shift + Enter. You will notice the result is updated to 5 as soon as you evaluate the cell. This can be very powerful when you want to explore data or test an equation with different parameters without having to reevaluate your whole script. You can, however, reevaluate the whole notebook at once by going to Cell -> Run all.

Now that we’ve seen how to enter code, why not try and get a more beautiful and explanatory notebook? To do this, we will use other types of cells, the Header and Markdown cells.
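Before we do that, one more point about code cells that is worth knowing (the two cells below are a small example of our own, not taken from the original article): every code cell in a notebook shares the same kernel, so anything you define in one cell stays available in the cells you evaluate afterwards.

# First cell: define a helper function
def square(x):
    return x * x

# Second cell, evaluated later: the definition is still available
square(7)    # the notebook displays 49 as the Out[] value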
First, let's add a title to our notebook at the very top. To do that, select the first cell, then click Insert -> Insert cell above. As you can see, a new cell was added at the very top of your document. However, this looks exactly like the previous one. Let's make it a title cell by clicking on the cell type menu, in the shortcut toolbar: You then change it to Heading. A pop-up will be displayed explaining how to create different levels of titles, and you will be left with a different type of cell: This cell starts with a # sign, meaning this is a level one title. If you want to make subtitles, you can just use the following notation (explained in the pop-up showing up when changing the cell type): # : First level title ## : Second level title ### : Third level title ... Write your title after the #, then evaluate the cell. You will see the display changing to show a very nice looking title. I added a few other title cells as an example, and an exercise for you: After adding our titles, let's add a few explanations about what we do in each code cell. For this, we will add a cell where we want it to be, and then change its type to Markdown. Then, evaluate your cell. That's it: your text is displayed beautifully! To finish this first introduction, you can rename your notebook by going to File -> Rename and inputing the new name of your notebook. It will then be displayed on the top left of your window, next to the Jupyter logo. In the next part of this introduction, we will go deeper in the capabilities of the notebook and the integration with other Python libraries. Click here to carry on reading now! About the author Marin Gilles is a PhD student in Physics, in Dijon, France. A large part of his work is dedicated to physical simulations for which he developed his own simulation framework using Python, and contributed to open-source libraries such as Matplotlib or IPython.
Cluster Basics and Installation On CentOS 7

Packt
01 Feb 2016
8 min read
In this article by Gabriel A. Canepa, author of the book CentOS High Performance, we will review the basic principles of clustering and show you, step by step, how to set up two CentOS 7 servers as nodes to later use them as members of a cluster. (For more resources related to this topic, see here.) As part of this process, we will install CentOS 7 from scratch in a brand new server as our first cluster member, along with the necessary packages, and finally, configure key-based authentication for SSH access from one node to the other. Clustering fundamentals In computing, a cluster consists of a group of computers (which are referred to as nodes or members) that work together so that the set is seen as a single system from the outside. One typical cluster setup involves assigning a different task to each node, thus achieving a higher performance than if several tasks were performed by a single member on its own. Another classic use of clustering is helping to ensure high availability by providing failover capabilities to the set, where one node may automatically replace a failed member to minimize the downtime of one or several critical services. In either case, the concept of clustering implies not only taking advantage of the computing functionality of each member alone, but also maximizing it by complementing it with the others. As we just mentioned, HA (High-availability) clusters aim to eliminate system downtime by failing services from one node to another in case one of them experiences an issue that renders it inoperative. As opposed to switchover, which requires human intervention, a failover procedure is performed automatically by the cluster without any downtime. In other words, this operation is transparent to end users and clients from outside the cluster. On the other hand, HP (High-performance) clusters use their nodes to perform operations in parallel in order to enhance the performance of one or more applications. High-performance clusters are typically seen in scenarios involving applications that use large collections of data. Why CentOS? Just as the saying goes, Every journey begins with a small step, we will begin our own journey toward clustering by setting up the separate nodes that will make up our system. Our choice of operating system is Linux and CentOS, version 7, as the distribution, that being the latest available release of CentOS as of today. The binary compatibility with Red Hat Enterprise Linux © (which is one of the most well-used distributions in enterprise and scientific environments) along with its well-proven stability are the reasons behind this decision. CentOS 7 along with its previous versions of the distribution are available for download, free of charge, from the project's website at http://www.centos.org/. In addition, specific details about the release can always be consulted in the CentOS wiki, http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7. Among the distinguishing features of CentOS 7, I would like to name the following: It includes systemd as the central system management and configuration utility It uses XFS as the default filesystem It only supports the x86_64 architecture Downloading CentOS To download CentOS, go to http://www.centos.org/download/ and click on one of the three options outlined in the following figure: Download options for CentOS 7 These options are detailed as follows: DVD ISO (~4 GB) is an .iso file that can be burned into regular DVD optical media and includes the common tools. 
Download this file if you have immediate access to a reliable Internet connection that you can use to download other packages and utilities. Everything ISO (~7 GB) is an .iso file with the complete set of packages that are made available in the base repository of CentOS 7. Download this file if you do not have access to a reliable Internet connection or if your plan contemplates the possibility of installing or populating a local or network mirror. The alternative downloads link will take you to a public directory within an official nearby CentOS mirror, where the previous options are available as well as others, including different choices of desktop versions (GNOME or KDE) and the minimal .iso file (~570 MB), which contains the bare bone packages of the distribution. As the minimal install is sufficient for our purpose at hand, we can install other needed packages using yum later, that is, the recommended .iso file to download. CentOS-7.X-YYMM-x86_64-Minimal.iso Here, X indicates the current update number of CentOS 7 and YYMM represent the year and month, both in two-digit notation, when the source code this version is based on was released. CentOS-7.0-1406-x86_64-Minimal.iso This tells us the source code this release is based on dates from the month of June, 2014. Independently of our preferred download method, we will need this .iso file in order to begin with the installation. In addition, feel free to burn it to optical media or a USB drive. Setting up CentOS 7 nodes If you do not have dedicated hardware that you can use to set up the nodes of your cluster, you can still create one using virtual machines over some virtualization software, such as Oracle Virtualbox © or VMware ©, for example. The following setup is going to be performed on a Virtualbox VM with 1 GB of RAM and 30 GB of disk space. We will use the default partitioning schema over LVM as suggested by the installation process. Installing CentOS 7 The splash screen shown in the following screenshot is the first step in the installation process. Highlight Install CentOS 7 using the up and down arrows and press Enter: Splash screen before starting the installation of CentOS 7 Select English (or your preferred installation language) and click on Continue, as shown in the following screenshot: Selecting the language for the installation of CentOS 7 In the following screenshot, you can choose a keyboard layout, set the current date and time, choose a partitioning method, connect the main network interface, and assign a unique hostname for the node. We will name the current node node01 and leave the rest of the settings as default (we will configure the extra network card later). Then, click on Begin installation: Configure keyboard layout, date and time, network and hostname, and partitioning schema While the installation continues in the background, we will be prompted to set the password for the root account and create an administrative user for the node. Once these steps have been confirmed, the corresponding warnings no longer appear, as shown in the following screenshot: Setting the password for root and creating an administrative user account When the process is completed, click on Finish configuration and the installation will finish configuring the system and devices. When the system is ready to boot on its own, you will be prompted to do so. Remove the installation media and click on Reboot. Now, we can proceed with setting up our network interfaces. 
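Before doing so, it is worth confirming that each machine kept the hostname assigned during installation; if you skipped that step (or are preparing the second node), the hostname can be checked and set from the shell with systemd's hostnamectl utility. The snippet below is a quick sketch using the node names adopted in this article:

hostnamectl status                   # display the current hostname and OS details
hostnamectl set-hostname node01      # set the hostname permanently (use node02 on the second node)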
Setting up the network infrastructure

Our rather basic network infrastructure consists of 2 CentOS 7 boxes, with the node01 [192.168.0.2] and node02 [192.168.0.3] host names, respectively, and a gateway router simply called gateway [192.168.0.1]. In CentOS, network cards are configured using scripts in the /etc/sysconfig/network-scripts directory. This is the minimum content that is needed in /etc/sysconfig/network-scripts/ifcfg-enp0s3 for our purposes:

HWADDR="08:00:27:C8:C2:BE"
TYPE="Ethernet"
BOOTPROTO="static"
NAME="enp0s3"
ONBOOT="yes"
IPADDR="192.168.0.2"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
PEERDNS="yes"
DNS1="8.8.8.8"
DNS2="8.8.4.4"

Note that the UUID and HWADDR values will be different in your case. In addition, be aware that cluster machines need to be assigned a static IP address—never leave that up to DHCP! In the preceding configuration file, we used Google's DNS, but if you wish, feel free to use another DNS.

When you're done making changes, save the file and restart the network service in order to apply them:

systemctl restart network.service # Restart the network service

You can verify that the previous changes have taken effect (shown in the Restarting the network service and verifying settings figure) with the following two commands:

systemctl status network.service # Display the status of the network service

And you can confirm the assigned addresses with this command:

ip addr | grep inet # Display the IP addresses

Restarting the network service and verifying settings

You can disregard all error messages related to the loopback interface, as shown in the preceding screenshot. However, you will need to examine carefully any error messages related to the enp0s3 interface, if any, and get them resolved in order to proceed further.

The second interface will be called enp0sX, where X is typically 8. You can verify this with the following command (shown in the following figure):

ip link show

Displaying NIC information

As for the configuration file of enp0s8, you can safely create it by copying the contents of ifcfg-enp0s3. Do not forget, however, to change the hardware (MAC) address as returned by the information on the NIC, and leave the IP address field blank for now:

ip link show enp0s8
cp /etc/sysconfig/network-scripts/ifcfg-enp0s3 /etc/sysconfig/network-scripts/ifcfg-enp0s8

Then, restart the network service. Note that you will also need to set up at least a basic DNS resolution method. Considering that we will set up a cluster with 2 nodes only, we will use /etc/hosts for this purpose. Edit /etc/hosts with the following content:

192.168.0.2 node01
192.168.0.3 node02
192.168.0.1 gateway

Summary

In this article, we reviewed how to install the operating system and listed the necessary software components to implement the basic cluster functionality.

Resources for Article:

Further resources on this subject:

CentOS 7's new firewalld service [article]
Mastering CentOS 7 Linux Server [article]
Resource Manager on CentOS 6 [article]
Scenes and Menus

Packt
01 Feb 2016
19 min read
In this article by Siddharth Shekar, author of the book Cocos2d Cross-Platform Game Development Cookbook, Second Edition, we will cover the following recipes: Adding level selection scenes Scrolling level selection scenes (For more resources related to this topic, see here.) Scenes are the building blocks of any game. Generally, in any game, you have the main menu scene in which you are allowed to navigate to different scenes, such as GameScene, OptionsScene, and CreditsScene. In each of these scenes, you have menus. Similarly in MainScene, there is a play button that is part of a menu that, when pressed, takes the player to GameScene, where the gameplay code runs. Adding level selection scenes In this section, we will take a look at how to add a level selection scene in which you will have buttons for each level you want to play, and if you select it, this particular level will load up. Getting ready To create a level selection screen, you will need a custom sprite that will show a background image of the button and a text showing the level number. We will create these buttons first. Once the button sprites are created, we will create a new scene that we will populate with the background image, name of the scene, array of buttons, and a logic to change the scene to the particular level. How to do it... We will create a new Cocoa Touch class with CCSprite as the parent class and call it LevelSelectionBtn. Then, we will open up the LevelSelectionBtn.h file and add the following lines of code in it: #import "CCSprite.h" @interface LevelSelectionBtn : CCSprite -(id)initWithFilename:(NSString *) filename   StartlevelNumber:(int)lvlNum; @end We will create a custom init function; in this, we will pass the name of the file of the image, which will be the base of the button and integer that will be used to display the text at the top of the base button image. This is all that is required for the header class. In the LevelSelectionBtn.m file, we will add the following lines of code: #import "LevelSelectionBtn.h" @implementation LevelSelectionBtn -(id)initWithFilename:(NSString *) filename StartlevelNumber: (int)lvlNum; {   if (self = [super initWithImageNamed:filename]) {     CCLOG(@"Filename: %@ and levelNUmber: %d", filename, lvlNum);     CCLabelTTF *textLabel = [CCLabelTTF labelWithString:[NSString       stringWithFormat:@"%d",lvlNum ] fontName:@"AmericanTypewriter-Bold" fontSize: 12.0f];     textLabel.position = ccp(self.contentSize.width / 2, self.contentSize.height / 2);     textLabel.color = [CCColor colorWithRed:0.1f green:0.45f blue:0.73f];     [self addChild:textLabel];   }   return self; } @end In our custom init function, we will first log out if we are sending the correct data in. Then, we will create a text label and pass it in as a string by converting the integer. The label is then placed at the center of the current sprite base image by dividing the content size of the image by half to get the center. As the background of the base image and the text both are white, the color of the text is changed to match the color blue so that the text is actually visible. Finally, we will add the text to the current class. This is all for the LevelSelectionBtn class. Next, we will create LevelSelectionScene, in which we will add the sprite buttons and the logic that the button is pressed for. 
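Before doing that, you can quickly sanity-check the new button class on its own by creating a single instance from any existing scene or layer and adding it to the node tree. The following is just a small sketch of our own (the btnBG.png filename and the position are placeholder values here; the real layout is built in the scene below):

// Somewhere inside an existing scene's init method
LevelSelectionBtn *testBtn = [[LevelSelectionBtn alloc] initWithFilename:@"btnBG.png"
                                                        StartlevelNumber:1];
testBtn.position = ccp(100.0f, 100.0f);   // any visible position will do for a quick check
[self addChild:testBtn];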
So, we will now create a new class, LevelSelectionScene, and in the header file, we will add the following lines: #import "CCScene.h" @interface LevelSelectionScene : CCScene{   NSMutableArray *buttonSpritesArray; } +(CCScene*)scene; @end Note that apart from the usual code, we also created NSMutuableArray called buttonsSpritesArray, which will be used in the code. Next, in the LevelSelectionScene.m file, we will add the following: #import "LevelSelectionScene.h" #import "LevelSelectionBtn.h" #import "GameplayScene.h" @implementation LevelSelectionScene +(CCScene*)scene{     return[[self alloc]init]; } -(id)init{   if(self = [super init]){     CGSize  winSize = [[CCDirector sharedDirector]viewSize];     //Add Background Image     CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@ "Bg.png"];     backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);     [self addChild:backgroundImage];     //add text heading for file     CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@     "LevelSelectionScene" fontName:@"AmericanTypewriter-Bold" fontSize:36.0f];     mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);     [self addChild:mainmenuLabel];     //initialize array     buttonSpritesArray = [NSMutableArray array];     int widthCount = 5;     int heightCount = 5;     float spacing = 35.0f;     float halfWidth = winSize.width/2 - (widthCount-1) * spacing * 0.5f;     float halfHeight = winSize.height/2 + (heightCount-1) * spacing * 0.5f;     int levelNum = 1;     for(int i = 0; i < heightCount; ++i){       float y = halfHeight - i * spacing;       for(int j = 0; j < widthCount; ++j){         float x = halfWidth + j * spacing;         LevelSelectionBtn* lvlBtn = [[LevelSelectionBtnalloc]           initWithFilename:@"btnBG.png"           StartlevelNumber:levelNum];         lvlBtn.position = CGPointMake(x,y);         lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];         [self addChild:lvlBtn];         [buttonSpritesArray addObject: lvlBtn];         levelNum++;       }     }   }    return self; } Here, we will add the background image and heading text for the scene and initialize NSMutabaleArray. We will then create six new variables, as follows: WidthCount: This is the number of columns we want to have heightCount: This is the number of rows we want spacing: This is the distance between each of the sprite buttons so that they don't overlap halfWidth: This is the distance in the x axis from the center of the screen to upper-left position of the first sprite button that will be placed halfHeight: This is the distance in the y direction from the center to the upper-left position of the first sprite button that will be placed lvlNum: This is the counter with an initial value of 1. This is incremented each time a button is created to show the text in the button sprite. In the double loop, we will get the x and y coordinates of each of the button sprites. First, to get the y position from the half height, we will subtract the spacing multiplied by the j counter. As the value of j is initially 0, the y value remains the same as halfWidth for the topmost row. Then, for the x value of the position, we will add half the width of the spacing multiplied by the i counter. Each time, the x position is incremented by the spacing. After getting the x and y position, we will create a new LevelSelectionBtn sprite and pass in the btnBG.png image and also pass in the value of lvlNum to create the button sprite. 
We will set the position to the value of x and y that we calculated earlier. To refer to the button by number, we will assign the name of the sprite, which is the same as the number of the level. So, we will convert lvlNum to a string and pass in the value. Then, the button will be added to the scene, and it will also be added to the array we created globally as we will need to cycle through the images later. Finally, we will increment the value of lvlNum. However, we have still not added any interactivity to the sprite buttons so that when it is pressed, it will load the required level. For added touch interactivity, we will use the touchBegan function built right into Cocos2d. We will create more complex interfaces, but for now, we will use the basic touchBegan function. In the same file, we will add the following code right between the init function and @end: -(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{   CGPoint location = [touch locationInNode:self];   for (CCSprite *sprite in buttonSpritesArray)   {     if (CGRectContainsPoint(sprite.boundingBox, location)){       CCLOG(@" you have pressed: %@", sprite.name);       CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];       [[CCDirector sharedDirector]replaceScene:[[GameplayScene       alloc]initWithLevel:sprite.name] withTransition:transition];       self.userInteractionEnabled = false;     }   } } The touchBegan function will be called each time we touch the screen. So, once we touch the screen, it gets the location of where you touched and stores it as a variable called location. Then, using the for in loop, we will loop through all the button sprites we added in the array. Using the RectContainsPoint function, we will check whether the location that we pressed is inside the rect of any of the sprites in the loop. We will then log out so that we will get an indication in the console as to which button number we have clicked on so that we can be sure that the right level is loaded. A crossfade transition is created, and the current scene is swapped with GameplayScene with the name of the current sprite clicked on. Finally, we have to set the userInteractionEnabled Boolean false so that the current class stops listening to the touch. Also, at the top of the class in the init function, we enabled this Boolean, so we will add the following line of code as highlighted in the init function:     if(self = [super init]){       self.userInteractionEnabled = TRUE;       CGSize  winSize = [[CCDirector sharedDirector]viewSize]; How it works... So, we are done with the LevelSelectionScene class, but we still need to add a button in MainScene to open LevelSelectionScene. 
In MainScene, we will add the following lines in the init function, in which we will add menubtn and a function to be called once the button is clicked on as highlighted here:         CCButton *playBtn = [CCButton buttonWithTitle:nil           spriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_normal.png"]           highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@ "playBtn_pressed.png"]           disabledSpriteFrame:nil];         [playBtn setTarget:self selector:@selector(playBtnPressed:)];          CCButton *menuBtn = [CCButton buttonWithTitle:nil           spriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]           highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]           disabledSpriteFrame:nil];          [menuBtn setTarget:self selector:@selector(menuBtnPressed:)];         CCLayoutBox * btnMenu;         btnMenu = [[CCLayoutBox alloc] init];         btnMenu.anchorPoint = ccp(0.5f, 0.5f);         btnMenu.position = CGPointMake(winSize.width/2, winSize.height * 0.5);          btnMenu.direction = CCLayoutBoxDirectionVertical;         btnMenu.spacing = 10.0f;          [btnMenu addChild:menuBtn];         [btnMenu addChild:playBtn];          [self addChild:btnMenu]; Don't forget to include the menuBtn.png file included in the resources folder of the project, otherwise you will get a build error. Next, also add in the menuBtnPressed function, which will be called once menuBtn is pressed and released, as follows: -(void)menuBtnPressed:(id)sender{   CCLOG(@"menu button pressed");   CCTransition *transition = [CCTransition transitionCrossFadeWith Duration:0.20];   [[CCDirector sharedDirector]replaceScene:[[LevelSelectionScene alloc]init] withTransition:transition]; } Now, the MainScene should similar to the following: Click on the menu button below the play button, and you will be able to see LevelSelectionScreen in all its glory. Now, click on any of the buttons to open up the gameplay scene displaying the number that you clicked on. In this case, I clicked on button number 18, which is why it shows 18 in the gameplay scene when it loads. Scrolling level selection scenes If your game has say 20 levels, it is okay to have one single level selection scene to display all the level buttons; but what if you have more? In this section, we will modify the previous section's code, create a node, and customize the class to create a scrollable level selection scene. Getting ready We will create a new class called LevelSelectionLayer, inherit from CCNode, and move all the content we added in LevelSelectionScene to it. This is done so that we can have a separate class and instantiate it as many times as we want in the game. How to do it... In the LevelSelectionLayer.m file, we will change the code to the following: #import "CCNode.h" @interface LevelSelectionLayer : CCNode {   NSMutableArray *buttonSpritesArray; } -(id)initLayerWith:(NSString *)filename   StartlevelNumber:(int)lvlNum   widthCount:(int)widthCount   heightCount:(int)heightCount   spacing:(float)spacing; @end We changed the init function so that instead of hardcoding the values, we can create a more flexible level selection layer. 
In the LevelSelectionLayer.m file, we will add the following: #import "LevelSelectionLayer.h" #import "LevelSelectionBtn.h" #import "GameplayScene.h" @implementation LevelSelectionLayer - (void)onEnter{   [super onEnter];   self.userInteractionEnabled = YES; } - (void)onExit{   [super onExit];   self.userInteractionEnabled = NO; } -(id)initLayerWith:(NSString *)filename StartlevelNumber:(int)lvlNum widthCount:(int)widthCount heightCount:(int)heightCount spacing: (float)spacing{   if(self = [super init]){     CGSize  winSize = [[CCDirector sharedDirector]viewSize];     self.contentSize = winSize;     buttonSpritesArray = [NSMutableArray array];     float halfWidth = self.contentSize.width/2 - (widthCount-1) * spacing * 0.5f;     float halfHeight = self.contentSize.height/2 + (heightCount-1) * spacing * 0.5f;     int levelNum = lvlNum;     for(int i = 0; i < heightCount; ++i){       float y = halfHeight - i * spacing;       for(int j = 0; j < widthCount; ++j){         float x = halfWidth + j * spacing;         LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc]         initWithFilename:filename StartlevelNumber:levelNum];         lvlBtn.position = CGPointMake(x,y);         lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];         [self addChild:lvlBtn];         [buttonSpritesArray addObject: lvlBtn];         levelNum++;       }     }   }    return self; } -(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{   CGPoint location = [touch locationInNode:self];   CCLOG(@"location: %f, %f", location.x, location.y);   CCLOG(@"touched");   for (CCSprite *sprite in buttonSpritesArray)   {     if (CGRectContainsPoint(sprite.boundingBox, location)){       CCLOG(@" you have pressed: %@", sprite.name);       CCTransition *transition = [CCTransition transitionCross FadeWithDuration:0.20];       [[CCDirector sharedDirector]replaceScene:[[GameplayScene       alloc]initWithLevel:sprite.name] withTransition:transition];     }   } } @end The major changes are highlighted here. The first is that we added and removed the touch functionality using the onEnter and onExit functions. The other major change is that we set the contentsize value of the node to winSize. Also, while specifying the upper-left coordinate of the button, we did not use winsize for the center but the contentsize of the node. 
Let's move to LevelSelectionScene now; we will execute the following code: #import "CCScene.h" @interface LevelSelectionScene : CCScene{   int layerCount;   CCNode *layerNode; } +(CCScene*)scene; @end In the header file, we will change it to add two global variables in it: The layerCount variable keeps the total layers and nodes you add The layerNode variable is an empty node added for convenience so that we can add all the layer nodes to it so that we can move it back and forth instead of moving each layer node individually Next, in the LevelSelectionScene.m file, we will add the following: #import "LevelSelectionScene.h" #import "LevelSelectionBtn.h" #import "GameplayScene.h" #import "LevelSelectionLayer.h" @implementation LevelSelectionScene +(CCScene*)scene{   return[[self alloc]init]; } -(id)init{   if(self = [super init]){     CGSize  winSize = [[CCDirector sharedDirector]viewSize];     layerCount = 1;     //Basic CCSprite - Background Image     CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];     backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);     [self addChild:backgroundImage];     CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:     @"LevelSelectionScene" fontName:@"AmericanTypewriter-Bold" fontSize:36.0f];     mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);     [self addChild:mainmenuLabel];     //empty node     layerNode = [[CCNode alloc]init];     [self addChild:layerNode];     int widthCount = 5;     int heightCount = 5;     float spacing = 35;     for(int i=0; i<3; i++){       LevelSelectionLayer* lsLayer = [[LevelSelectionLayer alloc]initLayerWith:@"btnBG.png"         StartlevelNumber:widthCount * heightCount * i + 1         widthCount:widthCount         heightCount:heightCount         spacing:spacing];       lsLayer.position = ccp(winSize.width * i, 0);       [layerNode addChild:lsLayer];     }     CCButton *leftBtn = [CCButton buttonWithTitle:nil       spriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]       highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]       disabledSpriteFrame:nil];     [leftBtn setTarget:self selector:@selector(leftBtnPressed:)];     CCButton *rightBtn = [CCButton buttonWithTitle:nil       spriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]       highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]       disabledSpriteFrame:nil];     [rightBtn setTarget:self selector:@selector(rightBtnPressed:)];     CCLayoutBox * btnMenu;     btnMenu = [[CCLayoutBox alloc] init];     btnMenu.anchorPoint = ccp(0.5f, 0.5f);     btnMenu.position = CGPointMake(winSize.width * 0.5, winSize.height * 0.2);     btnMenu.direction = CCLayoutBoxDirectionHorizontal;     btnMenu.spacing = 300.0f;     [btnMenu addChild:leftBtn];     [btnMenu addChild:rightBtn];     [self addChild:btnMenu z:4];   }   return self; } -(void)rightBtnPressed:(id)sender{   CCLOG(@"right button pressed");   CGSize  winSize = [[CCDirector sharedDirector]viewSize];   if(layerCount >=0){     CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20       position:ccp(-winSize.width, 0)];     [layerNode runAction:moveBy];     layerCount--;   } } -(void)leftBtnPressed:(id)sender{   CCLOG(@"left button pressed");   CGSize  winSize = [[CCDirector sharedDirector]viewSize];   if(layerCount <=0){     CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20       position:ccp(winSize.width, 0)];     [layerNode runAction:moveBy];     layerCount++;   } } @end How it 
works... The important piece of the code is highlighted. Apart from adding the usual background and text, we will initialize layerCount to 1 and initialize the empty layerNode variable. Next, we will create a for loop, in which we will add the three level selection layers by passing the starting value of each selection layer in the btnBg image, the width count, height count, and spacing between each of the buttons. Also, note how the layers are positioned at a width's distance from each other. The first one is visible to the player. The consecutive layers are added off screen similarly to how we placed the second image offscreen while creating the parallax effect. Then, each level selection layer is added to layerNode as a child. We will also create the left-hand side and right-hand side buttons so that we can move layerNode to the left and right once clicked on. We will create two functions called leftBtnPressed and rightBtnPressed in which we will add functionality when the left-hand side or right-hand side button gets pressed. First, let's look at the rightBtnPressed function. Once the button is pressed, we will log out this button. Next, we will get the size of the window. We will then check whether the value of layerCount is greater than zero, which is true as we set the value as 1. We will create a moveBy action, in which we give the window width for the movement in the x direction and 0 for the movement in the y direction as we want the movement to be only in the x direction and not y. Lastly, we will pass in a value of 0.20f. The action is then run on layerNode and the layerCount value is decremented. In the leftBtnPressed function, the opposite is done to move the layer in the opposite direction. Run the game to see the change in LevelSelectionScene. As you can't go left, pressing the left button won't do anything. However, if you press the right button, you will see that the layer scrolls to show the next set of buttons. Summary In this article, we learned about adding level selection scenes and scrolling level selection scenes in Cocos2d. Resources for Article: Further resources on this subject: Getting started with Cocos2d-x [article] Dragging a CCNode in Cocos2D-Swift [article] Run Xcode Run [article]

The Vertex Functions

Packt
01 Feb 2016
18 min read
In this article by Alan Zucconi, author of the book Unity 5.x Shaders and Effects Cookbook, we will see that the term shader originates from the fact that Cg has been mainly used to simulate realistic lighting conditions (shadows) on three-dimensional models. Despite this, shaders are now much more than that. They not only define the way objects are going to look, but also redefine their shapes entirely. If you want to learn how to manipulate the geometry of a three-dimensional object only via shaders, this article is for you. In this article, you will learn the following: Extruding your models Implementing a snow shader Implementing a volumetric explosion (For more resources related to this topic, see here.) In this article, we will explain that 3D models are not just a collection of triangles. Each vertex can contain data, which is essential for correctly rendering the model itself. This article will explore how to access this information in order to use it in a shader. We will also explore how the geometry of an object can be deformed simply using Cg code. Extruding your models One of the biggest problems in games is repetition. Creating new content is a time-consuming task and when you have to face a thousand enemies, the chances are that they will all look the same. A relatively cheap technique to add variations to your models is using a shader that alters its basic geometry. This recipe will show a technique called normal extrusion, which can be used to create a chubbier or skinnier version of a model, as shown in the following image with the soldier from the Unity camp (Demo Gameplay): Getting ready For this recipe, we need to have access to the shader used by the model that you want to alter. Once you have it, we will duplicate it so that we can edit it safely. It can be done as follows: Find the shader that your model is using and, once selected, duplicate it by pressing Ctrl+D. Duplicate the original material of the model and assign the cloned shader to it. Assign the new material to your model and start editing it. For this effect to work, your model should have normals. How to do it… To create this effect, start by modifying the duplicated shader as shown in the following: Let's start by adding a property to our shader, which will be used to modulate its extrusion. The range that is presented here goes from -1 to +1;however, you might have to adjust that according to your own needs, as follows: _Amount ("Extrusion Amount", Range(-1,+1)) = 0 Couple the property with its respective variable, as shown in the following: float _Amount; Change the pragma directive so that it now uses a vertex modifier. You can do this by adding vertex:function_name at the end of it. In our case, we have called the vertfunction, as follows: #pragma surface surf Lambert vertex:vert Add the following vertex modifier: void vert (inout appdata_full v) { v.vertex.xyz += v.normal * _Amount; } The shader is now ready; you can use the Extrusion Amount slider in the Inspectormaterial to make your model skinnier or chubbier. How it works… Surface shaders works in two steps: the surface function and the vertex modifier. It takes the data structure of a vertex (which is usually called appdata_full) and applies a transformation to it. This gives us the freedom to virtually do everything with the geometry of our model. We signal the graphics processing unit(GPU) that such a function exists by adding vertex:vert to the pragma directive of the surface shader. 
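To make the flow of the recipe easier to follow, here is what the individual steps look like when they are assembled into a single, minimal surface shader. This listing is only an illustrative sketch: the recipe itself modifies a copy of whatever shader your model is already using, and the _MainTex handling below is just default surface shader boilerplate rather than anything taken from that shader:

Shader "Custom/NormalExtrusion" {
  Properties {
    _MainTex ("Base (RGB)", 2D) = "white" {}
    _Amount ("Extrusion Amount", Range(-1,1)) = 0
  }
  SubShader {
    Tags { "RenderType"="Opaque" }
    CGPROGRAM
    #pragma surface surf Lambert vertex:vert

    sampler2D _MainTex;
    float _Amount;

    struct Input {
      float2 uv_MainTex;
    };

    // The vertex modifier runs before the surface function and is free to
    // move every vertex; here it pushes each vertex along its own normal.
    void vert (inout appdata_full v) {
      v.vertex.xyz += v.normal * _Amount;
    }

    // A plain Lambert surface function that just samples the base texture.
    void surf (Input IN, inout SurfaceOutput o) {
      o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }
    ENDCG
  }
  FallBack "Diffuse"
}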
One of the most simple yet effective techniques that can be used to alter the geometry of a model is called normal extrusion. It works by projecting a vertex along its normal direction. This is done by the following line of code:

v.vertex.xyz += v.normal * _Amount;

The position of a vertex is displaced by _Amount units toward the vertex normal. If _Amount gets too high, the results can be quite unpleasant. However, you can add a lot of variation to your models with smaller values.

There's more…
If you have multiple enemies and you want each one to have its own weight, you have to create a different material for each one of them. This is necessary as materials are normally shared between models and changing one will change all of them. There are several ways in which you can do this; the quickest one is to create a script that automatically does it for you. The following script, once attached to an object with a Renderer, will duplicate its first material and set the _Amount property automatically, as follows:

using UnityEngine;

public class NormalExtruder : MonoBehaviour {

  [Range(-0.0001f, 0.0001f)]
  public float amount = 0;

  // Use this for initialization
  void Start () {
    Material material = GetComponent<Renderer>().sharedMaterial;
    Material newMaterial = new Material(material);
    newMaterial.SetFloat("_Amount", amount);
    GetComponent<Renderer>().material = newMaterial;
  }
}

Adding extrusion maps
This technique can actually be improved even further. We can add an extra texture (or use the alpha channel of the main one) to indicate the amount of the extrusion. This allows better control over which parts are raised or lowered. The following code shows how it is possible to achieve such an effect:

sampler2D _ExtrusionTex;

void vert(inout appdata_full v) {
  float4 tex = tex2Dlod (_ExtrusionTex, float4(v.texcoord.xy,0,0));
  float extrusion = tex.r * 2 - 1;
  v.vertex.xyz += v.normal * _Amount * extrusion;
}

The red channel of _ExtrusionTex is used as a multiplying coefficient for normal extrusion. A value of 0.5 leaves the model unaffected; darker or lighter shades are used to extrude vertices inward or outward, respectively. You should notice that to sample a texture in a vertex modifier, tex2Dlod should be used instead of tex2D. In shaders, color channels go from 0 to 1. Although, sometimes, you need to represent negative values as well (such as inward extrusion). When this is the case, treat 0.5 as zero, having smaller values as negative and higher values as positive. This is exactly what happens with normals, which are usually encoded in RGB textures. The UnpackNormal() function is used to map a value in the (0,1) range onto the (-1,+1) range. Mathematically speaking, this is equivalent to tex.r * 2 - 1. Extrusion maps are perfect to zombify characters by shrinking the skin in order to highlight the shape of the bones underneath. The following image shows how a "healthy" soldier can be transformed into a corpse using a shader and an extrusion map. Compared to the previous example, you can notice how the clothing is unaffected. The shader used in the following image also darkens the extruded regions in order to give an even more emaciated look to the soldier:

Implementing a snow shader
The simulation of snow has always been a challenge in games. The vast majority of games simply baked snow directly in the models' textures so that their tops look white. However, what if one of these objects starts rotating?
Snow is not just a lick of paint on a surface; it is a proper accumulation of material and it should be treated as so. This recipe will show how to give a snowy look to your models using just a shader. This effect is achieved in two steps. First, a white colour is used for all the triangles facing the sky. Second, their vertices are extruded to simulate the effect of snow accumulation. You can see the result in the following image:   Keep in mind that this recipe does not aim to create photorealistic snow effect. It provides a good starting point;however, it is up to an artist to create the right textures and find the right parameters to make it fit your game. Getting ready This effect is purely based on shaders. We will need to do the following: Create a new shader for the snow effect. Create a new material for the shader. Assign the newly created material to the object that you want to be snowy. How to do it… To create a snowy effect, open your shader and make the following changes: Replace the properties of the shader with the following ones: _MainColor("Main Color", Color) = (1.0,1.0,1.0,1.0) _MainTex("Base (RGB)", 2D) = "white" {} _Bump("Bump", 2D) = "bump" {} _Snow("Level of snow", Range(1, -1)) = 1 _SnowColor("Color of snow", Color) = (1.0,1.0,1.0,1.0) _SnowDirection("Direction of snow", Vector) = (0,1,0) _SnowDepth("Depth of snow", Range(0,1)) = 0 Complete them with their relative variables, as follows: sampler2D _MainTex; sampler2D _Bump; float _Snow; float4 _SnowColor; float4 _MainColor; float4 _SnowDirection; float _SnowDepth; Replace the Input structure with the following: struct Input { float2 uv_MainTex; float2 uv_Bump; float3 worldNormal; INTERNAL_DATA }; Replace the surface function with the following one. It will color the snowy parts of the model white: void surf(Input IN, inout SurfaceOutputStandard o) { half4 c = tex2D(_MainTex, IN.uv_MainTex); o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump)); if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow) o.Albedo = _SnowColor.rgb; else o.Albedo = c.rgb * _MainColor; o.Alpha = 1; } Configure the pragma directive so that it uses a vertex modifiers, as follows: #pragma surface surf Standard vertex:vert Add the following vertex modifiers that extrudes the vertices covered in snow, as follows: void vert(inout appdata_full v) { float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection); if (dot(v.normal, sn.xyz) >= _Snow) v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow; } You can now use the Inspectormaterial to select how much of your mode is going to be covered and how thick the snow should be. How it works… This shader works in two steps. Coloring the surface The first one alters the color of the triangles thatare facing the sky. It affects all the triangles with a normal direction similar to _SnowDirection. Comparing unit vectors can be done using the dot product. When two vectors are orthogonal, their dot product is zero; it is one (or minus one) when they are parallel to each other. The _Snowproperty is used to decide how aligned they should be in order to be considered facing the sky. If you look closely at the surface function, you can see that we are not directly dotting the normal and the snow direction. This is because they are usually defined in a different space. The snow direction is expressed in world coordinates, while the object normals are usually relative to the model itself. If we rotate the model, its normals will not change, which is not what we want. 
To fix this, we need to convert the normals from their object coordinates to world coordinates. This is done with the WorldNormalVector()function, as follows: if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow) o.Albedo = _SnowColor.rgb; else o.Albedo = c.rgb * _MainColor; This shader simply colors the model white; a more advanced one should initialize the SurfaceOutputStandard structure with textures and parameters from a realistic snow material. Altering the geometry The second effect of this shader alters the geometry to simulate the accumulation of snow. Firstly, we identify the triangles that have been coloured white by testing the same condition used in the surface function. This time, unfortunately, we cannot rely on WorldNormalVector()asthe SurfaceOutputStandard structure is not yet initialized in the vertex modifier. We will use this other method instead, which converts _SnowDirection in objectcoordinates, as follows: float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection); Then, we can extrude the geometry to simulate the accumulation of snow, as shown in the following: if (dot(v.normal, sn.xyz) >= _Snow) v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow; Once again, this is a very basic effect. One could use a texture map to control the accumulation of snow more precisely or to give it a peculiar, uneven look. See also If you need high quality snow effects and props for your game, you can also check the following resources in the Asset Storeof Unity: Winter Suite ($30): A much more sophisticated version of the snow shader presented in this recipe can be found at: https://www.assetstore.unity3d.com/en/#!/content/13927 Winter Pack ($60): A very realistic set of props and materials for snowy environments are found at: https://www.assetstore.unity3d.com/en/#!/content/13316 Implementing a volumetric explosion The art of game development is a clever trade-off between realism and efficiency. This is particularly true for explosions; they are at the heart of many games, yet the physics behind them is often beyond the computational power of modern machines. Explosions are essentially nothing more than hot balls of gas; hence, the only way to correctly simulate them is by integrating a fluid simulation in your game. As you can imagine, this is infeasible for runtime applications and many games simply simulate them with particles. When an object explodes, it is common to simply instantiate many fire, smoke, and debris particles that can have believableresulttogether. This approach, unfortunately, is not very realistic and is easy to spot. There is an intermediate technique that can be used to achieve a much more realistic effect: the volumetric explosions. The idea behind this concept is that the explosions are not treated like a bunch of particlesanymore; they are evolving three-dimensional objects and not just flat two-dimensionaltextures. Getting ready Start this recipe with the following steps: Create a new shader for this effect. Create a new material to host the shader. Attach the material to a sphere. You can create one directly from the editor bynavigating to GameObject | 3D Object | Sphere. This recipe works well with the standard Unity Sphere;however, if you need big explosions, you might need to use a more high-poly sphere. In fact, a vertex function can only modify the vertices of a mesh. All the other points will be interpolated using the positions of the nearby vertices. Fewer vertices mean lower resolution for your explosions. 
For this recipe, you will also need a ramp texture that has, in a gradient, all the colors that your explosions will have. You can create such a texture using GIMP or Photoshop. The following is the one used for this recipe: Once you have the picture, import it to Unity. Then, from its Inspector, make sure the Filter Mode is set to Bilinear and the Wrap Mode to Clamp. These two settings make sure that the ramp texture is sampled smoothly. Lastly, you will need a noise texture. You can find many of them on the Internet as freely available noise textures. The most commonly used ones are generated using Perlin noise.

How to do it…
This effect works in two steps: a vertex function to change the geometry and a surface function to give it the right color. The steps are as follows:

Add the following properties for the shader:

_RampTex("Color Ramp", 2D) = "white" {}
_RampOffset("Ramp offset", Range(-0.5,0.5))= 0
_NoiseTex("Noise tex", 2D) = "gray" {}
_Period("Period", Range(0,1)) = 0.5
_Amount("_Amount", Range(0, 1.0)) = 0.1
_ClipRange("ClipRange", Range(0,1)) = 1

Add their relative variables so that the Cg code of the shader can actually access them, as follows:

sampler2D _RampTex;
float _RampOffset;
sampler2D _NoiseTex;
float _Period;
float _Amount;
float _ClipRange;

Change the Input structure so that it receives the UV data of the noise texture, as shown in the following:

struct Input {
  float2 uv_NoiseTex;
};

Add the following vertex function:

void vert(inout appdata_full v) {
  float3 disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy,0,0));
  float time = sin(_Time[3] *_Period + disp.r*10);
  v.vertex.xyz += v.normal * disp.r * _Amount * time;
}

Add the following surface function:

void surf(Input IN, inout SurfaceOutput o) {
  float3 noise = tex2D(_NoiseTex, IN.uv_NoiseTex);
  float n = saturate(noise.r + _RampOffset);
  clip(_ClipRange - n);
  half4 c = tex2D(_RampTex, float2(n,0.5));
  o.Albedo = c.rgb;
  o.Emission = c.rgb*c.a;
}

We will specify the vertex function in the pragma directive, adding the nolightmap parameter to prevent Unity from adding realistic lighting to our explosion, as follows:

#pragma surface surf Lambert vertex:vert nolightmap

The last step is to select the material and attach the two textures in the relative slots from its Inspector. This is an animated material, meaning that it evolves over time. You can watch the material changing in the editor by clicking on Animated Materials from the Scene window:

How it works…
If you are reading this recipe, you are already familiar with how surface shaders and vertex modifiers work. The main idea behind this effect is to alter the geometry of the sphere in a seemingly chaotic way, exactly like it happens in a real explosion. The following image shows how such an explosion will look in the editor. You can see that the original mesh has been heavily deformed in the following image: The vertex function is a variant of the technique called normal extrusion. The difference here is that the amount of the extrusion is determined by both the time and the noise texture. When you need a random number in Unity, you can rely on the Random.Range() function. There is no standard way to get random numbers within a shader; therefore, the easiest way is to sample a noise texture.
There is no standard way to do this, therefore, take the following only as an example: float time = sin(_Time[3] *_Period + disp.r*10); The built-in _Time[3]variable is used to get the current time from the shader and the red channel of the disp.rnoise texture is used to make sure that each vertex moves independently. The sin()function makes the vertices go up and down, simulating the chaotic behavior of an explosion. Then, the normal extrusion takes place as shown in the following: v.vertex.xyz += v.normal * disp.r * _Amount * time; You should play with these numbers and variables until you find a pattern of movement that you are happy with. The last part of the effect is achieved by the surface function. Here, the noise texture is used to sample a random color from the ramp texture. However, there are two more aspects that are worth noticing. The first one is the introduction of _RampOffset. Its usage forces the explosion to sample colors from the left or right side of the texture. With positive values, the surface of the explosion tends to show more grey tones— which is exactly what happens when it is dissolving. You can use _RampOffset to determine how much fire or smoke should be there in your explosion. The second aspect introduced in the surface function is the use of clip(). Theclip()function clips (removes) pixels from the rendering pipeline. When invoked with a negative value, the current pixel is not drawn. This effect is controlled by _ClipRange, which determines the pixels of the volumetric explosions that are going to be transparent. By controlling both _RampOffset and _ClipRange, you have full control to determine how the explosion behaves and dissolves. There's more… The shader presented in this recipe makes a sphere look like an explosion. If you really want to use it, you should couple it with some scripts in order to get the most out of it. The best thing to do is to create an explosion object and turn it to a prefab so that you can reuse it every time you need. You can do this by dragging the sphere back in the Project window. Once it is done, you can create as many explosions as you want using the Instantiate() function. However,it is worth noticing that all the objects with the same material share the same look. If you have multiple explosions at the same time, they should not use the same material. When you are instantiating a new explosion, you should also duplicate its material. You can do this easily with the following piece of code: GameObject explosion = Instantiate(explosionPrefab) as GameObject; Renderer renderer = explosion.GetComponent<Renderer>(); Material material = new Material(renderer.sharedMaterial); renderer.material = material; Lastly, if you are going to use this shader in a realistic way, you should attach a script to it, which changes its size—_RampOffsetor_ClipRange—accordingly to the type of explosion you want to recreate. See also A lot more can be done to make explosions realistic. The approach presented in this recipe only creates an empty shell; the explosion in it is actually empty. An easy trick to improve it is to create particles in it. However, you can only go so far with this. The short movie,The Butterfly Effect (http://unity3d.com/pages/butterfly), created by Unity Technologies in collaboration with Passion Pictures and Nvidia, is the perfect example. It is based on the same concept of altering the geometry of a sphere;however, it renders it with a technique called volume ray casting. 
In a nutshell, it renders the geometry as if it were complete. You can see the following image as an example:

If you are looking for high-quality explosions, refer to Pyro Technix (https://www.assetstore.unity3d.com/en/#!/content/16925) on the Asset Store. It includes volumetric explosions and couples them with realistic shockwaves.

Summary
In this article, we saw the recipes to extrude models and to implement a snow shader and a volumetric explosion.

Resources for Article:
Further resources on this subject:
Lights and Effects [article]
Looking Back, Looking Forward [article]
Animation features in Unity 5 [article]

Auditing and E-discovery

Packt
01 Feb 2016
17 min read
In this article by Biswanath Banerjee, the author of the book Microsoft Exchange Server PowerShell Essentials, we are going to discuss about the new features in Exchange 2013 and 2016 release that will help organizations meet their compliance and E-discovery requirements. Let's learn about the Auditing and E-discovery features available in Exchange 2013 and online. (For more resources related to this topic, see here.) The following topics will be covered in this Article:- New features in Exchange 2016 The In-place hold Retrieving and exporting emails for Auditing Retrieving content using KQL queries Searching and removing emails from the server Enabling Auditing and understanding its usage Writing a basic script Now, let's review different features in Exchange 2013 and 2016 that can be used by organizations to meet their compliance requirements: The In-place hold: In Exchange 2010, when a mailbox is enabled for a feature called the Litigation hold, all mailbox data will be stored until the hold is removed. With Exchange 2013 and 2016 release, the In-place hold allows the Administrators granularity compared to the Litigation hold feature in Exchange 2010. Now, administrators can choose what to hold and for how long the hold to work. In Place E-Discovery: In Exchange 2010, when you run a discovery search, it will copy the items matching the searched criteria into a discovery mailbox from which you can export it to a PST file or provide access to a group of people. In Exchange, when you run the discovery search, you can see the results live from your search. You will also get an option to create a saved search to be used later with minor modifications if required. Audit logs: In Exchange 2013 and 2016, you can enable two types of audit logging: Administrator audit logs: Administrator audit logs will record any action performed by the administrator using tools such as Exchange Admin Center and Exchange management shell Mailbox Audit logs: Mailbox audit logs can be enabled for individual mailboxes and will store the log entries in their Recoverable items audits subfolder The In-Place hold The Exchange 2013 and 2016 release allows admins to create granular hold policies by allowing them to preserve items in the mailbox using the following scenarios: Indefinite hold: This feature is called Litigation hold in Exchange 2010, and it allows mailbox items to be stored indefinitely. The items in this case are never deleted. It can be used where a group of users are working on some highly sensitive content that might need a review later. The following example sets the mailbox for Amy Alberts on the Litigation hold (in Exchange 2010) for indefinite hold: Set-Mailbox -Identity amya -LitigationHoldEnabled $True In Exchange 2013 and 2016, you will need to use the New-MailboxSearch cmdlet without any parameters as shown next to get the same results: New-MailboxSearch "Amy mailbox hold" -SourceMailboxes "amya@contoso.com" -InPlaceHoldEnabled $True The same can be achieved using the In-place E-discovery and hold wizard in Exchange Admin Center as shown in the following screenshot:  The Query-based hold: Using this, you can specify keywords, date, message types, and recipient addresses, and only the one's specified in the query will be stored. This is useful if you don't want to enable all your mailboxes for indefinite hold. The Time-based hold: This will allow admins to hold items during a specific period. The duration is calculated from the date and time the item is received or created. 
The following example creates a Query-based and Time-based In-place hold for all the mailboxes that are part of the distribution group Group-Finance and hold every e-mail, meeting, or IM that contains the keywords Merger and Acquisition for 2 years: New-MailboxSearch "Acquisition-Merger" -SourceMailboxes Group-Finance -InPlaceHoldEnabled $True –ItemHoldPeriod 730 -SearchQuery '"Merger" and "Acquisition"' –MessageTypes Ema il,Meetings,IM The Recoverable items folder in each mailbox is used to store items using litigation and In-place hold. The subfolders used to store items are Deletions, Purges, Discovery holds, and versions. The versions folder is used to make a copy of the items before making changes using a process called as copy-on-write. This ensures that the original as well as modified copies of the items are stored in the versions folder. All these items are indexed by Exchange search and returned by the In-Place discovery search. The Recoverable items folder has its own storage quota, and it's different for Exchange 2013/2016 and Exchange online. For Exchange 2013 and 2016 deployments, the default value of RecoverableItemsWarningQuota and RecoverableItemsQuota are set to 20 GB and 30 GB respectively. These properties can be managed using the Set-MailboxDatabase and Set-Mailbox cmdlets. It is critical for administrators to monitor your quota messages logged in the Application event logs as users will not be able to permanently delete items, nor they will be able to empty the deleted items folder if the Recoverable Items Quota is reached. The copy-on-write feature will not work for obvious reasons. For the Exchange online, if a mailbox is placed on litigation hold, the size of the Recoverable items folder is set to 100 GB. If email forwarding is enabled for mailboxes, which are on hold and a message is forwarded without a copy to the original mailbox, Exchange 2013 will not capture that message. However, if the mailbox is on Exchange 2016 or Exchange online, and the message that is forwarded meets the hold criteria for the mailbox, a copy of the message will be saved in the Recoverable items folder and can be searched using the E-Discovery search later on. Retrieving and exporting Emails for Auditing using In-Place E-discovery Now, we have seen how to place mailboxes on hold. In this topic, you will learn how to search and retrieve mailbox items using the E-discovery search in Exchange 2013, 2016 and Exchange online. The In-place eDiscovery and hold wizard in Exchange Admin Center allows authorized users to search the content based on sender, recipient, keywords, start, and end dates. The administrators can then take the actions such as estimating, previewing, copying, and exporting search results. The following screenshot shows an example of a search result: Search starting Exchange 2013 uses Microsoft Search Foundation with better indexing and querying functionalities and performances. As the same search foundation is used with SharePoint and other office products, the e-discovery search can now be performed from both Exchange and SharePoint environments with the same results. The query language used by In-Place eDiscovery is Keyword Query Language (KQL), which you will learn in the next section. 
The following figure shows how to use the search query using KQL syntax and time range fields: You can also specify the message types to be returned in the search results as shown in the following screenshot: Once you have estimated the search items, you can then preview and export the items to a PST file or a discovery mailbox as shown in the following screenshot: Let's see how to use the same query in PowerShell using New-MailboxSearch cmdlet. Here, -SourceMailboxes will define mailboxes to be searched between 1st January 2014 to 31st December 2014 using the -StartDate and -EndDate parameters. The -SearchQuery parameter is used for KQL (Keyword Query Language) with words such as merger or acquisition. The results will be copied to the Legal-Mergers discovery mailbox specified using the -TargetMailbox parameter. Finally, status reports are sent to the group called legal@contoso.com when the search is completed and specified using the -StatusMailRecipient parameter: New-MailboxSearch "Acquisition-Merger" -SourceMailboxes bobk@contoso.com,susanb@contoso.com -SearchQuery '"Merger" OR "Acquisition"' -TargetMailbox Legal-Mergers -StartDate "01/01/2014" -EndDate "12/31/2014" -StatusMailRecipients legal@contoso.com Retrieving content using the KQL queries KQL consists of free text keywords including words, phrases, and property restrictions. The KQL queries are case-insensitive, but the operators are not and have to be specified in uppercase. A free text expression in a KQL query can be a word without any spaces or punctuation or a phrase enclosed in double quotation marks. The following examples will return the content that have the words Merger and Acquisition: merger acquisition merge* acquisition acquistion merg* It is important to note that KQL queries do not support suffix matching. It means you cannot use a wildcard (*) operator before a word or phrase in a KQL query. We can use Property restrictions in a KQL query in the following format. There should not be any space between the Property name, the Property operator, and the Property value: <Property Name><Property Operator><Property Value> For example, author "John Doe" will return content whose author is John Doe; filetype:xlsx will return Excel spreadsheets; and title:"KQL Query" will return results with the content KQL query in the title: You can combine these property restrictions to build complex KQL queries. For example, the following query will return the content authored by John Doe or Jane Doe. It can be used in the following formats. Both the formats will return the same results: author:"John Doe" author:"Jane Doe" author:"John Doe" OR author:"Jane Doe" If you want to search for all the word documents authored by Jane Doe, you will use either of the formats: author:"Jane Doe" filetype:docx author:"Jane Doe" AND filetype:docx Now let's take a look at the use of the Proximity operators called NEAR and ONEAR, which are used the search items in close proximity to each other. The NEAR operator matches the results where the search terms are in close proximity without preserving the order of the terms: <expression> NEAR(n=5) <expression> Here, n >= 0 with a default value of 8 indicates the maximum distance used between the terms; for example, merger NEAR acquistion. This will return results where the word merger is followed by acquisition and vice versa by up to eight other words. 
If you want to find content where the term acquisition is followed by the term merger for up to five terms but not the other way round, use the ONEAR operator that maintains the order of the terms specified in the query. The syntax is the same as the NEAR operator with a default value of n = 8: "acquisition" ONEAR(n=5) "merger" Searching and removing emails from the server There will be times when you as an Exchange administrator would get request to log or delete specific items from the user's mailboxes. The Search-Mailbox cmdlet helps you to search a mailbox or a group of mailboxes for a specific item, and it also allows you to delete them. You need to be part of the Mailbox Search and Mailbox Import Export RBAC roles to be able to search and delete messages from a user's mailbox. The following example searches John Doe's mailbox for emails with subject "Credit Card Statement" and logs the result in the Mailbox Search Log folder in the administrator's mailbox: Search-Mailbox -Identity "John Doe" -SearchQuery 'Subject:"Credit Card statement"' -TargetMailbox administrator -TargetFolder "MailboxSearchLog" -LogOnly -LogLevel Full The following example searches all mailboxes for attachments that have word "Virus" as the file name and logs it in Mail box Search log in the administrator's mailbox: Get-Mailbox -ResultSize unlimited | Search-Mailbox -SearchQuery attachment:virus* -TargetMailbox administrator -TargetFolder "MailboxSearchLog" -LogOnly -LogLevel Full You can use the search mailbox to delete content as well. For example, the following cmdlet will delete all emails with subject line "Test Email" from Amy Albert's mailbox: Search-Mailbox -Identity "Amy Albert" -SearchQuery 'Subject:"Test Email"' -DeleteContent If you want to keep a backup of Amy Albert's mailbox content to a "BackupMailbox" before permanently deleting them, use the following command: Search-Mailbox -Identity "Amy Albert" -SearchQuery 'Subject:"Test Email"' -TargetMailbox "BackupMailbox" -TargetFolder "amya-DeletedMessages" -LogLevel Full -DeleteContent Enable Auditing and understanding its usage We will discuss about the following two types of audit logs available in Exchange 2013: Administrator audit logs Mailbox audit logs Administrator audit logs Administrator audit logs are used to log when a cmdlet is executed from Exchange Management Shell or Exchange Admin Center except the cmdlets that are used to display information such as the Get-* and Search-* cmdlets. By default, Administrator audit log is enabled for new Exchange 2013/2016 installations. The following command will audit all cmdlets. Note that this is the default behavior. So, if this is a new installation of Exchange 2013 and 2016, you don't have to make any changes. You have to only run this if you have made some changes using the Set-AdminAuditLogConfig cmdlet earlier: Set-AdminAuditLogConfig -AdminAuditLogCmdlets * Now, let's say you have a group of delegated administrators managing your Exchange environment, and you want to ensure that all the management tasks are logged. For example, you want to audit cmdlets that make changes to the mailbox, distribution groups, and management roles. You will type the following cmdlet: Set-AdminAuditLogConfig -AdminAuditLogCmdlets *Mailbox,*Management*,*DistributionGroup* The previous command will audit the cmdlets along with the specified parameters. You can take this a step further by specifying which parameters you want to monitor. 
For example, you are trying to understand why there is an unequal distribution of mailboxes in your databases and incorrect entries in the Custom Attribute properties for your user mailboxes. You will run the following command that will only monitor these two properties: Set-AdminAuditLogConfig -AdminAuditLogParameters Database,Custom* By default, 90 days is the age of the audit logs and can be changed using the -AdminAuditLogAgeLimit parameter. The following command sets the audit login age to 2 years: Set-AdminAuditLogConfig -AdminAuditLogAgeLimit 730.00:00:00 By default, the cmdlet with a Test verb is not logged as it generates lot of data. But, if you are troubleshooting an issue and want to keep a record of it for a later review, you can enable them using this: Set-AdminAuditLogConfig -TestCmdletLoggingEnabled $True Disabling and enabling to view the admin audit log settings can be done using the following commands: Set-AdminAuditLogConfig -AdminAuditLogEnabled $False Set-AdminAuditLogConfig -AdminAuditLogEnabled $True Get-AdminAuditLogConfig Once Auditing is enabled, you can search the audit logs using the Search-AdminAuditLog and New-AdminAuditLogsearch cmdlets. The following example will search the logs for the Set-Mailbox cmdlets with the following parameters from 1st January 2014 to 1st December 2014 for users—Holly Holt, Susan Burk, and John Doe: Search-AdminAuditLog -Cmdlets Set-Mailbox -Parameters ProhibitSendQuota,ProhibitSendReceiveQuota,IssueWarningQuota -StartDate 01/01/2014 -EndDate 12/01/2014 -UserIds hollyh,susanb,johnd This command will search for any changes made for Amy Albert's mailbox configuration from 1st July to 1st October 2015: Search-AdminAuditLog -StartDate 07/01/2015 -EndDate 10/01/2015 -ObjectID contoso.com/Users/amya This cmdlet is similar to the previous cmdlet with one difference that it uses the parameter called -StatusMailRecipients to send email with the subject line a called "Mailbox Properties Changes" to amya@contoso.com: New-AdminAuditLogSearch -Cmdlets Set-Mailbox -Parameters ProhibitSendQuota, ProhibitSendReceiveQuota, IssueWarningQuota, MaxSendSize, MaxReceiveSize -StartDate 08/01/2015 -EndDate 10/01/2015 -UserIds hollyh,susanb,johnd -StatusMailRecipients amya@contoso.com -Name "Mailbox Properties changes" Mailbox audit logs Mailbox audit logging feature in Exchange 2013 and 2016 allows you to log mailbox access by owners, delegates, and administrators. They are stored in Recoverable Items in the Audits subfolder. By default, the logs are retained for up to 90 days. You need to use Set-Mailbox with the AuditLogAgeLimit parameter to increase the retention period of the audit logs. The following command will enable mailbox audit logging for John Doe's mailbox, and the logs will be retained for 6 months: Set-Mailbox -Identity "John Doe" -AuditEnabled $true -AuditLogAgeLimit 180.00:00:00 The command disables audit logging for Holly Holt's mailbox: Set-Mailbox -Identity "Holly Holt" -AuditEnabled $false If you just want to log the SendAs and SendOnBehalf actions on Susan Burk's mailbox, type this: Set-Mailbox -Identity "Susan Burk" -AuditDelegate SendAs,SendOnBehalf -AuditEnabled $true The following command logs the Hard Delete action by the Mailbox owner for Amy Albert's mailbox: Set-Mailbox -Identity "Amy Albert" -AuditOwner HardDelete -AuditEnabled $true Now that we have enabled auditing, let's see how to search audit logs for the mailboxes using the Search-MailboxAuditLog cmdlet. 
The following example searches the audit logs for mailboxes of John Doe, Amy Albert, and Holly Holt for the actions performed by logon types called Admin and Delegate from 1st September to 1st October 2015. A maximum of 2000 results will be displayed as specified by the Result size parameter: Search-MailboxAuditLog -Mailboxes johnd,amya,hollyh -LogonTypes Admin,Delegate -StartDate 9/1/2015 -EndDate 10/1/2015 -ResultSize 2000 You can use pipelines and search the operation of Hard Delete in this example with the Where-Object cmdlet in Susan Burk's mailbox from 1st September to 17th September 2015: Search-MailboxAuditLog -Identity susanb -LogonTypes Owner -ShowDetails -StartDate 9/1/2015 -EndDate 9/17/2015 | Where-Object {$_.Operation -eq "HardDelete"} Once you have enabled the mailbox audit logging, you can also use Exchange Admin Center by navigating to compliance management, auditing tab and Run a non-owner mailbox access report.... The following screenshot shows the search criteria that you can use to search the mailboxes accessed by non-owners: Writing a basic script The Recoverable Items folder has its own storage quota and has Deletions, Versions, Purges, Audits, Discovery Holds, and Calendar Logging as subfolders. This script will loop through the mailboxes and export the size of these subfolders to a CSV file. The $Output is an empty array used later to store the output of the script. The $Mbx array stores the list of mailboxes. We then use Foreach to loop through the mailboxes in $Mbx. Note the usage of two if-else statements for the Audits and Discovery Holds section in the script, which are present to ensure that we don't get errors if the user is not enabled for Mailbox Auditing and In-Place holds respectively. We have created a new object to create a new instance of a PowerShell object and used the Add-Member cmdlet custom Properties to that object and store it in the $report variable for each mailbox in the list. The results are then added to the $Output array defined earlier. 
Finally, Export-CSV is used to export the output to the Recoverable Items subfolder called size.csv in the current working directory: $Output = @() Write-Host "Retrieving the List of mailboxes" $mbx = @(Get-Mailbox -Resultsize Unlimited) foreach ($Mailbox in $mbx) {     $Name = $Mailbox.Name     Write-Host "Checking $Name Mailbox"       $AuditsFoldersize = ($mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Audits"}).FolderSize     if ($AuditsFolderSize -ne $Null) {$AuditsFoldersize} else {$AuditsFoldersize = 0}     $DiscoveryHoldsFoldersize = ($mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "DiscoveryHolds"}).FolderSize     if ($DiscoveryHoldsFoldersize -ne $Null) {$DiscoveryHoldsFoldersize} else {$DiscoveryHoldsFoldersize = 0}     $PurgesFoldersize = ($mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Purges"}).FolderSize     $VersionsFoldersize = ($mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Versions"}).FolderSize     $report = New-Object PSObject     $report | Add-Member NoteProperty -Name "Name" -Value $Name     $report | Add-Member NoteProperty -Name "Audits Sub Folder Size" -Value $AuditsFoldersize     $report | Add-Member NoteProperty -Name "Deletions Sub Folder Size" -Value $DeletionsFoldersize     $report | Add-Member NoteProperty -Name "DiscoveryHolds Sub Folder Size" -Value $DiscoveryHoldsFoldersize     $report | Add-Member NoteProperty -Name "Purges Sub Folder Size" -Value $PurgesFoldersize     $report | Add-Member NoteProperty -Name "Versions Sub Folder Size" -Value $VersionsFoldersize     $Output += $report     Write-Host "$Name, $AuditsFoldersize, $DeletionsFoldersize, $DiscoveryHoldsFoldersize, $PurgesFoldersize, $VersionsFoldersize" }   Write-Host "Writing output to RecoverableItemssubfolderssize.csv"   $Output | Export-CSV RecoverableItemssubfolderssize.csv -NoTypeInformation Summary In this Article, you learned the use of various types of In-place holds and eDiscovery search. You also learned how they can help organizations meet their regulatory compliance requirements. You learned how to log admin actions and mailbox access by the Administrator audit and the mailbox logging functionality in the Exchange server 2013/2016 and Exchange online. The tools and cmdlets explained in this Article will help organizations retain content that is important for them and search and send it to appropriate parties at a later date for a review. Resources for Article:   Further resources on this subject: Exchange Server 2010 Windows PowerShell: Managing Mailboxes [article] Exchange Server 2010 Windows PowerShell: Working with Distribution Groups [article] Installing Microsoft Forefront UAG [article]

FPGA Mining

Packt
29 Jan 2016
6 min read
In this article by Albert Szmigielski, author of the book Bitcoin Essentials, we will take a look at mining with Field-Programmable Gate Arrays, or FPGAs. These are microprocessors that can be programmed for a specific purpose. In the case of bitcoin mining, they are configured to perform the SHA-256 hash function, which is used to mine bitcoins. FPGAs have a slight advantage over GPUs for mining. The period of FPGA mining of bitcoins was rather short (just under a year), as faster machines became available. The advent of ASIC technology for bitcoin mining compelled a lot of miners to make the move from FPGAs to ASICs. Nevertheless, FPGA mining is worth learning about. We will look at the following: Pros and cons of FPGA mining FPGA versus other hardware mining Best practices when mining with FPGAs Discussion of profitability (For more resources related to this topic, see here.) Pros and cons of FPGA mining Mining with an FPGA has its advantages and disadvantages. Let's examine these in order to better understand if and when it is appropriate to use FPGAs to mine bitcoins. As you may recall, mining started on CPUs, moved over to GPUs, and then people discovered that FPGAs could be used for mining as well. Pros of FPGA mining FPGA mining is the third step in mining hardware evolution. They are faster and more efficient than GPUs. In brief, mining bitcoins with FPGAs has the following advantages: FPGAs are faster than GPUs and CPUs FPGAs are more electricity-efficient per unit of hashing than CPUs or GPUs Cons of FPGA mining FPGAs are rather difficult to source and program. They are not usually sold in stores open to the public. We have not touched upon programming FPGAs to mine bitcoins as it is assumed that the reader has already acquired preprogrammed FPGAs. There are several good resources regarding FPGA programming on the Internet. Electricity costs are also an issue with FPGAs, although not as big as with GPUs. To summarize, mining bitcoins with FPGAs has the following disadvantages: Electricity costs Hardware costs Fierce competition with other miners Best practices when mining with FPGAs Let's look at the recommended things to do when mining with FPGAs. Mining is fun, and it could also be profitable if several factors are taken into account. Make sure that all your FPGAs have adequate cooling. Additional fans beyond what is provided by the manufacturer are always a good idea. Remove dust frequently, as a buildup of dust might have a detrimental effect on cooling efficiency, and therefore, mining speed. For your particular mining machine, look up all the optimization tweaks online in order to get all the hashing power possible out of the device. When setting up a mining operation for profit, keep in mind that electricity costs will be a large percentage of your overall costs. Seek a location with the lowest electricity rates. Think about cooling costs—perhaps it would be most beneficial to mine somewhere where the climate is cooler. When purchasing FPGAs, make sure you calculate hashes per dollar of hardware costs, and also hashes per unit of electricity used. In mining, electricity has the biggest cost after hardware, and electricity will exceed the cost of the hardware over time. Keep in mind that hardware costs fall over time, so purchasing your equipment in stages rather than all at once may be desirable. 
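As a quick illustration of the hashes-per-dollar and hashes-per-watt comparison suggested above, the short Python sketch below ranks a few of the boards listed in the table that follows. The speeds and wattages are taken from that table, while the prices are made-up placeholders that you should replace with the quotes you actually receive:

# Illustrative sketch only: rank candidate boards by MH/s per dollar and
# MH/s per watt, the two ratios mentioned above.
boards = {
    # name:                     (MH/s,   watts, assumed price in USD)
    "Icarus":                   (380,    19.2,  600),
    "ModMiner Quad":            (800,    40,    1000),
    "Butterflylabs Mini Rig":   (25200,  1250,  15000),
}

for name, (mhs, watts, price) in sorted(boards.items()):
    print("%-24s %8.2f MH/s per USD   %8.2f MH/s per watt"
          % (name, mhs / price, mhs / watts))

Whichever board gives you the most hashing power per dollar and per watt within your budget is usually the better purchase, electricity prices being equal.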
To summarize, keep in mind these factors when mining with FPGAs:

Adequate cooling
Optimization
Electricity costs
Hardware cost per MH/s

Benchmarks of mining speeds with different FPGAs
As we have mentioned before, the Bitcoin network hash rate is really high now. Mining even with FPGAs does not guarantee profits. This is due to the fact that during the mining process, you are competing with other miners to try to solve a block. If those other miners are running a larger percentage of the total mining power, you will be at a disadvantage, as they are more likely to solve a block. To compare the mining speed of a few FPGAs, look at the following table:

FPGA                        Mining speed (MH/s)    Power used (Watts)
Bitcoin Dominator X5000     100                    6.8
Icarus                      380                    19.2
Lancelot                    400                    26
ModMiner Quad               800                    40
Butterflylabs Mini Rig      25,200                 1250

Comparison of the mining speed of different FPGAs

FPGA versus GPU and CPU mining
FPGAs hash much faster than any other hardware. The fastest in our list reaches 25,000 MH/s. FPGAs are faster at performing hashing calculations than both CPUs and GPUs. They are also more efficient with respect to the use of electricity per hashing unit. The increase in hashing speed in FPGAs is a significant improvement over GPUs and even more so over CPUs.

The profitability of FPGA mining
In calculating your potential profit, keep in mind the following factors:

The cost of your FPGAs
Electricity costs to run the hardware
Cooling costs: FPGAs generate a decent amount of heat
Your percentage of the total network hashing power

To calculate the expected rewards from mining, we can do the following: First, calculate what percentage of total hashing power you command. To look up the network mining speed, execute the getmininginfo command in the console of the Bitcoin Core wallet. We will do our calculations with an FPGA that can hash at 1 GH/s. If the Bitcoin network hashes at 400,000 TH/s, then our proportion of the hashing power is 0.001/400,000 = 0.0000000025 of the total mining power. A bitcoin block is found, on average, every 10 minutes, which makes six per hour and 144 for a 24-hour period. The current reward per block is 25 BTC; therefore, in a day, we have 144 * 25 = 3600 BTC mined. If we command a certain percentage of the mining power, then on average we should earn that proportion of newly minted bitcoins. Multiplying our portion of the hashing power by the number of bitcoins mined daily, we arrive at the following: 0.0000000025 * 3600 BTC = 0.000009 BTC. As one can see, this is roughly $0.0025 USD for a 24-hour period. For up-to-date profitability information, you can look at https://www.multipool.us/, which publishes the average profitability per gigahash of mining power.

Summary
In this article, we explored FPGA mining. We examined the advantages and disadvantages of mining with FPGAs. It would serve any miner well to ponder them over when deciding to start mining or when thinking about improving current mining operations. We touched upon some best practices that we recommend keeping in mind. We also investigated the profitability of mining, given current conditions. A simple way of calculating your average earnings was also presented. We concluded that mining competition is fierce; therefore, any improvements you can make will serve you well.

Resources for Article:
Further resources on this subject:
Bitcoins – Pools and Mining [article]
Protecting Your Bitcoins [article]
E-commerce with MEAN [article]
Aquarium Monitor
In this article by Rodolfo Giometti, author of the book BeagleBone Home Automation Blueprints, we'll see how to build an aquarium monitor with which we can record all the environment data and control the life of our beloved fish from a web panel.

(For more resources related to this topic, see here.)

By using specific sensors, you'll learn how to monitor your aquarium with the ability to set alarms, log the aquarium data (water temperature), and perform actions such as cooling the water and feeding the fish. Simply speaking, we're going to implement a simple aquarium web monitor with real-time live video, some alarms in case of malfunctioning, and simple temperature data logging that allows us to monitor the system from a standard PC as well as from a smartphone or tablet, without any dedicated mobile app, just using the standard on-board browser.

The basics of functioning
This aquarium monitor is a good (even if very simple) example of how a web monitoring system should be implemented. It gives the reader some basic ideas about how a moderately complex system works and how we can interact with it in order to modify some system settings, display alarms in case of malfunctioning, and plot logged data on a PC, smartphone, or tablet. We have a periodic task that collects the data and then decides what to do. However, this time, we also have a user interface (the web panel) to manage, and a video stream to be redirected into a web page. Note also that in this project we need an additional power supply in order to power and manage 12V devices (such as a water pump, a lamp, and a cooler) with the BeagleBone Black, which is powered at 5V instead. Note that I'm not going to test this prototype on a real aquarium (since I don't have one), but on a normal tea cup filled with water! So you should consider this project for educational purposes only, even if, with some enhancements, it could be used on a real aquarium too!

Setting up the hardware
Regarding the hardware, there are at least two major issues to be pointed out. First of all, the power supply: we have two different voltages to deal with, because the water pump, the lamp, and the cooler are 12V powered, while the other devices are 5V/3.3V powered. So, we have to use a dual-output power source (or two different power sources) to power up our prototype. The second issue is using proper interface circuitry between the 12V world and the 5V one, in such a way as to not damage the BeagleBone Black or the other devices. Let me remark that a single GPIO of the BeagleBone Black can manage a voltage of 3.3V, so we need proper circuitry to manage a 12V device.

Setting up the 12V devices
As just stated, these devices need special attention and a dedicated 12V power line which, of course, cannot be the one we use to supply the BeagleBone Black. On my prototype, I used a 12V power supplier that can deliver a current of up to 1A. These characteristics should be enough to manage a single water pump, a lamp, and a cooler. After you get a proper power supplier, we can move on to the circuitry used to manage the 12V devices. Since all of them are simple on/off devices, we can use a relay to control them.
I used the device shown in the following image, where we have 8 relays:

The devices can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/5v-relays-array

The schematic to connect a single 12V device is shown in the following diagram:

Simply speaking, for each device we can turn the power supply on and off by toggling a specific GPIO of our BeagleBone Black. Note that each relay of the array board can be managed in direct or inverse logic by simply choosing the right connections, as reported on the board itself. That is, we can decide that putting the GPIO into a logic 0 state activates the relay and turns on the attached device, while putting the GPIO into a logic 1 state deactivates the relay and turns off the attached device. Using this logic, when the LED of a relay is turned on, the corresponding device is powered on. The BeagleBone Black's GPIOs and the pins of the relay array I used with the 12V devices are reported in the following table:

Pin              Relay array pin   12V device
P8.10 - GPIO66   3                 Lamp
P8.9 - GPIO69    2                 Cooler
P8.12 - GPIO68   1                 Pump
P9.1 - GND       GND
P9.5 - 5V        Vcc

To test the functionality of each GPIO line, we can use the following command to set up the GPIO as an output line in the high state:

root@arm:~# ./bin/gpio_set.sh 68 out 1

Note that the off state of the relay is 1, while the on state is 0. Then, we can turn the relay on and off by just writing 0 and 1 into the /sys/class/gpio/gpio68/value file, as follows:

root@arm:~# echo 0 > /sys/class/gpio/gpio68/value
root@arm:~# echo 1 > /sys/class/gpio/gpio68/value
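If you prefer to drive the relays from a script rather than from the shell, the same sysfs files can be written from Python. The following is a minimal sketch, assuming the GPIOs have already been exported and configured as outputs (for example, with the gpio_set.sh script shown above); the helper name set_device is just for illustration, and the pin numbers come from the preceding table:

# Minimal sketch: drive the relay array through the sysfs GPIO files.
# Assumes the GPIOs are already exported and configured as outputs.
import time

RELAY_GPIO = {"pump": 68, "cooler": 69, "lamp": 66}  # from the table above

def set_device(name, on):
    # The relays are wired in inverse logic: 0 turns the device on, 1 turns it off.
    value = "0" if on else "1"
    with open("/sys/class/gpio/gpio%d/value" % RELAY_GPIO[name], "w") as f:
        f.write(value)

if __name__ == "__main__":
    set_device("pump", True)   # power the pump on
    time.sleep(5)
    set_device("pump", False)  # and off again after 5 seconds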
Setting up the webcam
The webcam I'm using in my prototype is a normal UVC-based webcam, but you can safely use another one that is supported by the mjpg-streamer tool. See the mjpg-streamer project's home site for further information: http://sourceforge.net/projects/mjpg-streamer/.

Once the webcam is connected to the BeagleBone Black's USB host port, I get the following kernel activity:

usb 1-1: New USB device found, idVendor=045e, idProduct=0766
usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-1: Product: Microsoft LifeCam VX-800
usb 1-1: Manufacturer: Microsoft
...
uvcvideo 1-1:1.0: usb_probe_interface
uvcvideo 1-1:1.0: usb_probe_interface - got id
uvcvideo: Found UVC 1.00 device Microsoft LifeCam VX-800 (045e:0766)

Now, a new driver called uvcvideo is loaded into the kernel:

root@beaglebone:~# lsmod
Module                  Size  Used by
snd_usb_audio          95766  0
snd_hwdep               4818  1 snd_usb_audio
snd_usbmidi_lib        14457  1 snd_usb_audio
uvcvideo               53354  0
videobuf2_vmalloc       2418  1 uvcvideo
...

OK, now, to have a streaming server, we need to download the mjpg-streamer source code and compile it. We can do everything within the BeagleBone Black itself with the following command:

root@beaglebone:~# svn checkout svn://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

The svn command is part of the subversion package and can be installed by using the following command:

root@beaglebone:~# aptitude install subversion

After the download is finished, we can compile and install the code by using the following command line:

root@beaglebone:~# cd mjpg-streamer-code/mjpg-streamer/ && make && make install

If no errors are reported, you should now be able to execute the new command as follows, where we ask for the help message:

root@beaglebone:~# mjpg_streamer --help
-----------------------------------------------------------------------
Usage: mjpg_streamer
  -i | --input "<input-plugin.so> [parameters]"
  -o | --output "<output-plugin.so> [parameters]"
  [-h | --help ]........: display this help
  [-v | --version ].....: display version information
  [-b | --background]...: fork to the background, daemon mode
...

If you get an error like the following:

make[1]: Entering directory `/root/mjpg-streamer-code/mjpg-streamer/plugins/input_testpicture'
convert pictures/960x720_1.jpg -resize 640x480! pictures/640x480_1.jpg
/bin/sh: 1: convert: not found
make[1]: *** [pictures/640x480_1.jpg] Error 127

...it means that your system is missing the convert tool. You can install it by using the usual aptitude command:

root@beaglebone:~# aptitude install imagemagick

OK, now we are ready to test the webcam. Just run the following command line and then point a web browser to the address http://192.168.32.46:8080/?action=stream (where you should replace my IP address 192.168.32.46 with your BeagleBone Black's one) in order to get the live video from your webcam:

root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -w /var/www/"

Note that you can use the USB Ethernet address 192.168.7.2 too if you're not using the BeagleBone Black's Ethernet port. If everything works well, you should get something like what is shown in the following screenshot:

If you get an error as follows:

bind: Address already in use

...it means that some other process is holding port 8080, and it's most probably occupied by the Bone101 service. To disable it, you can use the following commands:

root@BeagleBone:~# systemctl stop bonescript.socket
root@BeagleBone:~# systemctl disable bonescript.socket
rm '/etc/systemd/system/sockets.target.wants/bonescript.socket'

Or, you can simply use another port, maybe port 8090, with the following command line:

root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -p 8090 -w /var/www/"
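As a quick programmatic check that the streamer is answering, you can also fetch a few bytes from the stream URL. The following is a minimal Python 3 sketch; the IP address and port are only examples and must be adjusted to your own setup:

# Minimal check: verify that mjpg_streamer is answering on the stream URL.
# Adjust STREAM_URL to your BeagleBone Black's IP address and port.
import urllib.request

STREAM_URL = "http://192.168.7.2:8080/?action=stream"

try:
    with urllib.request.urlopen(STREAM_URL, timeout=5) as response:
        # Reading any data at all means the server is up and streaming.
        chunk = response.read(1024)
        print("Streamer is up, received %d bytes" % len(chunk))
except OSError as err:
    print("Streamer not reachable:", err)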
Connecting the temperature sensor
The temperature sensor used in my prototype is the one shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/waterproof-temperature-sensor. The datasheet of this device is available at http://datasheets.maximintegrated.com/en/ds/DS18B20.pdf.

As you can see, it's a waterproof device, so we can safely put it into the water to read its temperature. This is a 1-wire device, and we can access it by using the w1-gpio driver, which emulates a 1-wire controller using a standard BeagleBone Black GPIO pin. The electrical connection must be done according to the following table, keeping in mind that the sensor has three colored connection cables:

Pin                Cable color
P9.4 - Vcc         Red
P8.11 - GPIO1_13   White
P9.2 - GND         Black

Interested readers can follow this URL for more information about how 1-wire works: http://en.wikipedia.org/wiki/1-Wire

Keep in mind that, since our 1-wire controller is implemented in software, we have to add a pull-up resistor of 4.7 kΩ between the red and white cables in order to make it work!

Once all the connections are in place, we can enable the 1-wire controller on the P8.11 pin of the BeagleBone Black's expansion connector. The following snippet shows the relevant code, where we enable the w1-gpio driver and assign the proper GPIO to it:

fragment@1 {
    target = <&ocp>;
    __overlay__ {
        #address-cells = <1>;
        #size-cells = <0>;
        status = "okay";

        /* Setup the pins */
        pinctrl-names = "default";
        pinctrl-0 = <&bb_w1_pins>;

        /* Define the new one-wire master as based on w1-gpio
         * and using GPIO1_13 */
        onewire@0 {
            compatible = "w1-gpio";
            gpios = <&gpio2 13 0>;
        };
    };
};

To enable it, we must use the dtc program to compile it as follows:

root@beaglebone:~# dtc -O dtb -o /lib/firmware/BB-W1-GPIO-00A0.dtbo -b 0 -@ BB-W1-GPIO-00A0.dts

Then, we have to load it into the kernel with the following command:

root@beaglebone:~# echo BB-W1-GPIO > /sys/devices/bone_capemgr.9/slots

If everything works well, we should see a new 1-wire device under the /sys/bus/w1/devices/ directory, as follows:

root@beaglebone:~# ls /sys/bus/w1/devices/
28-000004b541e9  w1_bus_master1

The new temperature sensor is represented by the directory named 28-000004b541e9. To read the current temperature, we can use the cat command on the w1_slave file as follows:

root@beaglebone:~# cat /sys/bus/w1/devices/28-000004b541e9/w1_slave
d8 01 00 04 1f ff 08 10 1c : crc=1c YES
d8 01 00 04 1f ff 08 10 1c t=29500

Note that your sensor has a different ID, so on your system you'll get a different path name, in the form /sys/bus/w1/devices/28-NNNNNNNNNNNN/w1_slave. In the preceding example, the current temperature is t=29500, which is expressed in millidegrees Celsius (m°C), so it's equivalent to 29.5°C. The reader can take a look at the book BeagleBone Essentials, published by Packt Publishing and written by the author of this book, for more information regarding the management of 1-wire devices on the BeagleBone Black.
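To log the temperature periodically from a program rather than from the command line, the same w1_slave file can be parsed in a few lines. The following minimal Python sketch assumes the sensor path shown above (remember that your sensor ID will differ) and a one-minute logging interval chosen only as an example:

# Minimal sketch: read the DS18B20 temperature from sysfs and convert it to Celsius.
# Replace the sensor ID with the one listed under /sys/bus/w1/devices/ on your board.
import time

SENSOR_FILE = "/sys/bus/w1/devices/28-000004b541e9/w1_slave"

def read_temperature():
    with open(SENSOR_FILE) as f:
        lines = f.read().splitlines()
    # The first line ends with "YES" when the CRC check passed.
    if not lines[0].endswith("YES"):
        raise IOError("CRC check failed, try again")
    # The second line ends with "t=<temperature in millidegrees Celsius>".
    millidegrees = int(lines[1].split("t=")[1])
    return millidegrees / 1000.0

if __name__ == "__main__":
    while True:
        print("Water temperature: %.1f C" % read_temperature())
        time.sleep(60)  # log one sample per minute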
Connecting the feeder
The fish feeder is a device that releases some feed by moving a motor. Its functioning is represented in the following diagram:

In the closed position, the motor is in the horizontal position, so the feed cannot fall down, while in the open position, the motor is in the vertical position, so the feed can fall down. I have no real fish feeder, but looking at the above functioning, we can simulate it by using the servo motor shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/nano-servo-motor. The datasheet of this device is available at http://hitecrcd.com/files/Servomanual.pdf.

This device can be controlled in position, and it can rotate by 90 degrees with a proper PWM signal as input. In fact, reading the datasheet, we discover that the servo can be managed by using a periodic square waveform with a period (T) of 20 ms and a high state time (t_high) between 0.9 ms and 2.1 ms, with 1.5 ms as (more or less) the center. So, we can consider the motor to be in the open position when t_high = 1 ms and in the closed position when t_high = 2 ms (of course, these values should be carefully calibrated once the feeder is really built!)

Let's connect the servo as described in the following table:

Pin           Cable color
P9.3 - Vcc    Red
P9.22 - PWM   Yellow
P9.1 - GND    Black

Interested readers can find more details about PWM at https://en.wikipedia.org/wiki/Pulse-width_modulation.

To test the connections, we have to enable one PWM generator of the BeagleBone Black. To respect the preceding connections, we need the one whose output line is on pin P9.22 of the expansion connectors. To do so, we can use the following commands:

root@beaglebone:~# echo am33xx_pwm > /sys/devices/bone_capemgr.9/slots
root@beaglebone:~# echo bone_pwm_P9_22 > /sys/devices/bone_capemgr.9/slots

Then, in the /sys/devices/ocp.3 directory, we should find a new entry related to the newly enabled PWM device, as follows:

root@beaglebone:~# ls -d /sys/devices/ocp.3/pwm_*
/sys/devices/ocp.3/pwm_test_P9_22.12

Looking at the /sys/devices/ocp.3/pwm_test_P9_22.12 directory, we see the files we can use to manage our new PWM device:

root@beaglebone:~# ls /sys/devices/ocp.3/pwm_test_P9_22.12/
driver  duty  modalias  period  polarity  power  run  subsystem  uevent

As we can deduce from the preceding file names, we have to properly set up the values in the files named polarity, period, and duty. For instance, the center position of the servo can be achieved by using the following commands:

root@beaglebone:~# echo 0 > /sys/devices/ocp.3/pwm_test_P9_22.12/polarity
root@beaglebone:~# echo 20000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/period
root@beaglebone:~# echo 1500000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

The polarity is set to 0 to invert it, while the values written into the other files are times expressed in nanoseconds: a period of 20 ms and a duty cycle of 1.5 ms, as requested by the datasheet.

Now, to move the gear fully clockwise, we can use the following command:

root@beaglebone:~# echo 2100000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

On the other hand, the following command moves it fully anticlockwise:

root@beaglebone:~# echo 900000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

So, by using the following command sequence, we can open and then close (with a delay of 1 second) the gate of the feeder:

echo 1000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty
sleep 1
echo 2000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

Note that by simply modifying the delay, we can control how much feed falls down each time the feeder is activated.
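The same open/close sequence can be wrapped in a small function, which is handy when the feeder has to be activated from the periodic monitoring task. Here is a minimal Python sketch assuming the PWM sysfs path shown above and the 1 ms/2 ms duty values used for the open and closed positions; the feed() helper name is only illustrative:

# Minimal sketch: activate the fish feeder through the PWM sysfs interface.
# Assumes the PWM generator on P9.22 has already been enabled as shown above.
import time

PWM_DIR = "/sys/devices/ocp.3/pwm_test_P9_22.12"
OPEN_DUTY = 1000000    # 1 ms high time -> open position
CLOSED_DUTY = 2000000  # 2 ms high time -> closed position

def pwm_write(name, value):
    with open("%s/%s" % (PWM_DIR, name), "w") as f:
        f.write(str(value))

def feed(seconds=1):
    """Open the feeder gate for the given time, then close it again."""
    pwm_write("polarity", 0)
    pwm_write("period", 20000000)  # 20 ms period, as required by the servo
    pwm_write("duty", OPEN_DUTY)
    time.sleep(seconds)            # a longer delay releases more feed
    pwm_write("duty", CLOSED_DUTY)

if __name__ == "__main__":
    feed(1)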
The water sensor
The water sensor I used is shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/water_sensor.

This is a really simple device that implements what is shown in the following schematic, where the resistor (R) has been added to limit the current when the water closes the circuit:

When a single drop of water touches two or more teeth of the comb in the schematic, the circuit is closed and the output voltage (Vout) drops from Vcc to 0V. So, if we wish to check for a water leakage from our aquarium, we can place the aquarium in a sort of saucer and put this device into it; if a leakage occurs, the water is collected by the saucer and the output voltage from the sensor moves from Vcc to GND. The GPIO connections used for this device are shown in the following table:

Pin              Cable color
P9.3 - 3.3V      Red
P8.16 - GPIO67   Yellow
P9.1 - GND       Black

To test the connections, we have to define GPIO67 as an input line with the following command:

root@beaglebone:~# ../bin/gpio_set.sh 67 in

Then, we can try to read the GPIO status while the sensor is in the water and when it is not, by using the following two commands:

root@beaglebone:~# cat /sys/class/gpio/gpio67/value
0
root@beaglebone:~# cat /sys/class/gpio/gpio67/value
1

The final picture
The following screenshot shows the prototype I built to implement this project and to test the software. As you can see, the aquarium has been replaced by a cup of water! Note that we have two external power supplies: the usual one at 5V for the BeagleBone Black, and another one with an output voltage of 12V for the other devices (you can see its connector in the upper right corner, to the right of the webcam).

Summary
In this article, we discovered how to interface our BeagleBone Black with several devices that use different power supply voltages, and how to manage a 1-wire device and a PWM one.

Resources for Article:
Further resources on this subject:
Building robots that can walk [article]
Getting Your Own Video and Feeds [article]
Home Security by BeagleBone [article]
Customizing and Automating Google Applications
In this article by the author, Ramalingam Ganapathy, of the book Learning Google Apps Script, we will see how to create new projects in Sheets and send an email with an inline image and attachments. You will also learn to create clickable buttons, a custom menu, and a sidebar.

(For more resources related to this topic, see here.)

Creating new projects in Sheets
Open any newly created Google spreadsheet (Sheets). You will see a number of menu items at the top of the window. Point your mouse to Tools and click on it. Then, click on Script editor as shown in the following screenshot:

A new browser tab or window with a new project selection dialog will open. Click on Blank Project or close the dialog. Now, you have created a new untitled project with one script file (Code.gs), which has one default empty function (myFunction). To rename the project, click on the project title (at the top left-hand side of the window), and a rename dialog will open. Enter your preferred project name, and then click on the OK button.

Creating clickable buttons
Open the script editor in a newly created or any existing Google sheet. Select cell B3 or any other cell. Click on Insert and then Drawing as shown in the following screenshot:

A drawing editor window will open. Click on the Textbox icon and click anywhere on the canvas area. Type Click Me. Resize the object so that it only encloses the text, as shown in the screenshot here:

Click on Save & Close to exit the drawing editor. Now, the Click Me image will be inserted at the top of the active cell (B3), as shown in the following screenshot:

You can drag this image anywhere around the spreadsheet. In Google Sheets, images are not anchored to a particular cell; they can be dragged or moved around. If you right-click on the image, a drop-down arrow will be visible at the top right corner of the image. Click on the Assign script menu item. A script assignment window will open as shown here:

Type "greeting" or any other name you like, but remember its name (so as to create a function with the same name in the next steps). Click on the OK button. Now, open the script editor in the same spreadsheet. When you open the script editor, the project selector dialog will open; close it or select a blank project. A default function called myFunction will be there in the editor. Delete everything in the editor and insert the following code:

function greeting() {
  Browser.msgBox("Greeting", "Hello World!", Browser.Buttons.OK);
}

Click on the save icon and enter a project name if asked. You have completed coding your greeting function. Activate the spreadsheet tab/window, and click on your Click Me button. An authorization window will open; click on Continue. In the successive Request for Permission window, click on the Allow button. As soon as you click on Allow and the permission dialog is dismissed, your greeting message box will open as shown here:

Click on OK to dismiss the message box. Whenever you click on your button, this message box will open.

Creating a custom menu
Can you execute the greeting function without the help of the button? Yes: in the script editor there is a Run menu, and if you click on Run and then greeting, the greeting function will be executed and the message box will open. However, creating a button for every function may not be feasible. Although you cannot alter or add items to the application's standard menus (except the Add-ons menu), such as File, Edit, and View, you can add a custom menu and its items.
For this task, create a new Google Docs document or open any existing document. Open the script editor and type these two functions:

function createMenu() {
  DocumentApp.getUi()
      .createMenu("PACKT")
      .addItem("Greeting", "greeting")
      .addToUi();
}

function greeting() {
  var ui = DocumentApp.getUi();
  ui.alert("Greeting", "Hello World!", ui.ButtonSet.OK);
}

In the first function, you use the DocumentApp class, invoke the getUi method, and consecutively invoke the createMenu, addItem, and addToUi methods by method chaining. The second function is familiar to you from the previous task, but this time it uses the DocumentApp class and its associated methods. Now, run the function called createMenu and flip to the document window/tab. You will notice a new menu item called PACKT added next to the Help menu. You can see the custom menu PACKT with an item Greeting as shown next; the item label Greeting is associated with the function called greeting:

The menu item called Greeting works the same way as the button created in the previous task. The drawback with this method of inserting a custom menu is that you need to run createMenu from within the script editor every time you want the menu to show up. Imagine how your user could use this greeting function if he/she doesn't know about GAS and the script editor? Remember that your user might not be a programmer like you. To enable your users to execute selected GAS functions, you should create a custom menu and make it visible as soon as the application is opened. To do so, rename the function called createMenu to onOpen; that's it.

Creating a sidebar
A sidebar is a static dialog box, and it is included on the right-hand side of the document editor window. To create a sidebar, type the following code in your editor:

function onOpen() {
  var htmlOutput = HtmlService
      .createHtmlOutput('<button onclick="alert(\'Hello World!\');">Click Me</button>')
      .setTitle('My Sidebar');
  DocumentApp.getUi()
      .showSidebar(htmlOutput);
}

In the preceding code, you use HtmlService, invoke its createHtmlOutput method, and consecutively invoke the setTitle method. To test this code, run the onOpen function or reload the document. The sidebar will open on the right-hand side of the document window as shown in the following screenshot. The sidebar layout size is fixed, which means you cannot change, alter, or resize it:

The button in the sidebar is an HTML element, not a GAS element, and if clicked, it opens the browser's alert box.

Sending an email with an inline image and attachments
To embed images, such as a logo, in your email message, you may use HTML code instead of plain text. Upload your image to Google Drive, then get and use the file ID in the code:

function sendEmail(){
  var file = SpreadsheetApp.getActiveSpreadsheet()
      .getAs(MimeType.PDF);
  var image = DriveApp.getFileById("[[image file's id in Drive ]]").getBlob();
  var to = "[[receiving email id]]";
  var message = '<p><img src="cid:logo" /> Embedding inline image example.</p>';

  MailApp.sendEmail(
    to,
    "Email with inline image and attachment",
    "",
    {
      htmlBody: message,
      inlineImages: {logo: image},
      attachments: [file]
    }
  );
}

Summary
In this article, you learned how to customize and automate Google applications with a few examples. Many more useful and interesting applications are described in the actual book.
Resources for Article:
Further resources on this subject:
How to Expand Your Knowledge [article]
Google Apps: Surfing the Web [article]
Developing Apps with the Google Speech Apis [article]