How-To Tutorials


Working with a Webcam and Pi Camera

Packt
09 Feb 2016
13 min read
In this article by Ashwin Pajankar and Arush Kakkar, the authors of the book Raspberry Pi By Example, we will learn how to use different types of cameras with our Pi. Let's take a look at the topics we will study and implement in this article:

- Working with a webcam
- Crontab
- Timelapse using a webcam
- Webcam video recording and playback
- Pi Camera and Pi NoIR comparison
- Timelapse using the Pi Camera
- The picamera module in Python

(For more resources related to this topic, see here.)

Working with webcams

USB webcams are a great way to capture images and videos. Raspberry Pi supports common USB webcams. To be on the safe side, here is a list of the webcams supported by the Pi: http://elinux.org/RPi_USB_Webcams. I am using a Logitech HD C310 USB webcam. You can purchase it online, and you can find the product details and specifications at http://www.logitech.com/en-in/product/hd-webcam-c310.

Attach your USB webcam to the Raspberry Pi through a USB port and run the lsusb command in the terminal. This command lists all the USB devices connected to the computer. The output should be similar to the following, depending on which port is used to connect the USB webcam:

    pi@raspberrypi ~/book/chapter04 $ lsusb
    Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
    Bus 001 Device 004: ID 148f:2070 Ralink Technology, Corp. RT2070 Wireless Adapter
    Bus 001 Device 007: ID 046d:081b Logitech, Inc. Webcam C310
    Bus 001 Device 006: ID 1c4f:0003 SiGma Micro HID controller
    Bus 001 Device 005: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory

Then, install the fswebcam utility by running the following command:

    sudo apt-get install fswebcam

fswebcam is a simple command-line utility that captures images with webcams on Linux computers. Once the installation is done, create a directory for the output images:

    mkdir /home/pi/book/output

Then, run the following command to capture an image:

    fswebcam -r 1280x960 --no-banner ~/book/output/camtest.jpg

This will capture an image with a resolution of 1280 x 960. You might want to try other resolutions for your learning. The --no-banner option disables the timestamp banner. The image will be saved with the filename mentioned. If you run this command multiple times with the same filename, the image file will be overwritten each time, so make sure that you change the filename if you want to keep previously captured images. The text output of the command should be similar to the following:

    --- Opening /dev/video0...
    Trying source module v4l2...
    /dev/video0 opened.
    No input was specified, using the first.
    --- Capturing frame...
    Corrupt JPEG data: 2 extraneous bytes before marker 0xd5
    Captured frame in 0.00 seconds.
    --- Processing captured image...
    Disabling banner.
    Writing JPEG image to '/home/pi/book/output/camtest.jpg'.

Crontab

cron is a time-based job scheduler in Unix-like operating systems. It is driven by a crontab (cron table) file, a configuration file that specifies shell commands to be run periodically on a given schedule. It is used to schedule commands or shell scripts to run periodically at a fixed time, date, or interval.
The syntax of a crontab entry for scheduling a command or script is as follows:

    1 2 3 4 5 /location/command

Here, the fields have the following meanings:

- 1: Minutes (0-59)
- 2: Hours (0-23)
- 3: Days (0-31)
- 4: Months [0-12 (1 for January)]
- 5: Days of the week [0-7 (7 or 0 for Sunday)]
- /location/command: The script or command to be scheduled

The crontab entry to run any script or command every minute is as follows:

    * * * * * /location/command 2>&1

In the next section, we will learn how to use crontab to schedule a script to capture images periodically in order to create a timelapse sequence. You can refer to this URL for more details on crontab: http://www.adminschoice.com/crontab-quick-reference.

Creating a timelapse sequence using fswebcam

Timelapse photography means capturing photographs at regular intervals and playing the images back at a higher frequency in time than that at which they were shot. For example, if you capture images at a frequency of one image per minute for 10 hours, you will get 600 images. If you combine all these images into a video at 30 images per second, you will get 10 hours of timelapse video compressed into 20 seconds. You can use your USB webcam with the Raspberry Pi to achieve this.

We already know how to use the Raspberry Pi with a webcam and the fswebcam utility to capture an image. The trick is to write a script that captures images with different names, add this script to crontab, and make it run at regular intervals. Begin by creating a directory for the captured images:

    mkdir /home/pi/book/output/timelapse

Open an editor of your choice, write the following code, and save it as timelapse.sh:

    #!/bin/bash
    DATE=$(date +"%Y-%m-%d_%H%M")
    fswebcam -r 1280x960 --no-banner /home/pi/book/output/timelapse/garden_$DATE.jpg

Make the script executable using:

    chmod +x timelapse.sh

This shell script captures an image and saves it with the current timestamp in its name. Thus, we get an image with a new filename every time, as the filename contains the timestamp. The second line in the script creates the timestamp that we're using in the filename. Run this script manually once, and make sure that the image is saved in the /home/pi/book/output/timelapse directory with the name garden_<timestamp>.jpg.

To run this script at regular intervals, we need to schedule it in crontab. The crontab entry to run our script every minute is as follows:

    * * * * * /home/pi/book/chapter04/timelapse.sh 2>&1

Open the crontab of the pi user with crontab -e. It will open the crontab with nano as the editor. Add the preceding line to the crontab, save it, and exit. Once you exit the crontab, it will show the following message:

    no crontab for pi - using an empty one
    crontab: installing new crontab

Our timelapse webcam setup is now live. If you want to change the image capture frequency, you have to change the crontab settings. To set it to every 5 minutes, change the entry to */5 * * * *. To set it to every 2 hours, use 0 */2 * * *. Make sure that your MicroSD card has enough free space to store all the images for the duration for which you need to keep your timelapse setup running.

Once you have captured all the images, the next step is to encode them into a fast-playing video, preferably at 20 to 30 frames per second. For this part, the mencoder utility is recommended. The following are the steps to create a timelapse video with mencoder on a Raspberry Pi or any Debian/Ubuntu machine:

1. Install mencoder using sudo apt-get install mencoder.
2. Navigate to the output directory by issuing:

    cd /home/pi/book/output/timelapse

3. Create a list of your timelapse sequence images using:

    ls garden_*.jpg > timelapse.txt

4. Use the following command to create the video:

    mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4:aspect=16/9:vbitrate=8000000 -vf scale=1280:960 -o timelapse.avi -mf type=jpeg:fps=30 mf://@timelapse.txt

This will create a video named timelapse.avi in the current directory from all the images listed in timelapse.txt, with a 30 fps frame rate. The command specifies the video codec, aspect ratio, bit rate, and scale. For more information, you can run man mencoder in the terminal. We will cover how to play a video in the next section.

Webcam video recording and playback

We can use a webcam to record live videos using avconv. Install avconv using sudo apt-get install libav-tools. Use the following command to record a video:

    avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi

It will show the following output on the screen:

    pi@raspberrypi ~ $ avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi
    avconv version 9.14-6:9.14-1rpi1rpi1, Copyright (c) 2000-2014 the Libav developers
    built on Jul 22 2014 15:08:12 with gcc 4.6 (Debian 4.6.3-14+rpi1)
    [video4linux2 @ 0x5d6720] The driver changed the time per frame from 1/25 to 2/15
    [video4linux2 @ 0x5d6720] Estimating duration from bitrate, this may be inaccurate
    Input #0, video4linux2, from '/dev/video0':
    Duration: N/A, start: 629.030244, bitrate: 147456 kb/s
    Stream #0.0: Video: rawvideo, yuyv422, 1280x960, 147456 kb/s, 1000k tbn, 7.50 tbc
    Output #0, avi, to '/home/pi/book/output/VideoStream.avi':
    Metadata:
    ISFT : Lavf54.20.4
    Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 25 tbn, 25 tbc
    Stream mapping:
    Stream #0:0 -> #0:0 (rawvideo -> mpeg4)
    Press ctrl-c to stop encoding
    frame= 182 fps= 7 q=31.0 Lsize= 802kB time=7.28 bitrate= 902.4kbits/s
    video:792kB audio:0kB global headers:0kB muxing overhead 1.249878%
    Received signal 2: terminating.

You can terminate the recording by pressing Ctrl + C.

We can play the video using omxplayer. It comes with the latest Raspbian, so there is no need to install it. To play the recorded file, use the following command:

    omxplayer ~/book/output/VideoStream.avi

It will play the video and display output similar to the following:

    pi@raspberrypi ~ $ omxplayer ~/book/output/VideoStream.avi
    Video codec omx-mpeg4 width 1280 height 960 profile 0 fps 25.000000
    Subtitle count: 0, state: off, index: 1, delay: 0
    V:PortSettingsChanged: 1280x960@25.00 interlace:0 deinterlace:0 anaglyph:0 par:1.00 layer:0
    have a nice day ;)

Try playing the timelapse and recorded videos using omxplayer.

Working with the Pi Camera and NoIR Camera Modules

These camera modules are specially manufactured for the Raspberry Pi and work with all the available models. You will need to connect the camera module to the CSI port, located behind the Ethernet port, and activate the camera using the raspi-config utility if you haven't already. You can find video instructions for connecting the camera module to the Raspberry Pi at http://www.raspberrypi.org/help/camera-module-setup/. This page lists the types of camera modules available: http://www.raspberrypi.org/products/. Two types of camera modules are available for the Pi.
These are the Pi Camera and the Pi NoIR Camera, and they can be found at https://www.raspberrypi.org/products/camera-module/ and https://www.raspberrypi.org/products/pi-noir-camera/, respectively. The following image shows the Pi Camera and Pi NoIR Camera boards side by side. The following image shows the Pi Camera board connected to the Pi. The following is an image of the Pi Camera board placed in the camera case.

The main difference between the Pi Camera and the Pi NoIR Camera is that the Pi Camera gives better results in good lighting conditions, whereas the Pi NoIR (NoIR stands for No Infrared) is used for low-light photography. To use the NoIR Camera in complete darkness, we need to flood the object to be photographed with infrared light.

This is a good time to take a look at the various enclosures for Raspberry Pi models. You can find various cases available online at https://www.adafruit.com/categories/289. An example of a Raspberry Pi case is as follows:

Using raspistill and raspivid

To capture images and videos using the Raspberry Pi camera module, we need to use the raspistill and raspivid utilities. To capture an image, run the following command:

    raspistill -o cam_module_pic.jpg

This will capture and save the image with the name cam_module_pic.jpg in the current directory. To capture a 20-second video with the camera module, run the following command:

    raspivid -o test.avi -t 20000

This will capture and save the video with the name test.avi in the current directory. Unlike fswebcam and avconv, raspistill and raspivid do not write anything to the console, so you need to check the current directory for the output. You can also run the echo $? command to check whether these commands executed successfully. We can also specify the complete path of the file to be saved in these commands, as shown in the following example:

    raspistill -o /home/pi/book/output/cam_module_pic.jpg

Just like fswebcam, raspistill can be used to record a timelapse sequence. In our timelapse shell script, replace the line that contains fswebcam with the appropriate raspistill command to capture the timelapse sequence, and use mencoder again to create the video. This is left as an exercise for the reader (a sketch of one possible raspistill script appears at the end of this article).

Now, let's take a look at images taken with the Pi Camera under different lighting conditions. The following is an image with normal lighting and the backlight. The following is an image with only the backlight. The following is an image with normal lighting and no backlight.

For NoIR Camera usage at night under low-light conditions, use an IR illuminator light for better results. You can get one online. A typical off-the-shelf LED IR illuminator suitable for our purpose will look like the one shown here:

Using picamera in Python with the Pi Camera module

picamera is a Python package that provides a programming interface to the Pi Camera module. The most recent version of Raspbian has picamera preinstalled. If you do not have it installed, you can install it using:

    sudo apt-get install python-picamera

The following program quickly demonstrates the basic usage of the picamera module to capture an image:

    import picamera
    import time

    with picamera.PiCamera() as cam:
        cam.resolution = (1024, 768)
        cam.start_preview()
        time.sleep(5)
        cam.capture('/home/pi/book/output/still.jpg')

We have to import the time and picamera modules first. cam.start_preview() will start the preview, and time.sleep(5) will wait for 5 seconds before cam.capture() captures and saves the image to the specified file. There is a built-in function in picamera for timelapse photography.
The following program demonstrates its usage:

    import picamera
    import time

    with picamera.PiCamera() as cam:
        cam.resolution = (1024, 768)
        cam.start_preview()
        time.sleep(3)
        for count, imagefile in enumerate(cam.capture_continuous('/home/pi/book/output/image{counter:02d}.jpg')):
            print 'Capturing and saving ' + imagefile
            time.sleep(1)
            if count == 10:
                break

In the preceding code, cam.capture_continuous() is used to capture the timelapse sequence using the Pi Camera module. Check out more examples and API references for the picamera module at http://picamera.readthedocs.org/.

The Pi Camera versus the webcam

Now, after using the webcam and the Pi Camera, it's a good time to understand the differences and the pros and cons of each. The Pi Camera board does not use a USB port and is directly interfaced to the Pi, so it provides better performance than a webcam in terms of frame rate and resolution, and we can use the picamera module in Python to work on images and videos directly. However, the Pi Camera cannot be used with any other computer. A webcam uses a USB port as its interface, and because of that, it can be used with any computer. However, its performance is lower than that of the Pi Camera in terms of frame rate and resolution.

Summary

In this article, we learned how to use a webcam and the Pi Camera. We also learned how to use utilities such as fswebcam, avconv, raspistill, raspivid, mencoder, and omxplayer. We covered how to use crontab. We used the Python picamera module to programmatically work with the Pi Camera board. Finally, we compared the Pi Camera and the webcam. We will be reusing all the code examples and concepts in some real-life projects soon.

Resources for Article:

Further resources on this subject:
- Introduction to the Raspberry Pi's Architecture and Setup [article]
- Raspberry Pi LED Blueprints [article]
- Hacking a Raspberry Pi project? Understand electronics first! [article]
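To round off the raspistill exercise mentioned earlier, here is one way the timelapse.sh script could be adapted for the Pi Camera module. This is a sketch rather than the book's own solution: the script name timelapse_picam.sh is hypothetical, the paths and garden_ prefix are carried over from the earlier examples, and the -w/-h values simply mirror the 1280 x 960 resolution used with fswebcam:

    #!/bin/bash
    # Capture one timelapse frame with the Pi Camera module (raspistill)
    # instead of the USB webcam (fswebcam).
    DATE=$(date +"%Y-%m-%d_%H%M")
    # -n disables the preview window, -w/-h set the resolution,
    # -o gives the timestamped output filename.
    raspistill -n -w 1280 -h 960 -o /home/pi/book/output/timelapse/garden_$DATE.jpg

It can then be scheduled from crontab exactly as before, for example every minute:

    * * * * * /home/pi/book/chapter04/timelapse_picam.sh 2>&1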

Elastic Load Balancing

Packt
09 Feb 2016
21 min read
In this article by Yohan Wadia, the author of the book AWS Administration – The Definitive Guide, we are going to continue where we last left off and introduce an amazing concept called Auto Scaling! AWS has been one of the first public cloud providers to offer this feature, and it is something that you must try out and use in your environments! This chapter will teach you the basics of Auto Scaling, its concepts and terminology, and how to create an auto scaled environment using AWS. It will also cover Amazon Elastic Load Balancers and how you can use them in conjunction with Auto Scaling to manage your applications more effectively! So, without wasting any more time, let's get started by understanding what Auto Scaling is and how it actually works.

(For more resources related to this topic, see here.)

An overview of Auto Scaling

We have been talking about AWS and the concept of dynamic scalability, a.k.a. elasticity, in general throughout this book; now is the best time to look at it in depth with the help of Auto Scaling! Auto Scaling basically enables you to scale your compute capacity (EC2 instances) either up or down, depending on the conditions you specify. These conditions could be as simple as a number that maintains the count of your EC2 instances at any given time, or complex conditions that measure the load and performance of your instances, such as CPU utilization, memory utilization, and so on. But a simple question that may arise here is: why do I even need Auto Scaling? Is it really that important? Let's look at a dummy application's load and performance graph to get a better understanding of things. Let's take a look at the following screenshot:

The graph on the left depicts the traditional approach that is usually taken to map an application's performance requirements to a fixed infrastructure capacity. To meet this application's unpredictable performance requirements, you would have to plan and procure additional hardware upfront, as depicted by the red line. And since there is no guaranteed way to plan for unpredictable workloads, you generally end up procuring more than you need. This is a standard approach employed by many businesses, and it doesn't come without its own set of problems. For example, the region highlighted in red is where most of the procured hardware capacity is idle and wasted, as the application simply does not have that high a requirement. There can also be cases where the procured hardware does not match the application's high performance requirements, as shown by the green region. All these issues, in turn, have an impact on your business, which can prove to be quite expensive.

That's where the elasticity of a cloud comes into play. Rather than procuring at the nth hour and ending up with wasted resources, you grow and shrink your resources dynamically as per your application's requirements, as depicted in the graph on the right. This not only helps you save on overall costs but also makes your application's management a lot easier and more efficient. And don't worry if your application does not have an unpredictable load pattern! Auto Scaling is designed to work with both predictable and unpredictable workloads, so no matter what application you may have, you can rest assured that the required compute capacity is always going to be made available when required.
Keeping that in mind, let us summarize some of the benefits that AWS Auto Scaling provides:

- Cost savings: By far the biggest advantage provided by Auto Scaling; you gain a lot of control over the deployment of your instances, as well as over costs, by launching instances only when they are needed and terminating them when they aren't required.
- Ease of use: AWS provides a variety of tools with which you can create and manage your Auto Scaling, such as the AWS CLI and the EC2 Management Dashboard. Auto Scaling can be programmatically created and managed via a simple and easy-to-use web service API as well.
- Scheduled scaling actions: Apart from scaling instances as per a given policy, you can additionally schedule scaling actions to be executed in the future. This type of scaling comes in handy when your application's workload patterns are predictable and well known in advance.
- Geographic redundancy and scalability: AWS Auto Scaling enables you to scale, distribute, and load balance your application automatically across multiple Availability Zones within a given region.
- Easier maintenance and fault tolerance: AWS Auto Scaling replaces unhealthy instances automatically based on predefined alarms and thresholds.

With these basics in mind, let us understand how Auto Scaling actually works in AWS.

Auto Scaling components

To get started with Auto Scaling on AWS, you will be required to work with three primary components, each described briefly as follows.

Auto Scaling group

An Auto Scaling Group is a core component of the Auto Scaling service. It is basically a logical grouping of instances that share some common scaling characteristics. For example, a web application can contain a set of web server instances that form one Auto Scaling Group and another set of application server instances that become part of another Auto Scaling Group, and so on. Each group has its own set of criteria that includes the minimum and maximum number of instances that the group should have, along with the desired number of instances that the group must have at all times.

Note: The desired number of instances is an optional field in an Auto Scaling Group. If the desired capacity value is not specified, the Auto Scaling Group will use the minimum number of instances as the desired value instead.

Auto Scaling Groups are also responsible for performing periodic health checks on the instances contained within them. An instance with degraded health is immediately swapped out and replaced by a new one by the Auto Scaling Group, thus ensuring that each of the instances within the group works at optimum levels.

Launch configurations

A Launch Configuration is a set of blueprint statements that the Auto Scaling Group uses to launch instances. You can create a single Launch Configuration and use it with multiple Auto Scaling Groups; however, you can only associate one Launch Configuration with a single Auto Scaling Group at a time. What does a Launch Configuration contain? To start with, it contains the AMI ID that Auto Scaling uses to launch the instances in the Auto Scaling Group. It also contains additional information about your instances, such as the instance type, the security group it has to be associated with, block device mappings, key pairs, and so on. An important thing to note here is that once you create a Launch Configuration, there is no way you can edit it again.
The only way to make changes to a Launch Configuration is by creating a new one in its place and associating that with the Auto Scaling Group.

Scaling plans

With your Launch Configuration created, the final step is to create one or more Scaling Plans. Scaling Plans describe how the Auto Scaling Group should actually scale. There are three scaling mechanisms you can use with your Auto Scaling Groups, each described as follows:

- Manual scaling: Manual scaling is by far the simplest way of scaling your resources. All you need to do is specify a new desired number of instances, or change the minimum or maximum number of instances in an Auto Scaling Group, and the rest is taken care of by the Auto Scaling service itself.
- Scheduled scaling: Scheduled scaling is really helpful when it comes to scaling resources based on a particular time and date. This method of scaling is useful when the application's load patterns are highly predictable and you therefore know exactly when to scale up or down. For example, an application that processes a company's payroll cycle is usually load intensive towards the end of each month, so you can schedule the scaling requirements accordingly.
- Dynamic scaling: Dynamic scaling, or scaling on demand, is used when the predictability of your application's performance is unknown. With dynamic scaling, you generally provide a set of scaling policies using some criteria, for example: scale the instances in my Auto Scaling Group by 10 when the average CPU utilization exceeds 75 percent for a period of 5 minutes. Sounds familiar, right? That's because these dynamic scaling policies rely on Amazon CloudWatch to trigger scaling events. CloudWatch monitors the policy conditions and triggers the auto scaling events when certain thresholds are breached. In either case, you will require a minimum of two such scaling policies: one for scaling in (terminating instances) and one for scaling out (launching instances).

Before we go ahead and create our first Auto Scaling activity, we need to understand one additional AWS service that will help us balance and distribute the incoming traffic across our auto scaled EC2 instances. Enter the Elastic Load Balancer!

Introducing the Elastic Load Balancer

Elastic Load Balancer, or ELB, is a web service that allows you to automatically distribute incoming traffic across a fleet of EC2 instances. In simpler terms, an ELB acts as a single point of contact between your clients and the EC2 instances that are servicing them. The clients query your application via the ELB; thus, you can easily add and remove the underlying EC2 instances without having to worry about any of the traffic routing or load distribution. It is all taken care of by the ELB itself!

Coupled with Auto Scaling, ELB provides you with a highly resilient and fault-tolerant environment to host your applications. While the Auto Scaling service automatically removes any unhealthy EC2 instances from its group, the ELB automatically reroutes the traffic to some other healthy instance. Once a new healthy instance is launched by the Auto Scaling service, the ELB will once again route traffic through it and balance out the application load as well. But the work of the ELB doesn't stop there! An ELB can also be used to safeguard and secure your instances by enforcing encryption and by utilizing only HTTPS and SSL connections. Keeping these points in mind, let us look at how an ELB actually works.
To begin with, when you create an ELB in a particular AZ, you are actually spinning up one or more ELB nodes. Don't worry, you cannot physically see these nodes or perform many actions on them; they are completely managed and looked after by AWS itself. A node is responsible for forwarding the incoming traffic to the healthy instances present in that particular AZ. Now here's the fun part! If you configure the ELB to work across multiple AZs, and one entire AZ goes down or the instances in that particular AZ become unhealthy for some reason, then the ELB will automatically route traffic to the healthy instances present in the second AZ.

How does it do the routing? The ELB by default is provided with a public DNS name, something similar to MyELB-123456789.region.elb.amazonaws.com. The clients send all their requests to this particular public DNS name. The AWS DNS servers then resolve this public DNS name to the public IP addresses of the ELB nodes. Each of the nodes has one or more listeners configured on it, which constantly check for any incoming connections. Listeners are nothing but processes configured with a combination of a protocol, for example HTTP, and a port, for example 80. The ELB node that receives a particular request from the client then routes the traffic to a healthy instance using a particular routing algorithm. If the listener was configured with the HTTP or HTTPS protocol, then the preferred routing algorithm is the least outstanding requests algorithm.

Note: If you configured your ELB with a TCP listener, then the preferred routing algorithm is round robin.

Confused? Well don't be, as most of these things are handled internally by the ELB itself. You don't have to configure the ELB nodes or the routing tables. All you need to do is set up the listeners in your ELB and point all client requests to the ELB's public DNS name, that's it! Keeping these basics in mind, let us go ahead and create our very first ELB!

Creating your first Elastic Load Balancer

Creating and setting up an ELB is a fairly easy and straightforward process, provided you have planned and defined your Elastic Load Balancer's role from the start. The current version of ELB supports HTTP, HTTPS, TCP, as well as SSL connection protocols; however, for the sake of simplicity, we will be creating a simple ELB for balancing HTTP traffic only. I'll be using the same VPC environment that we have been developing since Chapter 5, Building your Own Private Clouds using Amazon VPC; however, you can easily substitute your own infrastructure in its place.

To access the ELB Dashboard, you will have to first access the EC2 Management Console. Next, from the navigation pane, select the Load Balancers option, as shown in the following screenshot. This will bring up the ELB Dashboard, using which you can create and associate your ELBs. An important point to note here is that although ELBs are created using this particular portal, you can use them for both your EC2 and VPC environments; there is no separate portal for creating ELBs in a VPC environment. Since this is our first ELB, let us go ahead and select the Create Load Balancer option. This will bring up a seven-step wizard using which you can create and customize your ELBs.

Step 1 – Defining Load Balancer

To begin with, provide a suitable name for your ELB in the Load Balancer name field. In this case, I have opted to stick to my naming convention and named the ELB US-WEST-PROD-LB-01.
Next up, select the VPC in which you wish to deploy your ELB. Again, I have gone ahead and selected the US-WEST-PROD-1 (192.168.0.0/16) VPC that we created in Chapter 5, Building your Own Private Clouds using Amazon VPC. You can alternatively select your own VPC environment, or even a standalone EC2 environment if one is available. Do not check the Create an internal load balancer option, as in this scenario we are creating an Internet-facing ELB for our web server instances.

There are two types of ELBs that you can create and use based on your requirements. The first is an Internet-facing load balancer, which is used to balance client requests that are inbound from the Internet. Ideally, such Internet-facing load balancers connect to the public subnets of a VPC. Similarly, you also have something called an internal load balancer, which connects and routes traffic to your private subnets. You can use a combination of these depending on your application's requirements and architecture; for example, you can have one Internet-facing ELB as your application's main entry point and an internal ELB to route traffic between your public and private subnets. However, for simplicity, let us create an Internet-facing ELB for now.

With these basic settings done, we now provide our ELB's listeners. A listener is made up of two parts: a protocol and port number for your frontend connection (between your client and the ELB), and a protocol and port number for the backend connection (between the ELB and the EC2 instances). In the Listener Configuration section, select HTTP from the Load Balancer Protocol drop-down list and provide the port number 80 in the Load Balancer Port field, as shown in the following screenshot. Provide the same protocol and port number for the Instance Protocol and Instance Port fields as well.

What does this mean? This listener is now configured to listen on the ELB's external port (Load Balancer Port) 80 for any client requests. Once it receives a request, it will forward it to the underlying EC2 instances using the Instance Port, which in this case is port 80 as well. There is no rule of thumb that both port values must match; in fact, it is actually a good practice to keep them different. Although your ELB can listen on port 80 for client requests, it can use any port within the range of 1-65,535 for forwarding the request to the instances. You can optionally add additional listeners to your ELB, such as a listener for the HTTPS protocol running on port 443; however, that is something I will leave for you to do later.

The final configuration item left in step 1 is selecting the subnets to be associated with your new load balancer. In my case, I have gone ahead and created a set of subnets, one in each of two different AZs, so as to mimic a high-availability scenario. Select the particular subnets and add them to your ELB by selecting the adjoining + sign. In my case, I have selected two subnets, both belonging to the web server instances but present in two different AZs.

Note: You can select a single subnet as well; however, it is highly recommended that you go for a highly available architecture, as described earlier.

Once your subnets are added, click on Next: Assign Security Groups to continue to step 2.

Step 2 – Assign Security Groups

Step 2 is where we get to assign our ELB a Security Group.
Now here's a catch: you will not be prompted for a Security Group if you are using an EC2-Classic environment for your ELB. This Security Group is only necessary for VPC environments and will basically allow the port you designated for inbound traffic to pass through. In this case, I have created a new dedicated Security Group for the ELB. Provide a suitable Security group name as well as a Description, as shown in the preceding screenshot. The new security group already contains a rule that allows traffic to the port that you configured your load balancer to use, in my case port 80. Leave the rule at its default value and click on Next: Configure Security Settings to continue.

Step 3 – Configure Security Settings

This is an optional page that basically allows you to secure your ELB by using either the HTTPS or the SSL protocol for your frontend connection. But since we have opted for a simple HTTP-based ELB, we can ignore this page for now. Click on Next: Configure Health Check to proceed to the next step.

Step 4 – Configure Health Check

Health checks are a very important part of an ELB's configuration, and hence you have to be extra cautious when setting them up. What are health checks? To put it in simple terms, these are basic tests that the ELB conducts to ensure that your underlying EC2 instances are healthy and running optimally. These tests include simple pings, connection attempts, or send requests. If the ELB senses that an EC2 instance is in an unhealthy state, it immediately changes its health check status to OutOfService. Once the instance is marked as OutOfService, the ELB no longer routes any traffic to it. The ELB will only start sending traffic back to the instance if its health check state changes to InService again.

To configure the health checks for your ELB, fill in the following information as described here:

- Ping Protocol: This field indicates which protocol the ELB should use to connect to your EC2 instances. You can use the TCP, HTTP, HTTPS, or SSL options; however, for simplicity, I have selected the HTTP protocol here.
- Ping Port: This field indicates the port that the ELB should use to connect to the instances. You can supply any port value from the range 1 to 65,535; however, since we are using the HTTP protocol, I have opted to stick with the default value of port 80. This port value is really essential, as the ELB will periodically ping the EC2 instances on this port number. If any instance does not reply back in a timely fashion, that instance will be deemed unhealthy by the ELB.
- Ping Path: This value is usually used for the HTTP and HTTPS protocols. The ELB sends a simple GET request to the EC2 instances based on the Ping Port and Ping Path. If the ELB receives a response other than an "OK", then that particular instance is deemed unhealthy by the ELB and it will no longer route any traffic to it. Ping Paths are generally set to a forward slash "/", which indicates the default home page of a web server; however, you can also use a /index.html or /default.html value as you see fit. In my case, I have provided the /index.php value, as my dummy web application is actually a PHP app.

Besides the ping checks, there are also a few advanced configuration details that you can set based on your application's health check needs:

- Response Time: This is the time the ELB has to wait in order to receive a response. The default value is 5 seconds, with a maximum value of up to 60 seconds.
Let's take a look at the following screenshot:

- Health Check Interval: This field indicates the amount of time (in seconds) the ELB waits between health checks of an individual EC2 instance. The default value is 30 seconds; however, you can specify a maximum value of 300 seconds as well.
- Unhealthy Threshold: This field indicates the number of consecutive failed health checks the ELB must observe before declaring an instance unhealthy. The default value is 2, with a maximum threshold value of 10.
- Healthy Threshold: This field indicates the number of consecutive successful health checks the ELB must observe before declaring an instance healthy. The default value is 2, with a maximum threshold value of 10.

Once you have provided your values, go ahead and select the Next: Add EC2 Instances option.

Step 5 – Add EC2 Instances

In this section of the wizard, you can select any running instances from your subnets to be added and registered with the ELB. But since we are setting up this particular ELB for use with Auto Scaling, we will leave this section for now. Click on Next: Add Tags to proceed with the wizard.

Step 6 – Add Tags

We already know the importance of tagging our AWS resources, so go ahead and provide a suitable tag for categorizing and identifying your ELB. Note that you can always add, edit, and remove tags at a later time as well, using the ELB Dashboard. With the tags all set up, click on Review and Create.

Step 7 – Review and Create

The final step of the ELB creation wizard is where we simply review our ELB's settings, including the health checks, EC2 instances, tags, and so on. Once reviewed, click on Create to begin your ELB's creation and configuration. The ELB takes a few seconds to get created, but once it's ready, you can view and manage it just like any other AWS resource using the ELB Dashboard, as shown in the following screenshot:

Select the newly created ELB and view its details in the Description tab. Make a note of the ELB's public DNS Name as well. You can optionally view the Status and the ELB Scheme (whether Internet-facing or internal) using the Description tab. You can also view the ELB's health checks as well as the listeners configured with your ELB.

Before we proceed with the next section of this chapter, here are a few important pointers to keep in mind when working with ELB. Firstly, the configuration that we performed on our ELB is very basic and will help you get through the essentials; however, ELB also provides additional advanced configuration options, such as cross-zone load balancing, proxy protocol, and sticky sessions, which can all be configured using the ELB Dashboard. To learn more about these advanced settings, refer to http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-configure-load-balancer.html. The second thing worth mentioning is the ELB's cost. Although it is free (terms and conditions apply) to use under the Free Tier eligibility, ELBs are charged approximately $0.025 per hour of use. There is a nominal data transfer charge as well, which is approximately $0.008 per GB of data processed.

Summary

I really hope that you have learned as much as possible about Amazon ELB. We talked about the importance of Auto Scaling and how it proves to be super beneficial when compared to the traditional mode of scaling infrastructure. We then learnt a bit about AWS Auto Scaling and its core components.
Next, we learnt about a new service offering called Elastic Load Balancers and saw how easy it is to deploy one for our own use. A command-line sketch of the same ELB setup appears after the resource list below.

Resources for Article:

Further resources on this subject:
- Achieving High-Availability on AWS Cloud [article]
- Amazon Web Services [article]
- Patterns for Data Processing [article]
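The console wizard above is the approach this chapter follows, but the same classic ELB can also be created from the AWS CLI. The following is a rough sketch rather than part of the original walkthrough: the subnet and security group IDs are placeholders, and the name, listener, and health check values simply mirror the ones chosen in the wizard steps:

    # Step 1 equivalent: an Internet-facing classic ELB with an HTTP:80 listener,
    # spread across two subnets (one per AZ) and using the dedicated security group.
    aws elb create-load-balancer \
        --load-balancer-name US-WEST-PROD-LB-01 \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --subnets subnet-aaaa1111 subnet-bbbb2222 \
        --security-groups sg-0123abcd

    # Step 4 equivalent: health check against /index.php on port 80.
    aws elb configure-health-check \
        --load-balancer-name US-WEST-PROD-LB-01 \
        --health-check Target=HTTP:80/index.php,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

The create-load-balancer call prints the ELB's public DNS name, which is the value clients should be pointed at, just as in the console flow.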

Running Simpletest and PHPUnit

Packt
09 Feb 2016
4 min read
In this article by Matt Glaman, the author of the book Drupal 8 Development Cookbook, we will kick off with an introduction to getting a Drupal 8 site installed and running. (For more resources related to this topic, see here.)

Running Simpletest and PHPUnit

Drupal 8 ships with two testing suites. Previously, Drupal only supported Simpletest; now there are PHPUnit tests as well. In the official change record, PHPUnit was added to provide testing without requiring a full Drupal bootstrap, which occurs with each Simpletest test. Read the change record here: https://www.drupal.org/node/2012184.

Getting ready

Currently, core comes with Composer dependencies prepackaged, and no extra steps need to be taken to run PHPUnit. This article will demonstrate how to run tests the same way that the QA testbot on Drupal.org does. The process of managing Composer dependencies may change, but it is currently postponed due to Drupal.org's testing and packaging infrastructure. Read more here: https://www.drupal.org/node/1475510.

How to do it…

First, enable the Simpletest module. Even though you might only want to run PHPUnit, this is a soft dependency for running the test runner script.

Open a command-line terminal, navigate to your Drupal installation directory, and run the following to execute all available PHPUnit tests:

    php core/scripts/run-tests.sh PHPUnit

Running Simpletest tests requires executing the same script; however, instead of passing PHPUnit as the argument, you must define the URL option and the tests option:

    php core/scripts/run-tests.sh --url http://localhost --all

Review the test output!

How it works…

The run-tests.sh script has shipped with Drupal since 2008, then named run-functional-tests.php. The command interacts with the test suites in Drupal to run all or specific tests and sets up other configuration items. We will highlight some of the useful options below:

- --help: This displays the items covered in the following bullets
- --list: This displays the available test groups that can be run
- --url: This is required unless the Drupal site is accessible through http://localhost:80
- --sqlite: This allows you to run Simpletest without having Drupal installed
- --concurrency: This allows you to define how many tests run in parallel

There's more…

Is run-tests a shell script?

run-tests.sh isn't actually a shell script. It is a PHP script, which is why you must execute it with PHP. In fact, within core/scripts each file is a PHP script meant to be executed from the command line. These scripts are not intended to be run through a web server, which is one of the reasons for the .sh extension. There are issues with discovering PHP across platforms that prevent providing a shebang line to allow executing the file as a normal bash or bat script. For more info, view this Drupal.org issue: https://www.drupal.org/node/655178.

Running Simpletest without Drupal installed

With Drupal 8, Simpletest can be run off SQLite and no longer requires an installed database. This can be accomplished by passing the sqlite and dburl options to the run-tests.sh script. This requires the PHP SQLite extension to be installed. Here is an example adapted from the DrupalCI test runner for Drupal core:

    php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --die-on-fail --dburl sqlite://tmp/.ht.sqlite --all

Combined with the built-in PHP web server for debugging, you can run Simpletest without a full-fledged environment.
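For example, a minimal sketch of that combination might look like the following. This is not from the recipe itself: it assumes you run it from the Drupal root, that port 8888 is free (the port number is arbitrary), and that the SQLite paths match the ones used above.

    # Serve the code base with PHP's built-in web server (no Apache/nginx needed).
    php -S localhost:8888 &

    # Point the test runner at that server and at SQLite instead of an installed database.
    php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --dburl sqlite://tmp/.ht.sqlite \
        --url http://localhost:8888 --all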
Running specific tests

Each example thus far has used the all option to run every test available. There are various ways to run specific tests:

- --module: This allows you to run all the tests of a specific module
- --class: This runs a specific class, identified by its full namespace path
- --file: This runs tests from a specified file
- --directory: This runs tests within a specified directory

Previously in Drupal, tests were grouped inside module.test files, which is where the file option derives from. Drupal 8 utilizes the PSR-4 autoloading method and requires one class per file.

DrupalCI

With Drupal 8 came a new initiative to upgrade the testing infrastructure on Drupal.org. The outcome was DrupalCI. DrupalCI is open source and can be downloaded and run locally. The project page for DrupalCI is https://www.drupal.org/project/drupalci. The testbot utilizes Docker and can be downloaded locally to run tests. The project ships with a Vagrant file to allow it to be run within a virtual machine or locally. Learn more on the testbot's project page: https://www.drupal.org/project/drupalci_testbot.

See also

- PHPUnit manual: https://phpunit.de/manual/4.8/en/writing-tests-for-phpunit.html
- Drupal PHPUnit handbook: https://drupal.org/phpunit
- Simpletest from the command line: https://www.drupal.org/node/645286

Resources for Article:

Further resources on this subject:
- Installing Drupal 8 [article]
- What is Drupal? [article]
- Drupal 7 Social Networking: Managing Users and Profiles [article]

CSS Properties – Part 1

Packt
09 Feb 2016
13 min read
In this article written by Joshua Johanan, Talha Khan, and Ricardo Zea, authors of the book Web Developer's Reference Guide, the authors state that "CSS properties are characteristics of an element in a markup language (HTML, SVG, XML, and so on) that control their style and/or presentation. These characteristics are part of a constantly evolving standard from the W3C." (For more resources related to this topic, see here.)

A basic example of a CSS property is border-radius:

    input {
        border-radius: 100px;
    }

There is an incredible number of CSS properties, and learning them all is virtually impossible. Adding to this mix, there are CSS properties that need to be vendor prefixed (-webkit-, -moz-, -ms-, and so on), making this equation even more complex.

Vendor prefixes are short pieces of CSS that are added to the beginning of the CSS property (and sometimes, CSS values too). These pieces of code are directly related to either the company that makes the browser (the "vendor") or to the CSS engine of the browser. There are four major CSS prefixes: -webkit-, -moz-, -ms-, and -o-. They are explained here:

- -webkit-: This references Safari's engine, WebKit (Google Chrome and Opera used this engine in the past as well)
- -moz-: This stands for Mozilla, which creates Firefox
- -ms-: This stands for Microsoft, which creates Internet Explorer
- -o-: This stands for Opera, but only targets old versions of the browser

Google Chrome and Opera both support the -webkit- prefix. However, these two browsers do not use the WebKit engine anymore; their engine is called Blink and is developed by Google. A basic example of a prefixed CSS property is column-gap:

    .column {
        -webkit-column-gap: 5px;
        -moz-column-gap: 5px;
        column-gap: 5px;
    }

Memorizing which CSS properties need to be prefixed is futile. That's why it's important to keep a constant eye on CanIUse.com. However, it's also important to automate the prefixing process with tools such as Autoprefixer or -prefix-free, or with mixins in preprocessors, and so on. Vendor prefixing isn't in the scope of this book, so the properties we'll discuss are shown without any vendor prefixes. If you want to learn more about vendor prefixes, you can visit the Mozilla Developer Network (MDN) at http://tiny.cc/mdn-vendor-prefixes.

Let's get the CSS properties reference rolling.

Animation

Unlike the old days of Flash, where creating animations required third-party applications and plugins, today we can accomplish practically the same things with a lot less overhead, better performance, and greater scalability, all through CSS only. Forget plugins and third-party software! All we need is a text editor, some imagination, and a bit of patience to wrap our heads around some of the animation concepts CSS brings to our plate.

Base markup and CSS

Before we dive into all the animation properties, we will use the following markup and animation structure as our base:

HTML:

    <div class="element"></div>

CSS:

    .element {
        width: 300px;
        height: 300px;
    }

    @keyframes fadingColors {
        0% {
            background: red;
        }
        100% {
            background: black;
        }
    }

In the examples, we will only show the .element rule, since the HTML and @keyframes fadingColors will remain the same. The @keyframes declaration block is a custom animation that can be applied to any element. When applied, the element's background will go from red to black. Ok, let's do this.
animation-name

The animation-name CSS property is the name of the @keyframes at-rule that we want to execute, and it looks like this:

    animation-name: fadingColors;

Description

In the HTML and CSS base example, our @keyframes at-rule had an animation where the background color went from red to black. The name of that animation is fadingColors. So, we can call the animation like this:

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
    }

This is a valid rule using the longhand. There are clearly no issues with it at all. The thing is that the animation won't run unless we add animation-duration to it.

animation-duration

The animation-duration CSS property defines the amount of time the animation will take to complete a cycle, and it looks like this:

    animation-duration: 2s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Specifying a value of 0s means that the animation should never run. However, since we do want our animation to run, we will use the following lines of code:

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
    }

As mentioned earlier, this will make a box go from its red background to black in 2 seconds, and then stop.

animation-iteration-count

The animation-iteration-count CSS property defines the number of times the animation should be played, and it looks like this:

    animation-iteration-count: infinite;

Description

There are two kinds of values: infinite and a number, such as 1, 3, or 0.5. Negative numbers are not allowed. Add the following code to the prior example:

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
    }

This will make a box go from its red background to black, start over again with the red background and go to black, infinitely.

animation-direction

The animation-direction CSS property defines the direction in which the animation should play after the cycle, and it looks like this:

    animation-direction: alternate;

Description

There are four values: normal, reverse, alternate, and alternate-reverse.

- normal: It makes the animation play forward. This is the default value.
- reverse: It makes the animation play backward.
- alternate: It makes the animation play forward in the first cycle, then backward in the next cycle, then forward again, and so on. In addition, timing functions are affected, so if we have ease-out, it gets replaced by ease-in when played in reverse. We'll look at these timing functions in a minute.
- alternate-reverse: It's the same thing as alternate, but the animation starts backward, from the end.

In our current example, we have a continuous animation. However, the background color has a "hard stop" when going from black (the end of the animation) to red (the start of the animation). Let's create a more fluid animation by making the black background fade into red and then red into black without any hard stops. Basically, we are trying to create a "pulse-like" effect:

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
        animation-direction: alternate;
    }

animation-delay

The animation-delay CSS property allows us to define when exactly an animation should start. This means that as soon as the animation has been applied to an element, it will obey the delay before it starts running.
It looks like this:

    animation-delay: 3s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Negative values are allowed. Take into consideration that using a negative value means the animation should start right away, but it will start midway into the animation, offset by the amount of time given in the negative value. Use negative values with caution.

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
        animation-direction: alternate;
        animation-delay: 3s;
    }

This will make the animation start after 3 seconds have passed.

animation-fill-mode

The animation-fill-mode CSS property defines which values are applied to an element before and after the animation, basically outside the time the animation is being executed. It looks like this:

    animation-fill-mode: none;

Description

There are four values: none, forwards, backwards, and both.

- none: No styles are applied before or after the animation.
- forwards: The animated element will retain the styles of the last keyframe. This is the most used value.
- backwards: The animated element will retain the styles of the first keyframe, and these styles will remain during the animation-delay period. This is very likely the least used value.
- both: The animated element will retain the styles of the first keyframe before starting the animation and the styles of the last keyframe after the animation has finished. In many cases, this is almost the same as using forwards.

The prior values are better suited to animations that have an end and stop. In our example, we're using a fading/pulsating animation, so the best value to use is none.

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
        animation-direction: alternate;
        animation-delay: 3s;
        animation-fill-mode: none;
    }

animation-play-state

The animation-play-state CSS property defines whether an animation is running or paused, and it looks like this:

    animation-play-state: running;

Description

There are two values: running and paused. These values are self-explanatory.

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
        animation-direction: alternate;
        animation-delay: 3s;
        animation-fill-mode: none;
        animation-play-state: running;
    }

In this case, defining animation-play-state as running is redundant, but I'm listing it for the purposes of the example.

animation-timing-function

The animation-timing-function CSS property defines how an animation's speed should progress throughout its cycles, and it looks like this:

    animation-timing-function: ease-out;

There are five predefined values, also known as easing functions, for the Bézier curve (we'll see what the Bézier curve is in a minute): ease, ease-in, ease-out, ease-in-out, and linear.

ease

The ease function sharply accelerates at the beginning and starts slowing down towards the middle of the cycle. Its syntax is as follows:

    animation-timing-function: ease;

ease-in

The ease-in function starts slowly, accelerating until the animation ends sharply. Its syntax is as follows:

    animation-timing-function: ease-in;

ease-out

The ease-out function starts quickly and gradually slows down towards the end:

    animation-timing-function: ease-out;

ease-in-out

The ease-in-out function starts slowly and gets fast in the middle of the cycle.
It then starts slowing down towards the end. Its syntax is as follows:

    animation-timing-function: ease-in-out;

linear

The linear function has constant speed. No acceleration of any kind happens. Its syntax is as follows:

    animation-timing-function: linear;

Now, the easing functions are built on a curve named the Bézier curve and can be called using the cubic-bezier() function or the steps() function.

cubic-bezier()

The cubic-bezier() function allows us to create custom acceleration curves. Most use cases can benefit from the already defined easing functions we just mentioned (ease, ease-in, ease-out, ease-in-out, and linear), but if you're feeling adventurous, cubic-bezier() is your best bet. Here's how a Bézier curve looks:

Parameters

The cubic-bezier() function takes four parameters, as follows:

    animation-timing-function: cubic-bezier(x1, y1, x2, y2);

X and Y represent the x and y axes. The numbers 1 and 2 after each axis represent the control points: 1 represents the control point starting at the lower left, and 2 represents the control point at the upper right.

Description

Let's represent all five predefined easing functions with the cubic-bezier() function:

    ease:         animation-timing-function: cubic-bezier(.25, .1, .25, 1);
    ease-in:      animation-timing-function: cubic-bezier(.42, 0, 1, 1);
    ease-out:     animation-timing-function: cubic-bezier(0, 0, .58, 1);
    ease-in-out:  animation-timing-function: cubic-bezier(.42, 0, .58, 1);
    linear:       animation-timing-function: cubic-bezier(0, 0, 1, 1);

Not sure about you, but I prefer to use the predefined values. Now, we can start tweaking and testing each value to the decimal, save it, and wait for the live refresh to do its thing. However, that's too much time wasted on testing if you ask me. The amazing Lea Verou created the best web app to work with Bézier curves. You can find it at cubic-bezier.com. This is by far the easiest way to work with Bézier curves, and I highly recommend this tool. The Bézier curve image shown earlier was taken from the cubic-bezier.com website.

Let's add animation-timing-function to our example:

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
        animation-direction: alternate;
        animation-delay: 3s;
        animation-fill-mode: none;
        animation-play-state: running;
        animation-timing-function: ease-out;
    }

steps()

The steps() timing function isn't very widely used, but knowing how it works is a must if you're into CSS animations. It looks like this:

    animation-timing-function: steps(6);

This function is very helpful when we want our animation to take a defined number of steps. After adding a steps() function to our current example, it looks like this:

CSS:

    .element {
        width: 300px;
        height: 300px;
        animation-name: fadingColors;
        animation-duration: 2s;
        animation-iteration-count: infinite;
        animation-direction: alternate;
        animation-delay: 3s;
        animation-fill-mode: none;
        animation-play-state: running;
        animation-timing-function: steps(6);
    }

This makes the box take six steps to fade from red to black and vice versa.

Parameters

There are two optional parameters that we can use with the steps() function: start and end.

- start: This will make the animation run at the beginning of each step. This will make the animation start right away.
- end: This will make the animation run at the end of each step. This is the default value if nothing is declared. This will make the animation have a short delay before it starts.
Description After adding the parameters to the CSS code, it looks like this: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: steps(6, start); } Granted, in our example it is not very noticeable. However, you can see it more clearly in this pen from Louis Lazarus when hovering over the boxes, at http://tiny.cc/steps-timing-function. Here's an image taken from Stephen Greig's article on Smashing Magazine, Understanding CSS Timing Functions, that explains start and end from the steps() function: Also, there are two predefined values for the steps() function: step-start and step-end. step-start: This is the same thing as steps(1, start). It means that every change happens at the beginning of each interval. step-end: This is the same thing as steps(1, end). It means that every change happens at the end of each interval. CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: step-end; } animation The animation CSS property is the shorthand for animation-name, animation-duration, animation-timing-function, animation-delay, animation-iteration-count, animation-direction, animation-fill-mode, and animation-play-state. It looks like this: animation: fadingColors 2s; Description For a simple animation to work, we need at least two properties: name and duration. If you feel overwhelmed by all these properties, relax. Let me break them down for you in simple bits. Using the animation longhand, the code would look like this: CSS: .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; } Using the animation shorthand, which is the recommended syntax, the code would look like this: CSS: .element { width: 300px; height: 300px; animation: fadingColors 2s; } This will make a box go from its red background to black in 2 seconds, and then stop. Final CSS code Let's see how all the animation properties look in one final example showing both the longhand and shorthand styles. Longhand style .element { width: 300px; height: 300px; animation-name: fadingColors; animation-duration: 2s; animation-iteration-count: infinite; animation-direction: alternate; animation-delay: 3s; animation-fill-mode: none; animation-play-state: running; animation-timing-function: ease-out; } Shorthand style .element { width: 300px; height: 300px; animation: fadingColors 2s infinite alternate 3s none running ease-out; } In the shorthand, the first time value is always taken as animation-duration and the second as animation-delay. All other properties can appear in any order within the declaration. You can find a demo in CodePen at http://tiny.cc/animation. Summary In this article, we learned how to add animations to a web project, and we looked in detail at the different properties that can be used to animate it, along with their descriptions. Resources for Article: Further resources on this subject: Using JavaScript with HTML[article] Welcome to JavaScript in the full stack[article] A Typical JavaScript Project[article]
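All of the preceding snippets animate an element whose animation-name is fadingColors, but the keyframes themselves never appear in this excerpt. Purely as a reference sketch (the exact colors are an assumption based on the red-to-black fade described above, not code from the original article), a minimal definition could look like this:

/* Assumed keyframes for the fadingColors animation used in the examples above:
   the element fades from a red background to black. */
@keyframes fadingColors {
  from { background-color: red; }
  to { background-color: black; }
}

/* The animated element, using the shorthand from the final example. */
.element {
  width: 300px;
  height: 300px;
  animation: fadingColors 2s infinite alternate 3s none running ease-out;
}

With animation-direction set to alternate and animation-iteration-count set to infinite, the element pulses between the two colors indefinitely.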
Introduction to Docker

Packt
09 Feb 2016
4 min read
In this article by Rajdeep Dua, the author of the book Learning Docker Networking, we will look at an introduction to Docker networking and its components. (For more resources related to this topic, see here.) Docker is a lightweight containerization technology that has gathered enormous interest in recent years. It neatly bundles various Linux kernel features and services, such as namespaces, cgroups, SELinux, and AppArmor profiles, over union filesystems such as AUFS and BTRFS in order to make modular images. These images provide a highly configurable virtualized environment for applications and follow a write once, run anywhere workflow. Applications can range from a simple single process to a highly scalable and distributed system. Therefore, there is a need for powerful networking elements that can support various complex use cases. Networking and Docker Each Docker container has its own network stack, and this is due to the Linux kernel's net namespace, where a new net namespace for each container is instantiated and cannot be seen from outside the container or from other containers. Docker networking is powered by the following network components and services. Linux bridges These are L2/MAC learning switches built into the kernel and are to be used for forwarding. Open vSwitch This is an advanced bridge that is programmable and supports tunneling. NAT Network address translators are intermediate entities that translate IP addresses and ports (SNAT, DNAT, and so on). iptables This is a policy engine in the kernel used for managing packet forwarding, firewall, and NAT features. AppArmor/SELinux Firewall policies for each application can be defined with these. Various networking components can be used to work with Docker, providing new ways to access and use Docker-based services. As a result, we see a lot of libraries that follow a different approach to networking. Some of the prominent ones are Docker Compose, Weave, Kubernetes, Pipework, and Libnetwork. The following figure depicts the root ideas of Docker networking: Docker networking modes What's new in Docker networking? Docker networking is at a very nascent stage, and there are many interesting contributions from the developer community, such as Pipework, Weave, Clocker, and Kubernetes. Each of them reflects a different aspect of Docker networking. We will learn about them in later chapters. Docker, Inc. has also established a new project, called libnetwork, where networking will be standardized. Libnetwork implements the Container Network Model (CNM), which formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support multiple network drivers. The CNM is built on three main components: sandbox, endpoint, and network. Sandbox A sandbox contains the configuration of a container's network stack. This includes management of the container's interfaces, routing table, and DNS settings. An implementation of a sandbox could be a Linux network namespace, a FreeBSD jail, or another similar concept. A sandbox may contain many endpoints from multiple networks. Endpoint An endpoint connects a sandbox to a network. An implementation of an endpoint could be a veth pair, an Open vSwitch internal port, or something similar. An endpoint can belong to only one network and to only one sandbox at a time. Network A network is a group of endpoints that are able to communicate with each other directly. An implementation of a network could be a Linux bridge, a VLAN, and so on. 
Networks consist of many endpoints, as shown in the following diagram: The Docker CNM model The CNM provides the following contract between networks and containers: All containers on the same network can communicate freely with each other Multiple networks are the way to segment traffic between containers and should be supported by all drivers Multiple endpoints per container are the way to join a container to multiple networks An endpoint is added to a network sandbox to provide it with network connectivity Summary In this article, we learned about the essential components of Docker networking, which have evolved from coupling simple Docker abstractions with powerful network components such as Linux bridges and Open vSwitch. We also talked about the next generation of Docker networking, which is called libnetwork. Resources for Article: Further resources on this subject: Advanced Container Resource Analysis [article] Docker in Production [article] Elucidating the Game-changing Phenomenon of the Docker-inspired Containerization Paradigm [article]
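To make these components a little more concrete, here is a short, hedged example of exercising libnetwork's bridge driver through the Docker CLI (available from Docker 1.9 onwards). The network and container names are invented for illustration; the docker network and docker run commands themselves are standard:

# Create a user-defined network backed by the bridge driver.
docker network create --driver bridge app_net

# Start a container attached to that network; its sandbox receives an endpoint on app_net.
docker run -d --name web --net=app_net nginx

# Create a second network and connect the same container to it,
# giving the container a second endpoint.
docker network create --driver bridge db_net
docker network connect db_net web

# Inspect a network to list its endpoints.
docker network inspect app_net

This maps directly onto the CNM terms above: the container's network namespace is the sandbox, each network attachment is an endpoint, and app_net and db_net are networks.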
Testing in Node and Hapi

Packt
09 Feb 2016
22 min read
In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi's test runner, lab, how to test hapi applications, techniques to make testing easier, and finally how to achieve the all-important 100% code coverage. (For more resources related to this topic, see here.) The benefits and importance of testing code Technical debt is developmental work that must be done before a particular job is complete, or else it will make future changes much harder to implement later on. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail. Even very simple applications will generally comprise: Features, which the end user interacts with Shared services, such as authentication and authorization, that features interact with These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a change that causes a break in one place could possibly break everything up the chain. So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to this correctly, and SemVer is not used everywhere. So, in the case of a break-causing change, will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade? Will support eventually be dropped, including security patches? Without a good automated test suite, these have to be answered by manual testing, which is a huge waste of developer time. Development progress stops here every time these tasks have to be done, meaning that these types of tasks are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks, prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency so that developer time could be spent on something more productive. Now, let's look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it's not easy to update, you won't do it often, so you could be on an outdated version that won't receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability in a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning. A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues. 
The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers. A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here? For one, tests are a great documentation on how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is. When working as a team, for every code change from yourself or another member of the team, you're faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality, which is called regression. Having a good test suite highlights this and makes it much easier to prevent. These are the questions and topics that need to be answered when discussing the importance of tests. Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the extra 10% do? Is it used at all if it's unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code. Software applications usually grow to be complex pretty quickly—it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this. The preceding is not an exhaustive list on the importance or benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner lab and assertion library code and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience. Introducing hapi's testing utilities The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line interface tools for you to run your testing suite. Lab was inspired by a similar test tool called mocha, if you are familiar with it, and in fact was initially begun as a fork of the mocha codebase. But, as hapi's needs diverged from the original focus of mocha, lab was born. The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken. 
Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have from other commonly used libraries, such as mocha and chai. Installing lab and code You can install lab and code the same as any other module on npm: npm install lab code --save-dev Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module. The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of package.json in that directory. For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don't need to download all the development dependencies for every module. In short, npm install installs all the dependencies and devDependencies of package.json in the current working directory, and only the dependencies of the other installed modules, not their devDependencies. To install the development dependencies of a particular module, navigate to the root directory of the module and run npm install. After you have installed lab, you can then run it with the following: ./node_modules/lab/bin/lab test.js This is quite long to type every time, but fortunately, due to a handy feature of npm called npm scripts, we can shorten it. If you look at the package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity): ... "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, ... Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let's just focus on what is important to us here. To get a list of available scripts for a module or application, in the module directory, simply run: $ npm run To then run one of the listed scripts, such as test, you can just use: $ npm run test As you can see, this gives a very clean API for scripts and documents each of them in the project's package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples. We should strive to use these in our projects to simplify and document commands related to applications and modules for ourselves and others. Let's now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following: ... "scripts": { "test": "./node_modules/lab/bin/lab ./test/index.js" }, ... It is common practice in node to place all tests in a project within the test directory. 
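Putting these pieces together, a minimal package.json for this setup might look like the following. The name, version, and dependency version ranges are placeholders for illustration and are not taken from the book:

{
  "name": "my-hapi-app",
  "version": "1.0.0",
  "scripts": {
    "test": "./node_modules/lab/bin/lab ./test/index.js"
  },
  "devDependencies": {
    "code": "^1.0.0",
    "lab": "^6.0.0"
  }
}

Running npm install in this directory will then pull lab and code into node_modules as development dependencies.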
A handy addition to note here is that when calling a command with npm run, the bin directory of every module in your node_modules directory is added to PATH when running these scripts, so we can actually shorten this script to: … "scripts": { "test": "lab ./test/index.js" }, … This type of module install is considered to be local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing it out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows: $ npm install lab code -g This may appear handier than having to add npm scripts or running commands locally outside of an npm script but should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it to have complete access to your system. Hopefully, the security concerns here are obvious. Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches. The only time I would use globally installed modules are for command-line tools that I may use outside a particular project—for example, a node base terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2)—but never with sudo! Now that we are familiar with installing lab and code and the different ways or running it inside and outside of npm scripts, let's look at writing our first test script and take a more in-depth look at lab and code. Our First Test Script Let's take a look at what a simple test script in lab looks like using the code assertion library: const Code = require('code'); [1] const Lab = require('lab'); [1] const lab = exports.lab = Lab.script(); [2] lab.experiment('Testing example', () => { [3] lab.test('fails here', (done) => { [4] Code.expect(false).to.be.true(); [4] return done(); [4] }); [4] lab.test('passes here', (done) => { [4] Code.expect(true).to.be.true(); [4] return done(); [4] }); [4] }); This script, even though small, includes a number of new concepts, so let's go through it with reference to the numbers in the preceding code: [1]: Here, we just include the code and lab modules, as we would any other node module. [2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren't tests, and therefore should not be tested. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab. [3]: The lab.experiment() function (aliased lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, "Testing example fails here". This is optional, however. [4]: These are the actual test cases. Here, we define the name of the test and pass a callback function with the parameter function done(). We see code in action here for managing our assertions. And finally, we call the done() function when finished with our test case. Things to note here: lab tests are always asynchronous. 
In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure. In Chai, which was originally used for hapi, some of the assertions such as .ok, .true, and .false use properties instead of functions for assertions, while assertions like .equal(), and .above() use functions. This type of inconsistency leads to us easily forgetting that an assertion should be a method call and hence omitting the (). This means that the assertion is never called and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two: Chai: expect('hello').to.equal('hello'); expect(foo).to.exist; Code: expect('hello').to.equal('hello'); expect('foot').to.exist(); Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case are called before done is complete, or it will fail the test. So let's try running our first test script. As we already updated our package.json script, we can run our test with the following command: $ npm run test This will generate the following output: There are a couple of things to note from this. Tests run are symbolized with a . or an X, depending on whether they pass or not. You can get a lab list of the full test title by adding the -v or -–verbose flag to our npm test script command. There are lots of flags to customize the running and output of lab, so I recommend using the full labels for each of these, for example, --verbose and --lint instead of -v and -l, in order to save you the time spent referring back to the documentation each time. You may have noticed the No global variable leaks detected message at the bottom. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to not run this check or whitelist certain globals. Details of this are in the lab documentation availbale at https://github.com/hapijs/lab. Testing approaches This is one of the many known approaches to building a test suite, as is BDD (Behavior Driven Development), and like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD can again be found easily in the lab documentation. Testing with hapi As I mentioned before, testing is considered paramount in the hapi ecosystem, with every module in the ecosystem having to maintain 100% code coverage at all times, as with all module dependencies. Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server. 
Let's take the example of a Hello World server and write a simple test for it: const Code = require('code'); const Lab = require('lab'); const Hapi = require('hapi'); const lab = exports.lab = Lab.script(); lab.test('It will return Hello World', (done) => { const server = new Hapi.Server(); server.connection(); server.route({ method: 'GET', path: '/', handler: function (request, reply) { return reply('Hello World\n'); } }); server.inject('/', (res) => { Code.expect(res.statusCode).to.equal(200); Code.expect(res.result).to.equal('Hello World\n'); done(); }); }); Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed we never started our hapi server. This means no port was ever assigned, but thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests, and it means that a test suite can run quicker, as fewer resources are required. server.inject can be used with the same API whether the server has been started or not. Code coverage As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don't know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that at the very least, every line of code has been considered and has at least one test covering it. I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall. Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The code coverage tool will also highlight which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort. It is also possible to enforce a minimum threshold as to the percentage of code coverage required to pass a suite of tests with lab through the --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, and all thresholds are set to 100. Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/). It's also worth knowing that the coverage report can be displayed in a number of formats; for a full list of these reporters with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab. 
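As a concrete illustration of the flags just described (the flag names are the ones documented above; the script itself is only a suggested form, not taken verbatim from the book), the npm test script can enable coverage and enforce a 100% threshold like this:

"scripts": {
  "test": "lab --coverage --threshold 100 ./test/index.js"
}

With this in place, npm run test fails whenever coverage drops below 100%, which is exactly the policy used across the hapi ecosystem modules.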
Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the lib folder and call index.js. It's worth noting here that it's good testing practice, and also the typical module structure in the hapi ecosystem, to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, the one-to-one mapping makes it much easier to find the associated tests and see examples of it in use. So, having separated our server from our tests, let's look at what our two files now look like; first, ./lib/index.js: const Hapi = require('hapi'); const server = new Hapi.Server(); server.connection(); server.route({ method: 'GET', path: '/', handler: function (request, reply) { return reply('Hello World\n'); } }); module.exports = server; The main change here is that we export our server at the end for another file to acquire and start it if necessary. Our test file at ./test/index.js will now look like this: const Code = require('code'); const Lab = require('lab'); const server = require('../lib/index.js'); const lab = exports.lab = Lab.script(); lab.test('It will return Hello World', (done) => { server.inject('/', (res) => { Code.expect(res.statusCode).to.equal(200); Code.expect(res.result).to.equal('Hello World\n'); done(); }); }); Finally, for us to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final example of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test. An interesting exercise here would be to find out what versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0. Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command: $ npm install hapi@10 This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (Hint: it breaks on any version earlier than 4.0.0). In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate: as the complexity of our codebase increases, so will the complexity of our tests, which is where knowledge of writing testable code comes in. This is something that comes with practice by writing tests while writing application or module code. Linting Also built into lab is linting support. Linting enforces a code style that is adhered to, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules. The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are used regularly to forbid bad practices such as global or unused variables. To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L. 
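Extending the script from the earlier coverage example, coverage, the threshold, and the linter can all be switched on together. This exact combination is just a suggestion; the individual flags (--coverage/-c, --threshold/-t, and --lint/-L) are the ones described in this section:

"scripts": {
  "test": "lab -c -t 100 -L ./test/index.js"
}

A single npm run test then reports failing tests, missing coverage, and style guide violations in one pass.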
I generally stick with the default hapi style guide rules as they are chosen to promote easy-to-read code that is easily testable, and to forbid many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation. Summary In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw justification for their need in application development and where they can make us more productive developers. We also introduced the test runner and code assertion libraries lab and code in the ecosystem. We saw justification for their use, how to use them to write simple tests, and how to use the tools provided in lab and hapi to test hapi applications. We also learned about some of the extra features baked into lab, such as code coverage and linting. We looked at how to test the code coverage of an application and get it to 100%, and how the hapi ecosystem applies the hapi style guide to all modules using lab's linting integration. Resources for Article: Further resources on this subject: Welcome to JavaScript in the full stack[article] A Typical JavaScript Project[article] An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js[article]
Payment Processing Workflow

Packt
09 Feb 2016
4 min read
In this article by Ernest Bruce, author of the book, Apply Pay Essentials, the author talks about actors and operations in the payment processing workflow. After the user authorizes the payment request, the user app, the payment gateway, and the order processing web app team up to securely deliver the payment information to the issuing bank, to transfer the funds form the user's account to the acquiring bank, and to inform the user of the transaction status (approved or declined). (For more resources related to this topic, see here.) The payment processing workflow is made up of three phases: Preprocess phase: This is the phase where the app gets a charge token from the payment gateway, and it sends order information (including the charge token) to the order-processing server Process phase: This is the phase where the order-processing web app (running on your server) charges the user's card through the payment gateway, updates order and inventory data if the charge is successful, and sends the transaction status to the user app Postprocess phase: This is the phase where the user app informs the user about the status of the transaction and dismisses the payment sheet As, in general, the payment-gateway API does not run appropriately in the Simulator app, you must use an actual iOS device to test the payment-processing workflow in your development environment. In addition to this, you need either a separate computer to run the order-processing web app or a proxy server that intercepts network traffic from the device and redirects the appropriate requests to your development computer. For details, see the documentation for the example project. Actors and Operations in the Processing Workflow The payment processing workflow is the process by which the payment information that is generated by Apple Pay from the payment request and the information that the user entered in the payment sheet is transmitted to your payment gateway and the card's issuing bank to charge the card and make the payment's funds available in your acquiring bank. The workflow starts when the payment sheet calls the paymentAuthorizationViewController:didAuthorizePayment:completion: delegate method, providing the user app general order information (such as shipping and billing information) and a payment token containing encrypted-payment data. 
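For orientation only, here is a bare-bones Objective-C sketch of that delegate method. The gateway object and its createChargeTokenWithPaymentToken:completion: helper are invented for illustration; only the PassKit delegate signature and status constants are standard, and you should verify them against your SDK version:

// Sketch of the preprocess phase: exchange the payment token for a charge token,
// then report the result back to the payment sheet.
- (void)paymentAuthorizationViewController:(PKPaymentAuthorizationViewController *)controller
                       didAuthorizePayment:(PKPayment *)payment
                                completion:(void (^)(PKPaymentAuthorizationStatus))completion
{
    // payment.token.paymentData holds the encrypted payment data.
    [self.gateway createChargeTokenWithPaymentToken:payment.token
                                         completion:^(NSString *chargeToken, NSError *error) {
        if (error) {
            completion(PKPaymentAuthorizationStatusFailure);
            return;
        }
        // Send the charge token and order info to the order-processing web app here,
        // then pass the transaction result back to the payment sheet.
        completion(PKPaymentAuthorizationStatusSuccess);
    }];
}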
This diagram depicts the actors, operations, and data that are part of the payment processing workflow. These are the operations and data that are part of the workflow: payment authorized: This is when the payment sheet tells the app that the user authorized the payment payment token: This is when the app provides the payment token to the payment gateway, which returns a charge token order info and charge token: This is when the app sends information about the order and the charge token to the order processing web app charge card: This is when the web app charges the card through the payment gateway approved or declined: This is when the payment gateway tells the web app whether the payment is approved or declined transaction result and order metadata: This is when the web app provides the user app with the result of the transaction and order information, such as the order number transaction result: This is when the app tells the payment sheet the result of the payment transaction: approved or declined payment sheet done: This is when the payment sheet tells the app that the transaction is complete dismiss: This is when the app dismisses the payment sheet Summary You can also check out the following books on Apple: Mastering Apple Aperture, Thomas Fitzgerald by Packt Publishing Apple Motion 5 Cookbook, Nick Harauz, by Packt Publishing Resources for Article: Further resources on this subject: BatteryMonitor Application [article] Introducing Xcode Tools for iPhone Development [article] Network Development with Swift [article]
Analyzing Data Packets

Packt
08 Feb 2016
7 min read
In this article by Samir Datt, the author of the book Learning Network Forensics, you will learn to get your hands dirty by actually capturing and analyzing network traffic. We will learn how to use different software tools to capture and analyze network traffic with real-world scenarios of accessing data over the Internet and the resultant network capture. The article will cover the following topics: Packet sniffing and analysis using NetworkMiner Case study – sniffing out an insider (For more resources related to this topic, see here.) Packet sniffing and analysis using NetworkMiner NetworkMiner is a passive network sniffing or network forensic tool. It is called a passive tool as it does not send out requests—it sits silently on the network, capturing every packet in the promiscuous mode. NetworkMiner is host-centric. This means that it will classify data based on hosts rather than packets, which is what most sniffers such as Wireshark do. The different steps to NetworkMiner usage are as follows: Download and install the NetworkMiner. Then, configure it. Capture the data in NetworkMiner. Finally, analyze the data. NetworkMiner is available for download at SourceForge: http://sourceforge.net/projects/networkminer/. Though NetworkMiner is not as well known as it should be, it's host-centric approach is refreshingly different and effective. Allowing the users to classify traffic based on the IP addresses and not packets helps us to zero in on activities related to the specific computers that are under suspicion or are being investigated. The NetworkMiner interface is shown in the following screenshot: To begin using NetworkMiner, we start by selecting a network adapter from the drop-down list. NetworkMiner places this adapter in the promiscuous mode. Clicking Start begins NetworkMiner on the task of packet collection. While NetworkMiner has the capability of collecting data packets across the network, its real strength comes in to play after the data has been collected. In most of the scenarios, it makes more sense to use Wireshark to capture packets and then use NetworkMiner to do the analysis on the .pcap file that is captured. As soon as data capturing begins, NetworkMiner swings into action by sorting the packets based on the host IP addresses. This is extremely useful since it allows us to identify traffic that is specific to a single IP on the network. Consider that we have a single suspect with a known IP on the network, then we can focus our investigative resources on just that single IP address. Some really great additional features include the ability to identify the media access control (MAC) address of the network interface card (NIC) in use and also the OS of the suspect system. In fact, the icon on the left-hand side of the IP address shows the OS icon, if detected, as shown in the following screenshot: As we can see in the preceding image, some of the devices that are connected to the network under investigation are Windows and BSD devices. The next tab is the Frames tab. The Frames tab view is similar to that of Wireshark and is perhaps one of the lesser used tabs in NetworkMiner, due to the fact that there are so many other richer options available, as shown in the following screenshot: It gives us inputs on the packet length, source and destination IP address, as well as time to live (TTL) of the packet. NetworkMiner has the ability to collate the packets and then reconstruct the constituent files for viewing by the investigator. These files are shown in the Files tab. 
Assuming that some files were copied/accessed over a network share, it would be possible to view the reconstructed file in the Files tab. The Files tab also depicts the SSL certificates used over a network. This can also be useful from an investigation perspective, as shown in the following screenshot: Similarly, if pictures have been viewed over the network, these are reconstructed in the Images tab. In fact, this can be quite useful especially, when scanned documents are a part of the network traffic. This may happen when the bad guys try to avoid detection from the keyword-based searching. The following is an image depicting the Images tab: The reconstructed graphics are usually depicted as thumbnails. Right-clicking the thumbnail allows us to open the graphic in a picture editor/viewer. DNS queries are also accessible via another tab, as shown in the following image: There are additional tabs available that are notable from the perspective of an investigation. One of these is the Credentials tab. This stores the information related to interactions involving the exchange of credentials with resources that require logons. It is not uncommon to find username and passwords for plain-text logons listed under this tab. One can also find user accounts for popular sites such as Gmail and Facebook. A screenshot of the Credentials tab is as follows: In a number of cases, it is possible to determine the username and passwords of certain websites. Another great feature in NetworkMiner is the ability to import a set of keywords that are to be used to search within packets in the captured .pcap file. This allows us to separate packets that contain our keywords of interest. A screenshot is as follows: Case study – tracking down an insider XYZ Corporation, a medium-sized Government contractor, found that it had begun to lose business to a tiny competitor that seemed to know exactly what the sales team at XYZ Corp was planning. The senior management suspected that an insider was leaking information to the competitor. A network forensic 007 was called in to investigate the problem. A preliminary information-gathering exercise was initiated and a list of keywords was compiled to help in identifying packets that contained information of interest. A list of possible suspects, who had access to the confidential information, was also compiled. The specific network segment relating to the department in question was put under network surveillance. Wireshark was deployed to capture all the network traffic. Additional storage was made available to store the .pcap files generated by Wireshark. The collected .pcap files were analyzed using NetworkMiner. 
The following screenshot depicts Wireshark capturing traffic: An in-depth analysis of network traffic produced the following findings: An image showing the registration certificate of the company that was competing with XYZ Corp, providing the names of the directors The address of the company in the registration certificate was the residential address of the sales manager of XYZ Corp E-mail communications using personal e-mail addresses between the directors of the competing company and the senior manager sales of XYZ Corp Further offline analysis showed that the sales manager's wife was related to the director of the competing company It was also seen that the sales manager was connecting to the office Wi-Fi network using his android phone The sales manager was noted to be accessing cloud storage using his phone and transferring important files and contact lists It was noted that the sales manager was also in close communication with a female employee in the accounts department and that the connection was intimate The information collected so far was very indicative of the sales manager's involvement with competitors. Based on the preceding network forensics exercise, it was recommended that a full-fledged digital forensic exercise should be initiated, including that of his assigned laptop and phone device. It was also recommended that sufficient corroborating evidence should be collected using log analysis, RAM analysis, and disk forensics to initiate legal/breach of trust action against the suspect(s). Summary In this article, we moved our skills up a notch. You learned how to analyze the captured packets to see what is happening on the network. We also studied how to see the traffic from the specific IP addresses as well as protocol-specific traffic. We also understood how to look for specific traffic based on keywords. Files, private credentials, and images have been examined to identify activities of interest. We have now become a lot better at investigating network activity. Resources for Article:   Further resources on this subject: Introduction to Mobile Forensics [article] Securing vCloud Using the vCloud Networking and Security App Firewall [article] Configuring the alerts [article]
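As a practical footnote to the capture step in this case study, long-running captures are usually scripted from the command line rather than driven through the Wireshark GUI, and the resulting .pcap files are then loaded into NetworkMiner. A hedged example using tcpdump (the interface name, suspect IP address, and file paths are placeholders):

# Capture full packets on eth0 into a pcap file for later analysis in NetworkMiner.
sudo tcpdump -i eth0 -s 0 -w /evidence/segment_capture.pcap

# Restrict the capture to a single suspect host to keep the evidence file manageable.
sudo tcpdump -i eth0 -s 0 -w /evidence/suspect.pcap host 192.168.1.50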
Programming on Raspbian

Packt
05 Feb 2016
8 min read
In this article by Andrew Dennis, the author of Raspberry Pi Computer Architecture Essentials, we will discuss about Assembly language and the assembler. (For more resources related to this topic, see here.) Assembly language The Raspberry Pi comes equipped with an ARM v7 quad core processor. Each processor has its own set of specific machine code that it understands; this machine code is represented in the binary format. The machine code is different for each processor architecture, so the Raspberry Pi's ARM processor machine code will not work on an IBM or Intel CPU. Short of writing out 32-bit long binary machine code instructions, the lowest level of programming language we can find ourselves using is Assembler language, also known as Assembly language. The computer architecture's Assembly language is usually a one-to-one mapping between itself and the underlying machine code. This is achieved through using a mnemonic. A combination of these mnemonic codes will result in an operation, such as addition or subtraction. A program written in the Assembly language is compiled into machine code by the Assembler program. This program passes through the code one or more times and generates an object file as part of this process. The Assembler in some cases will also perform a variety of optimizations on the code in its subsequent passes. Following this a program called the Linker that generates an executable file you can run on your computer. Two important terms you will come across when writing Assembly language are opcode and operand. The opcode is an instruction (such as add) and the operand is data (such as an integer value). Each opcode and operand is created through the combination of sets of 8 bits (1 byte). In this article, we will write a simple program in Assembly language in order to understand the basics. The subject of the ARM v7 Assembly language is covered in more detail by the University of Michigan that hosts a useful guide to the ARM v7 architecture in PDF format at https://web.eecs.umich.edu/~prabal/teaching/eecs373-f10/readings/ARMv7-M_ARM.pdf. You may be interested in reviewing this as a supplement to the topics covered here. So, what do the mnemonic codes that make up Assembly language look like before being converted to machine code? Let's take a look at an example and see. Here, we demonstrate how we can take register 0 of the CPU and assign a number to it; in this case, 10. MOV R0, #10 The Assembly code MOV is short hand for assigning a value. The register is an example of the processor's internal memory storage location and, of course, 10 is an integer value. You can read more on CPU registers at Wikipedia: https://en.wikipedia.org/wiki/Processor_register As you explore the language further, you will become familiar with these types of commands, as they are the building blocks of your program. How about looking at another example? What do you think this does? ADD R0, R1, R2 This simple program introduces us to another mnemonic, ADD. Here, we are taking the values of registers 1 and 2, adding and assigning them to register 0. Running commands like this on the Raspberry Pi is very simple; we can add them to a file assemble and link them ourselves. We shall now explore a short Assembly language program that incorporates these two commands, MOV and ADD. Let's start by creating a new directory under the pi user: mkdir /home/pi/assem_programs This will be the place we store our Assembly code. 
Navigate into this directory, for example: cd /home/pi/assem_programs Next, we need to create a new file to place our code in. You can choose any text editor you are comfortable with in order to write the program. We have used Vim in the following example: vim first_assem_prog.s To this file, add the following block of code. Make sure that you include the spacing as demonstrated next: .global main .func main main: MOV R0, #0 MOV R1, #10 MOV R2, #20 ADD R0, R1, R2 BX LR So, what does this program do? The first line in the program defines a directive called main. The prefix of .global tells the Assembler that the name is global and thus available to the C runtime. A directive is code executed by the Assembler at assembly time, rather than by the processor. We could have called this directive anything, but we have gone with main to keep it consistent with our C program. Assembler, unlike C, does not require the program entry point to be called main. As you will see, we will use the GCC compiler/linker to build an executable for our program, so the format we are writing the Assembly language in mimics that of a C program in some areas. This is why you will see references to the C runtime mentioned when discussing Assembly in this article. Following this, we then define that main is a function. Here, we can see another directive, .func, is used to specify this. So, now that we have main available, we can denote where this function starts, which in our case is the third line. Contained in the function are three lines of code that assign values to the registers, followed by the ADD instruction. From our earlier examples, these should be familiar. What we have done is assigned the value 0 to R0, 10 to R1, and 20 to R2, and then added the values in R1 and R2 together and stored the result (30) in R0. Finally, we call BX LR to return the value of register 0 back to the operating system. As you can see, this program is very simple, but it demonstrates how to add numbers and store the result. Save the file and exit your text editor. You should now be back at the command line. This leads us to the next step of assembling and linking in order to generate a file we can run. Assembling and linking Now that we have a program, we need to test it. This is a two-step process that involves assembling the code and then linking it, which we touched upon at the start of this article. When you come to explore the C language next, you will see that linking is also a component there as well; in fact, we use the same tool for both C and Assembly: the GCC compiler. Briefly, these two steps to generate a runnable program can be summed up as follows: Assembling is the process of generating the machine code object file from the Assembly mnemonics Linking is the process of creating an executable from one or more object files The first command we will run, called as (the GNU assembler), will take the code we wrote previously and create an object file as its output. Run the following command from inside the folder where you created your program: as -o first_assem_prog.o first_assem_prog.s If it assembled correctly, you should see no output. Following this, we need to run the linker, which is invoked with the gcc command. There is also another linker available called ld. However, since we are writing our Assembly in a C-like manner, we will use the gcc tool. You will also need to run this command in the same directory that you ran as in: gcc -o first_assem_prog first_assem_prog.o GCC stands for the GNU Compiler Collection. If everything is successful, you shouldn't see any output. 
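To avoid retyping the two commands each time, the assemble and link steps can be wrapped in a small Makefile. This is not part of the original text, just a conventional GNU make sketch reusing the same file names (remember that recipe lines must be indented with a tab):

# Build first_assem_prog in two steps: assemble with as, then link with gcc.
first_assem_prog: first_assem_prog.o
	gcc -o first_assem_prog first_assem_prog.o

first_assem_prog.o: first_assem_prog.s
	as -o first_assem_prog.o first_assem_prog.s

clean:
	rm -f first_assem_prog first_assem_prog.o

Running make in the directory then produces the executable, and make clean removes the build artifacts.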
We now have an executable file we can run from the Linux command line. To do this, you can simply type: ./first_assem_prog You'll notice there is no output, however. So, how do we know whether the program executed correctly? We can use the Linux echo command, as follows: echo $? This displays the exit code of the previous process, which in our case is the result of program we just ran. You may remember that we wrote this value back using the BX LR code. As our program simply returned a value of 30 to register 0, this is the result we can see when using the echo command. You can try changing the values in your program and assembling and linking once more. The result you see when running echo should reflect your changes. Try changing the program to use R1 instead of R0 in the add function and see what happens. So, in a few easy steps, you have created an Assembly language program and learned how to assemble, link, and run it. Summary In this article, we explored the programming languages we will be using in this book. This included Assembler and C/C++. Resources for Article: Further resources on this subject: Raspberry Pi and 1-Wire [article] Raspberry Pi Gaming Operating Systems [article] Raspberry Pi LED Blueprints [article]
Working with Ceph Block Device

Packt
05 Feb 2016
29 min read
In this article by Karan Singh, the author of the book Ceph Cookbook, we will see how storage space or capacity are assigned to physical or virtual servers in detail. We'll also cover the various storage formats supported by Ceph. In this article, we will cover the following recipes: Working with the RADOS Block Device Configuring the Ceph client Creating RADOS Block Device Mapping RADOS Block Device Ceph RBD Resizing Working with RBD snapshots Working with RBD clones A quick look at OpenStack Ceph – the best match for OpenStack Configuring OpenStack as Ceph clients Configuring Glance for the Ceph backend Configuring Cinder for the Ceph backend Configuring Nova to attach the Ceph RBD Configuring Nova to boot the instance from the Ceph RBD (For more resources related to this topic, see here.) Once you have installed and configured your Ceph storage cluster, the next task is performing storage provisioning. Storage provisioning is the process of assigning storage space or capacity to physical or virtual servers, either in the form of blocks, files, or object storages. A typical computer system or server comes with a limited local storage capacity that might not be enough for your data storage needs. Storage solutions such as Ceph provide virtually unlimited storage capacity to these servers, making them capable of storing all your data and making sure that you do not run out of space. Using a dedicated storage system instead of local storage gives you the much needed flexibility in terms of scalability, reliability, and performance. Ceph can provision storage capacity in a unified way, which includes block, filesystem, and object storage. The following diagram shows storage formats supported by Ceph, and depending on your use case, you can select one or more storage options: We will discuss each of these options in detail in this article, and we will focus mainly on Ceph block storage. Working with the RADOS Block Device The RADOS Block Device (RBD), which is now known as the Ceph Block Device, provides reliable, distributed, and high performance block storage disks to clients. A RADOS block device makes use of the librbd library and stores a block of data in sequential form striped over multiple OSDs in a Ceph cluster. RBD is backed by the RADOS layer of Ceph, thus every block device is spread over multiple Ceph nodes, delivering high performance and excellent reliability. RBD has native support for Linux kernel, which means that RBD drivers are well integrated with the Linux kernel since the past few years. In addition to reliability and performance, RBD also provides enterprise features such as full and incremental snapshots, thin provisioning, copy on write cloning, dynamic resizing, and so on. RBD also supports In-Memory caching, which drastically improves its performance. The industry leading open source hypervisors, such as KVM and Zen, provide full support to RBD and leverage its features to their guest virtual machines. Other proprietary hypervisors, such as VMware and Microsoft HyperV will be supported very soon. There has been a lot of work going on in the community for support to these hypervisors. The Ceph block device provides full support to cloud platforms such as OpenStack, Cloud stack, as well as others. It has been proven successful and feature-rich for these cloud platforms. In OpenStack, you can use the Ceph block device with cinder (block) and glance (imaging) components. 
Doing so, you can spin 1000s of Virtual Machines (VMs) in very little time, taking advantage of the copy on write feature of the Ceph block storage. All these features make RBD an ideal candidate for cloud platforms such as OpenStack and CloudStack. We will now learn how to create a Ceph block device and make use of it. Configuring the Ceph client Any regular Linux host (RHEL- or Debian-based) can act as a Ceph client. The Client interacts with the Ceph storage cluster over the network to store or retrieve user data. Ceph RBD support has been added to the Linux mainline kernel, starting with 2.6.34 and later versions. How to do it As we have done earlier, we will set up a Ceph client machine using vagrant and VirtualBox. We will use the Vagrantfile. Vagrant will then launch an Ubuntu 14.04 virtual machine that we will configure as a Ceph client: From the directory where we have cloned ceph-cookbook git repository, launch the client virtual machine using Vagrant: $ vagrant status client-node1$ vagrant up client-node1 Log in to client-node1: $ vagrant ssh client-node1 Note: The username and password that vagrant uses to configure virtual machines is vagrant, and vagrant has sudo rights. The default password for root user is vagrant. Check OS and kernel release (this is optional): $ lsb_release -a$ uname -r Check for RBD support in the kernel: $ sudo modprobe rbd Allow the ceph-node1 monitor machine to access client-node1 over ssh. To do this, copy root ssh keys from the ceph-node1 to client-node1 vagrant user. Execute the following commands from the ceph-node1 machine until otherwise specified: ## Login to ceph-node1 machine $ vagrant ssh ceph-node1 $ sudo su - # ssh-copy-id vagrant@client-node1 Provide a one-time vagrant user password, that is, vagrant, for client-node1. Once the ssh keys are copied from ceph-node1 to client-node1, you should able to log in to client-node1 without a password. Use the ceph-deploy utility from ceph-node1 to install Ceph binaries on client-node1: # cd /etc/ceph # ceph-deploy --username vagrant install client-node1 Copy the Ceph configuration file (ceph.conf) to client-node1: # ceph-deploy --username vagrant config push client-node1 The client machine will require Ceph keys to access the Ceph cluster. Ceph creates a default user, client.admin, which has full access to the Ceph cluster. It's not recommended to share client.admin keys with client nodes. The better approach is to create a new Ceph user with separate keys and allow access to specific Ceph pools: In our case, we will create a Ceph user, client.rbd, with access to the rbd pool. By default, Ceph block devices are created on the rbd pool: # ceph auth get-or-create client.rbd mon 'allow r' osd 'allow   class-read object_prefix rbd_children, allow rwx pool=rbd'  Add the key to the client-node1 machine for the client.rbd user: # ceph auth get-or-create client.rbd | ssh vagrant@client-node1 sudo tee /etc/ceph/ceph.client.rbd.keyring By this step, client-node1 should be ready to act as a Ceph client. Check the cluster status from the client-node1 machine by providing the username and secret key: $ vagrant ssh client-node1 $ sudo su - # cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring### Since we are not using the default user client.admin we need to supply username that will connect to Ceph cluster.# ceph -s --name client.rbd Creating RADOS Block Device Till now, we have configured Ceph client, and now we will demonstrate creating a Ceph block device from the client-node1 machine. 
How to do it Create a RADOS Block Device named rbd1 of size 10240 MB: # rbd create rbd1 --size 10240 --name client.rbd There are multiple options that you can use to list RBD images: ## The default pool to store block device images is 'rbd', you can also specify the pool name with the rbd command using the -p option: # rbd ls --name client.rbd # rbd ls -p rbd --name client.rbd # rbd list --name client.rbd Check the details of the rbd image: # rbd --image rbd1 info --name client.rbd Mapping RADOS Block Device Now that we have created block device on Ceph cluster, in order to use this block device, we need to map it to the client machine. To do this, execute the following commands from the client-node1 machine. How to do it Map the block device to the client-node1: # rbd map --image rbd1 --name client.rbd Check the mapped block device: # rbd showmapped --name client.rbd To make use of this block device, we should create a filesystem on this and mount it: # fdisk -l /dev/rbd1 # mkfs.xfs /dev/rbd1 # mkdir /mnt/ceph-disk1 # mount /dev/rbd1 /mnt/ceph-disk1 # df -h /mnt/ceph-disk1 Test the block device by writing data to it: # dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M To map the block device across reboot, you should add the init-rbdmap script to the system startup, add the Ceph user and keyring details to /etc/ceph/rbdmap, and finally, update the /etc/fstab file: # wget https://raw.githubusercontent.com/ksingh7/   ceph-cookbook/master/rbdmap -O /etc/init.d/rbdmap # chmod +x /etc/init.d/rbdmap # update-rc.d rbdmap defaults ## Make sure you use correct keyring value in /etc/ceph/rbdmap   file, which is generally unique for an environment. # echo "rbd/rbd1 id=rbd,   keyring=AQCLEg5VeAbGARAAE4ULXC7M5Fwd3BGFDiHRTw==" >>     /etc/ceph/rbdmap # echo "/dev/rbd1 /mnt/ceph-disk1 xfs defaults, _netdev  0 0 " >> /etc/fstab # mkdir /mnt/ceph-disk1 # /etc/init.d/rbdmap start Ceph RBD Resizing Ceph supports thin provisioned block devices, which means that the physical storage space will not get occupied until you begin storing data on the block device. The Ceph RADOS block device is very flexible; you can increase or decrease the size of an RBD on the fly from the Ceph storage end. However, the underlying filesystem should support resizing. Advance filesystems such as XFS, Btrfs, EXT, ZFS, and others support filesystem resizing to a certain extent. Please follow filesystem specific documentation to know more on resizing. How to do it To increase or decrease Ceph RBD image size, use the --size <New_Size_in_MB> option with the rbd resize command, this will set the new size for the RBD image: The original size of the RBD image that we created earlier was 10 GB. We will now increase its size to 20 GB: # rbd resize --image rbd1 --size 20480 --name client.rbd # rbd info --image rbd1 --name client.rbd Grow the filesystem so that we can make use of increased storage space. It's worth knowing that the filesystem resize is a feature of the OS as well as the device filesystem. You should read filesystem documentation before resizing any partition. The XFS filesystem supports online resizing. Check system message to know the filesystem size change: # dmesg | grep -i capacity # xfs_growfs -d /mnt/ceph-disk1 Working with RBD Snapshots Ceph extends full support to snapshots, which are point-in-time, read-only copies of an RBD image. You can preserve the state of a Ceph RBD image by creating snapshots and restoring the snapshot to get the original data. How to do it Let's see how a snapshot works with Ceph. 
To test the snapshot functionality of Ceph, let's create a file on the block device that we created earlier: # echo "Hello Ceph This is snapshot test" > /mnt/   ceph-disk1/snapshot_test_file Create a snapshot for the Ceph block device: Syntax: rbd snap create <pool-name>/<image-name>@<snap-name># rbd snap create rbd/rbd1@snapshot1 --name client.rbd To list snapshots of an image, use the following: Syntax: rbd snap ls <pool-name>/<image-name> # rbd snap ls rbd/rbd1 --name client.rbd To test the snapshot restore functionality of Ceph RBD, let's delete files from filesystem: # rm -f /mnt/ceph-disk1/* We will now restore the Ceph RBD snapshot to get back the files that deleted in the last step. Please note that a rollback operation will overwrite current the version of the RBD image and its data with the snapshot version. You should perform this operation carefully: Syntax: rbd snap rollback <pool-name>/<image-name>@<snap-name># rbd snap rollback rbd/rbd1@snapshot1 --name client.rbd Once the snapshot rollback operation is completed, remount the Ceph RBD filesystem to refresh the filesystem state. You should be able to get your deleted files back: # umount /mnt/ceph-disk1 # mount /dev/rbd1 /mnt/ceph-disk1 # ls -l /mnt/ceph-disk1 When you no longer need snapshots, you can remove a specific snapshot using the following syntax. Deleting the snapshot will not delete your current data on the Ceph RBD image: Syntax: rbd snap rm <pool-name>/<image-name>@<snap-name> # rbd snap rm rbd/rbd1@snapshot1 --name client.rbd If you have multiple snapshots of an RBD image, and you wish to delete all the snapshots with a single command, then use the purge sub command: Syntax: rbd snap purge <pool-name>/<image-name># rbd snap purge rbd/rbd1 --name client.rbd Working with RBD Clones Ceph supports a very nice feature for creating Copy-On-Write (COW) clones from RBD snapshots. This is also known as Snapshot Layering in Ceph. Layering allows clients to create multiple instant clones of Ceph RBD. This feature is extremely useful for cloud and virtualization platforms such as OpenStack, CloudStack, and Qemu/KVM, and so on. These platforms usually protect Ceph RBD images containing an OS / VM image in the form of a snapshot. Later, this snapshot is cloned multiple times to spawn new virtual machines / instances. Snapshots are read-only, but COW clones are fully writable; this feature of Ceph provides a greater level of flexibility and is extremely useful among cloud platforms: Every cloned image (child image) stores references of its parent snapshot to read image data. Hence, the parent snapshot should be protected before it can be used for cloning. At the time of data writing on the COW cloned image, it stores new data references to itself. COW cloned images are as good as RBD. They are quite flexible like RBD, which means that they are writable, resizable, and support snapshots and further cloning. In Ceph RBD, images are of two types: format-1 and format-2. The RBD snapshot feature is available on both types that is, in format-1 as well as in format-2 RBD images. However, the layering feature (the COW cloning feature) is available only for the RBD image with format-2. The default RBD image format is format-1. 
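If you expect to rely on layering regularly, you do not have to pass --image-format on every create; the client-side default image format can be changed instead. The following is a minimal sketch: the option name rbd default format is an assumption based on Hammer-era releases (newer Ceph releases already default to format-2), so verify it against your version's documentation. Add this to /etc/ceph/ceph.conf on the client node:
[client]
rbd default format = 2
Alternatively, request format-2 explicitly for a single image (rbd3 here is just an example image name), as we also do in the next section:
# rbd create rbd3 --size 1024 --image-format 2 --name client.rbd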
How to do it To demonstrate RBD cloning, we will intentionally create a format-2 RBD image, then create and protect its snapshot, and finally, create COW clones out of it: Create a format-2 RBD image and check its detail: # rbd create rbd2 --size 10240 --image-format 2 --name client.rbd # rbd info --image rbd2 --name client.rbd Create a snapshot of this RBD image: # rbd snap create rbd/rbd2@snapshot_for_cloning --name client.rbd To create a COW clone, protect the snapshot. This is an important step, we should protect the snapshot because if the snapshot gets deleted, all the attached COW clones will be destroyed: # rbd snap protect rbd/rbd2@snapshot_for_cloning --name client.rbd Next, we will create a cloned RBD image using this snapshot: Syntax: rbd clone <pool-name>/<parent-image>@<snap-name> <pool-name>/<child-image-name> # rbd clone rbd/rbd2@snapshot_for_cloning rbd/clone_rbd2 --name client.rbd Creating a clone is a quick process. Once it's completed, check new image information. You would notice that its parent pool, image, and snapshot information would be displayed: # rbd info rbd/clone_rbd2 --name client.rbd At this point, we have a cloned RBD image, which is dependent upon its parent image snapshot. To make the cloned RBD image independent of its parent, we need to flatten the image, which involves copying the data from the parent snapshot to the child image. The time it takes to complete the flattening process depends on the size of the data present in the parent snapshot. Once the flattening process is completed, there is no dependency between the cloned RBD image and its parent snapshot. To initiate the flattening process, use the following: # rbd flatten rbd/clone_rbd2 --name client.rbd # rbd info --image clone_rbd2 --name client.rbd After the completion of the flattening process, if you check image information, you will notice that the parent image/snapshot name is not present and the clone is independent. You can also remove the parent image snapshot if you no longer require it. Before removing the snapshot, you first have to unprotect it: # rbd snap unprotect rbd/rbd2@snapshot_for_cloning --name client.rbd Once the snapshot is unprotected, you can remove it: # rbd snap rm rbd/rbd2@snapshot_for_cloning --name client.rbd A quick look at OpenStack OpenStack is an open source software platform for building and managing public and private cloud infrastructure. It is being governed by an independent, non-profit foundation known as the OpenStack foundation. It has the largest and the most active community, which is backed by technology giants such as, HP, Red Hat, Dell, Cisco, IBM, Rackspace, and many more. OpenStack's idea for cloud is that it should be simple to implement and massively scalable. OpenStack is considered as the cloud operating system where users are allowed to instantly deploy hundreds of virtual machines in an automated way. It also provides an efficient way of hassle free management of these machines. OpenStack is known for its dynamic scale up, scale out, and distributed architecture capabilities, making your cloud environment robust and future-ready. OpenStack provides an enterprise class Infrastructure-as-a-service (IaaS) platform for all your cloud needs. As shown in the preceding diagram, OpenStack is made up of several different software components that work together to provide cloud services. Out of all these components, in this article, we will focus on Cinder and Glance, which provide block storage and image services respectively. 
For more information on OpenStack components, please visit http://www.openstack.org/. Ceph – the best match for OpenStack Since the last few years, OpenStack has been getting amazingly popular, as it's based on software defined on a wide range, whether it's computing, networking, or even storage. And when you talk storage for OpenStack, Ceph will get all the attraction. An OpenStack user survey, conducted in May 2015, showed Ceph dominating the block storage driver market with a whopping 44% production usage. Ceph provides a robust, reliable storage backend that OpenStack was looking for. Its seamless integration with OpenStack components such as cinder, glance, nova, and keystone provides all in one cloud storage backend for OpenStack. Here are some key benefits that make Ceph the best match for OpenStack: Ceph provides enterprise grade, feature rich storage backend at a very low cost per gigabyte, which helps to keep the OpenStack cloud deployment price down. Ceph is a unified storage solution for Block, File, or Object storage for OpenStack, allowing applications to use storage as they need. Ceph provides advance block storage capabilities for OpenStack clouds, which includes the easy and quick spawning of instances, as well as the backup and cloning of VMs. It provides default persistent volumes for OpenStack instances that can work like traditional servers, where data will not flush on rebooting the VMs. Ceph supports OpenStack in being host-independent by supporting VM migrations, scaling up storage components without affecting VMs. It provides the snapshot feature to OpenStack volumes, which can also be used as a means of backup. Ceph's copy-on-write cloning feature provides OpenStack to spin up several instances at once, which helps the provisioning mechanism function faster. Ceph supports rich APIs for both Swift and S3 Object storage interfaces. Ceph and OpenStack communities have been working closely since the last few years to make the integration more seamless, and to make use of new features as they are landed. In the future, we can expect that OpenStack and Ceph will be more closely associated due to Red Hat's acquisition of Inktank, the company behind Ceph; Red Hat is one of the major contributor of OpenStack project. OpenStack is a modular system, which is a system that has a unique component for a specific set of tasks. There are several components that require a reliable storage backend, such as Ceph, and extend full integration to it, as shown in the following diagram. Each of these components uses Ceph in their own way to store block devices and objects. The majority of cloud deployment based on OpenStack and Ceph use the Cinder, glance, and Swift integrations with Ceph. Keystone integration is used when you need an S3-compatible object storage on the Ceph backend. Nova integration allows boot from Ceph volume capabilities for your OpenStack instances. Setting up OpenStack The OpenStack setup and configuration is beyond the scope of this article; however, for ease of demonstration, we will use a virtual machine preinstalled with the OpenStack RDO Juno release. If you like, you can also use your own OpenStack environment and can perform Ceph integration. How to do it In this section, we will demonstrate setting up a preconfigured OpenStack environment using vagrant, and accessing it via CLI and GUI: Launch openstack-node1 using Vagrantfile. 
Make sure that you are on the host machine and are under the ceph-cookbook repository before bringing up openstack-node1 using vagrant: # cd ceph-cookbook # vagrant up openstack-node1 Once openstack-node1 is up, check the vagrant status and log in to the node: $ vagrant status openstack-node1 $ vagrant ssh openstack-node1 We assume that you have some knowledge of OpenStack and are aware of its operations. We will source the keystone_admin file, which has been placed under /root, and to do this, we need to switch to root: $ sudo su - $ source keystone_admin We will now run some native OpenStack commands to make sure that OpenStack is set up correctly. Please note that some of these commands do not show any information, since this is a fresh OpenStack environment and does not have instances or volumes created: # nova list # cinder list # glance image-list You can also log in to the OpenStack horizon web interface (https://192.168.1.111/dashboard) with the username as admin and password as vagrant. After logging in the Overview page opens: Configuring OpenStack as Ceph clients OpenStack nodes should be configured as Ceph clients in order to access the Ceph cluster. To do this, install Ceph packages on OpenStack nodes and make sure it can access the Ceph cluster. How to do it In this section, we are going to configure OpenStack as a Ceph client, which will be later used to configure cinder, glance, and nova: We will use ceph-node1 to install Ceph binaries on os-node1 using ceph-deploy. To do this, we should set up an ssh password-less login to os-node1. The root password is again the same (vagrant): $ vagrant ssh ceph-node1 $ sudo su - # ping os-node1 -c 1 # ssh-copy-id root@os-node1 Next, we will install Ceph packages to os-node1 using ceph-deploy: # cd /etc/ceph # ceph-deploy install os-node1 Push the Ceph configuration file, ceph.conf, from ceph-node1 to os-node1. This configuration file helps clients reach the Ceph monitor and OSD machines. Please note that you can also manually copy the ceph.conf file to os-node1 if you like: # ceph-deploy config push os-node1 Make sure that the ceph.conf file that we have pushed to os-node1 should have the permission of 644. Create Ceph pools for cinder, glance, and nova. You may use any available pool, but it's recommended that you create separate pools for OpenStack components: # ceph osd pool create images 128 # ceph osd pool create volumes 128 # ceph osd pool create vms 128 Set up client authentication by creating a new user for cinder and glance: # ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' # ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' Add the keyrings to os-node1 and change their ownership: # ceph auth get-or-create client.glance | ssh os-node1 sudo tee /etc/ceph/ceph.client.glance.keyring # ssh os-node1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring # ceph auth get-or-create client.cinder | ssh os-node1 sudo tee /etc/ceph/ceph.client.cinder.keyring # ssh os-node1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring The libvirt process requires accessing the Ceph cluster while attaching or detaching a block device from Cinder. 
We should create a temporary copy of the client.cinder key that will be needed for the cinder and nova configuration later in this article: # ceph auth get-key client.cinder | ssh os-node1 tee /etc/ceph/temp.client.cinder.key At this point, you can test the previous configuration by accessing the Ceph cluster from os-node1 using the client.glance and client.cinder Ceph users. Log in to os-node1 and run the following commands: $ vagrant ssh openstack-node1 $ sudo su - # cd /etc/ceph # ceph -s --name client.glance --keyring ceph.client.glance.keyring # ceph -s --name client.cinder --keyring ceph.client.cinder.keyring Finally, generate uuid, then create, define, and set the secret key to libvirt and remove temporary keys: Generate a uuid by using the following: # cd /etc/ceph # uuidgen Create a secret file and set this uuid number to it: cat > secret.xml <<EOF <secret ephemeral='no' private='no'>   <uuid>bb90381e-a4c5-4db7-b410-3154c4af486e</uuid>   <usage type='ceph'>     <name>client.cinder secret</name>   </usage> </secret> EOF Make sure that you use your own uuid generated for your environment./ Define the secret and keep the generated secret value safe. We would require this secret value in the next steps: # virsh secret-define --file secret.xml Set the secret value that was generated in the last step to virsh and delete temporary files. Deleting the temporary files is optional; it's done just to keep the system clean: # virsh secret-set-value --secret bb90381e-a4c5-4db7-b410-3154c4af486e --base64 $(cat temp.client.cinder.key) && rm temp.client.cinder.key secret.xml # virsh secret-list Configuring Glance for the Ceph backend We have completed the configuration required from the Ceph side. In this section, we will configure the OpenStack glance to use Ceph as a storage backend. 
How to do it This section talks about configuring the glance component of OpenStack to store virtual machine images on Ceph RBD: Log in to os-node1, which is our glance node, and edit /etc/glance/glance-api.conf for the following changes: Under the [DEFAULT] section, make sure that the following lines are present: default_store=rbd show_image_direct_url=True Execute the following command to verify entries: # cat /etc/glance/glance-api.conf | egrep -i "default_store|image_direct" Under the [glance_store] section, make sure that the following lines are present under RBD Store Options: stores = rbd rbd_store_ceph_conf=/etc/ceph/ceph.conf rbd_store_user=glance rbd_store_pool=images rbd_store_chunk_size=8 Execute the following command to verify the previous entries: # cat /etc/glance/glance-api.conf | egrep -v "#|default" | grep -i rbd Restart the OpenStack glance services: # service openstack-glance-api restart Source the keystone_admin file for OpenStack and list the glance images: # source /root/keystonerc_admin # glance image-list Download the cirros image from the Internet, which will later be stored in Ceph: # wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img Add a new glance image using the following command: # glance image-create --name cirros_image --is-public=true --disk-format=qcow2 --container-format=bare < cirros-0.3.1-x86_64-disk.img List the glance images using the following command; you will notice there are now two glance images: # glance image-list You can verify that the new image is stored in Ceph by querying the image ID in the Ceph images pool: # rados -p images ls --name client.glance --keyring /etc/ceph/ceph.client.glance.keyring | grep -i id Since we have configured glance to use Ceph for its default storage, all the glance images will now be stored in Ceph. You can also try creating images from the OpenStack horizon dashboard: Finally, we will try to launch an instance using the image that we have created earlier: # nova boot --flavor 1 --image b2d15e34-7712-4f1d-b48d-48b924e79b0c vm1 While you are adding new glance images or creating an instance from the glance image stored on Ceph, you can check the IO on the Ceph cluster by monitoring it using the # watch ceph -s command. Configuring Cinder for the Ceph backend The Cinder program of OpenStack provides block storage to virtual machines. In this section, we will configure OpenStack Cinder to use Ceph as a storage backend. OpenStack Cinder requires a driver to interact with the Ceph block device. On the OpenStack node, edit the /etc/cinder/cinder.conf configuration file by adding the code snippet given in the following section. How to do it In the last section, we learned to configure glance to use Ceph. 
In this section, we will learn to use the Ceph RBD with the Cinder service of OpenStack: Since in this demonstration we are not using multiple backend cinder configurations, comment the enabled_backends option from the /etc/cinder/cinder.conf file: Navigate to the Options defined in cinder.volume.drivers.rbd section of the /etc/cinder/cinder.conf file and add the following.(replace the secret uuid with your environments value): volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_pool = volumes rbd_user = cinder rbd_secret_uuid = bb90381e-a4c5-4db7-b410-3154c4af486e rbd_ceph_conf = /etc/ceph/ceph.conf rbd_flatten_volume_from_snapshot = false rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1 glance_api_version = 2 Execute the following command to verify the previous entries: # cat /etc/cinder/cinder.conf | egrep "rbd|rados|version" | grep -v "#" Restart the OpenStack cinder services: # service openstack-cinder-volume restart Source the keystone_admin files for OpenStack: # source /root/keystonerc_admin # cinder list To test this configuration, create your first cinder volume of 2 GB, which should now be created on your Ceph cluster: # cinder create --display-name ceph-volume01 --display-description "Cinder volume on CEPH storage" 2 Check the volume by listing the cinder and Ceph volumes pool: # cinder list # rados -p volumes --name client.cinder --keyring ceph.client.cinder.keyring ls | grep -i id Similarly, try creating another volume using the OpenStack Horizon dashboard. Configuring Nova to attach the Ceph RBD In order to attach the Ceph RBD to OpenStack instances, we should configure the nova component of OpenStack by adding the rbd user and uuid information that it needs to connect to the Ceph cluster. To do this, we need to edit /etc/nova/nova.conf on the OpenStack node and perform the steps that are given in the following section. How to do it The cinder service that we configured in the last section creates volumes on Ceph, however, to attach these volumes to OpenStack instances, we need to configure NOVA: Navigate to the Options defined in nova.virt.libvirt.volume section and add the following lines of code (replace the secret uuid with your environments value): rbd_user=cinder rbd_secret_uuid= bb90381e-a4c5-4db7-b410-3154c4af486e Restart the OpenStack nova services: # service openstack-nova-compute restart To test this configuration, we will attach the cinder volume to an OpenStack instance. List the instance and volumes to get the ID: # nova list # cinder list Attach the volume to the instance: # nova volume-attach 1cadffc0-58b0-43fd-acc4-33764a02a0a6 1337c866-6ff7-4a56-bfe5-b0b80abcb281 # cinder list You can now use this volume as a regular block disk from your OpenStack instance: Configuring Nova to boot the instance from the Ceph RBD In order to boot all OpenStack instances into Ceph, that is, for the boot-from-volume feature, we should configure an ephemeral backend for nova. To do this, edit /etc/nova/nova.conf on the OpenStack node and perform the changes shown next. 
How to do it This section deals with configuring NOVA to store entire virtual machine on the Ceph RBD: Navigate to the [libvirt] section and add the following: inject_partition=-2 images_type=rbd images_rbd_pool=vms images_rbd_ceph_conf=/etc/ceph/ceph.conf Verify your changes: # cat /etc/nova/nova.conf|egrep "rbd|partition" | grep -v "#" Restart the OpenStack nova services: # service openstack-nova-compute restart To boot a virtual machine in Ceph, the glance image format must be RAW. We will use the same cirros image that we downloaded earlier in this article and convert this image from the QCOW to RAW format (this is important). You can also use any other image, as long as it's in the RAW format: # qemu-img convert -f qcow2 -O raw cirros-0.3.1-x86_64-disk.img cirros-0.3.1-x86_64-disk.raw Create a glance image using a RAW image: # glance image-create --name cirros_raw_image --is-public=true --disk-format=raw --container-format=bare < cirros-0.3.1-x86_64-disk.raw To test the boot from the Ceph volume feature, create a bootable volume: # nova image-list # cinder create --image-id ff8d9729-5505-4d2a-94ad-7154c6085c97 --display-name cirros-ceph-boot-volume 1 List cinder volumes to check if the bootable field is true: # cinder list Now, we have a bootable volume, which is stored on Ceph, so let's launch an instance with this volume: # nova boot --flavor 1 --block_device_mapping vda=fd56314b-e19b-4129-af77-e6adf229c536::0 --image 964bd077-7b43-46eb-8fe1-cd979a3370df vm2_on_ceph --block_device_mapping vda = <cinder bootable volume id >--image = <Glance image associated with the bootable volume> Finally, check the instance status: # nova list At this point, we have an instance running from a Ceph volume. Try to log in to the instance from the horizon dashboard: Summary In this article, we have covered the various storage formats supported by Ceph in detail and how they were assigned to other physical or virtual servers. Resources for Article:   Further resources on this subject: Ceph Instant Deployment [article] GNU Octave: Data Analysis Examples [article] Interacting with GNU Octave: Operators [article]

Classes and Instances of Ember Object Model

Packt
05 Feb 2016
12 min read
In this article by Erik Hanchett, author of the book Ember.js cookbook, Ember.js is an open source JavaScript framework that will make you more productive. It uses common idioms and practices, making it simple to create amazing single-page applications. It also let's you create code in a modular way using the latest JavaScript features. Not only that, it also has a great set of APIs in order to get any task done. The Ember.js community is welcoming newcomers and is ready to help you when required. (For more resources related to this topic, see here.) Working with classes and instances Creating and extending classes is a major feature of the Ember object model. In this recipe, we'll take a look at how creating and extending objects works. How to do it Let's begin by creating a very simple Ember class using extend(), as follows: const Light = Ember.Object.extend({ isOn: false }); This defines a new Light class with a isOn property. Light inherits the properties and behavior from the Ember object such as initializers, mixins, and computed properties. Ember Twiddle Tip At some point of time, you might need to test out small snippets of the Ember code. An easy way to do this is to use a website called Ember Twiddle. From that website, you can create an Ember application and run it in the browser as if you were using Ember CLI. You can even save and share it. It has similar tools like JSFiddle; however, only for Ember. Check it out at http://ember-twiddle.com. Once you have defined a class you'll need to be able to create an instance of it. You can do this by using the create() method. We'll go ahead and create an instance of Light. constbulb = Light.create(); Accessing properties within the bulb instance We can access the properties of the bulb object using the set and get accessor methods. Let's go ahead and get the isOn property of the Light class, as follows: console.log(bulb.get('isOn')); The preceding code will get the isOn property from the bulb instance. To change the isOn property, we can use the set accessor method: bulb.set('isOn', true) The isOn property will now be set to true instead of false. Initializing the Ember object The init method is invoked whenever a new instance is created. This is a great place to put in any code that you may need for the new instance. In our example, we'll go ahead and add an alert message that displays the default setting for the isOn property: const Light = Ember.Object.extend({ init(){ alert('The isON property is defaulted to ' + this.get('isOn')); }, isOn: false }); As soon as the Light.create line of code is executed, the instance will be created and this message will pop up on the screen. The isON property is defaulted to false. Subclass Be aware that you can create subclasses of your objects in Ember. You can override methods and access the parent class by using the _super() keyword method. This is done by creating a new object that uses the Ember extend method on the parent class. Another important thing to realize is that if you're subclassing a framework class such as Ember.Component and you override the init method, you'll need to make sure that you call this._super(). If not, the component may not work properly. Reopening classes At anytime, you can reopen a class and define new properties or methods in it. For this, use the reopen method. In our previous example, we had an isON property. 
Let's reopen the same class and add a color property, as follows: To add the color property, we need to use the reopen() method: Light.reopen({ color: 'yellow' }); If required, you can add static methods or properties using reopenClass, as follows: Light.reopen({ wattage: 40 }); You can now access the static property: Light.wattage How it works In the preceding examples, we have created an Ember object using extend. This tells Ember to create a new Ember class. The extend method uses inheritance in the Ember.js framework. The Light object inherits all the methods and bindings of the Ember object. The create method also inherits from the Ember object class and returns a new instance of this class. The bulb object is the new instance of the Ember object that we created. There's more To use the previous examples, we can create our own module and have it imported to our project. To do this, create a new MyObject.js file in the app folder, as follows: // app/myObject.js import Ember from 'ember'; export default function() { const Light = Ember.Object.extend({ init(){ alert('The isON property is defaulted to ' + this.get('isOn')); }, isOn: false }); Light.reopen({ color: 'yellow' }); Light.reopenClass({ wattage: 80 }); const bulb = Light.create(); console.log(bulb.get('color')); console.log(Light.wattage); } This is the module that we can now import to any file of our Ember application. In the app folder, edit the app.js file. You'll need to add the following line at the top of the file: // app/app.js import myObject from './myObject'; At the bottom, before the export, add the following line: myObject(); This will execute the myObject function that we created in the myObject.js file. After running the Ember server, you'll see the isOn property defaulted to the false pop-up message. Working with computed properties In this recipe, we'll take a look at the computed properties and how they can be used to display data, even if that data changes as the application is running. How to do it Let's create a new Ember.Object and add a computed property to it, as shown in the following: Begin by creating a new description computed property. This property will reflect the status of isOn and color properties: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }) }); We can now create a new Light object and get the computed property description: const bulb = Light.create(); bulb.get('description'); //The yellow light is set to false The preceding example creates a computed property that depends on the isOn and color properties. When the description function is called, it returns a string describing the state of the light. Computed properties will observe changes and dynamically update whenever they occur. To see this in action, we can change the preceding example and set the isOn property to false. Use the following code to accomplish this: bulb.set('isOn', true); bulb.get('description') //The yellow light is set to true The description has been automatically updated and will now display that the yellow light is set to true. Chaining the Light object Ember provides a nice feature that allows computed properties to be present in other computed properties. 
In the previous example, we created a description property that outputted some basic information about the Light object, as follows: Let's add another property that gives a full description: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), }); The fullDescription function returns a string that concatenates the output from description with a new string that displays the age: const bulb = Light.create({age:22}); bulb.get('fullDescription'); //The yellow light is set to false and the age is 22 In this example, during instantiation of the Light object, we set the age to 22. We can overwrite any property if required. Alias The Ember.computed.alias method allows us to create a property that is an alias for another property or object. Any call to get or set will behave as if the changes were made to the original property, as shown in the following: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), aliasDescription: Ember.computed.alias('fullDescription') }); const bulb = Light.create({age: 22}); bulb.get('aliasDescription'); //The yellow light is set to false and the age is 22. The aliasDescription will display the same text as fullDescription since it's just an alias of this object. If we made any changes later to any properties in the Light object, the alias would also observe these changes and be computed properly. How it works Computed properties are built on top of the observer pattern. Whenever an observation shows a state change, it recomputes the output. If no changes occur, then the result is cached. In other words, the computed properties are functions that get updated whenever any of their dependent value changes. You can use it in the same way that you would use a static property. They are common and useful throughout Ember and it's codebase. Keep in mind that a computed property will only update if it is in a template or function that is being used. If the function or template is not being called, then nothing will occur. This will help with performance. Working with Ember observers in Ember.js Observers are fundamental to the Ember object model. In the next recipe, we'll take our light example and add in an observer and see how it operates. How to do it To begin, we'll add a new isOnChanged observer. This will only trigger when the isOn property changes: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn') }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), desc: Ember.computed.alias('description'), isOnChanged: Ember.observer('isOn',function() { console.log('isOn value changed') }) }); const bulb = Light.create({age: 22}); bulb.set('isOn',true); //console logs isOn value changed The Ember.observer isOnChanged monitors the isOn property. 
If any changes occur to this property, isOnChanged is invoked. Computed Properties vs Observers At first glance, it might seem that observers are the same as computed properties. In fact, they are very different. Computed properties can use the get and set methods and can be used in templates. Observers, on the other hand, just monitor property changes. They can neither be used in templates nor be accessed like properties. They also don't return any values. With that said, be careful not to overuse observers. For many instances, a computed property is the most appropriate solution. Also, if required, you can add multiple properties to the observer. Just use the following command: Light.reopen({ isAnythingChanged: Ember.observer('isOn','color',function() { console.log('isOn or color value changed') }) }); const bulb = Light.create({age: 22}); bulb.set('isOn',true); // console logs isOn or color value changed bulb.set('color','blue'); // console logs isOn or color value changed The isAnything observer is invoked whenever the isOn or color properties changes. The observer will fire twice as each property has changed. Synchronous issues with the Light object and observers It's very easy to get observers out of sync. For example, if a property that it observes changes, it will be invoked as expected. After being invoked, it might manipulate a property that hasn't been updated yet. This can cause synchronization issues as everything happens at the same time, as follows: The following example shows this behavior: Light.reopen({ checkIsOn: Ember.observer('isOn', function() { console.log(this.get('fullDescription')); }) }); const bulb = Light.create({age: 22}); bulb.set('isOn', true); When isOn is changed, it's not clear whether fullDescription, a computed property, has been updated yet or not. As observers work synchronously, it's difficult to tell what has been fired and changed. This can lead to unexpected behavior. To counter this, it's best to use the Ember.run.once method. This method is a part of the Ember run loop, which is Ember's way of managing how the code is executed. Reopen the Light object and you can see the following occurring: Light.reopen({ checkIsOn: Ember.observer('isOn','color', function() { Ember.run.once(this,'checkChanged'); }), checkChanged: Ember.observer('description',function() { console.log(this.get('description')); }) }); const bulb = Light.create({age: 22}); bulb.set('isOn', true); bulb.set('color', 'blue'); The checkIsOn observer calls the checkChanged observer using Ember.run.once. This method is only run once per run loop. Normally, checkChanged would be fired twice; however, since it's be called using Ember.run.once, it only outputs once. How it works Ember observers are mixins from the Ember.Observable class. They work by monitoring property changes. When any change occurs, they are triggered. Keep in mind that these are not the same as computed properties and cannot be used in templates or with getters or setters. Summary In this article you learned classes and instances. You also learned computed properties and how they can be used to display data. Resources for Article: Further resources on this subject: Introducing the Ember.JS framework [article] Building Reusable Components [article] Using JavaScript with HTML [article]

Implementing OpenStack Networking and Security

Packt
05 Feb 2016
8 min read
In this article written by Omar Khedher, author of Mastering OpenStack, we will explore the various aspects of networking and security in OpenStack. A major part of the article is focused on presenting the different security layouts by using Neutron. In this article, we will discuss the following topics: Understanding how Neutron facilitates the network management in OpenStack Using security groups to enforce a security layer for instances The story of an API By analogy, the OpenStack compute service provides an API that provides a virtual server abstraction to imitate the compute resources. The network service and compute service perform in the same way, where we come to a new generation of virtualization in network resources such as network, subnet, and ports, and can be continued in the following schema: Network: As an abstraction for the layer 2 network segmentation that is similar to the VLANs Subnet: This is the associated abstraction layer for a block of IPv4/IPv6 addressing per network Port: This is the associated abstraction layer that is used to attach a virtual NIC of an instance to a network Router: This is an abstraction for layer 3 that is used to perform routing between the networks Floating IP: This is used to perform static public IP mapping from external to internal networks Security groups Imagine a scenario where you have to apply certain traffic management rules for a dozen compute node instances. Therefore, assigning a certain set of rules for a specific group of nodes will be much easier instead of going through each node at a time. Security groups enclose all the aspects of the rules that are applied to the ingoing and outgoing traffic to instances, which includes the following: The source and receiver, which will allow or deny traffic to instances from either the internal OpenStack IP addresses or from the rest of the world Protocols to which the rule will apply, such as TCP, UDP, and ICMP Egress/ingress traffic management to a neutron port In this way, OpenStack offers an additional security layer to the firewall rules that are available on the compute instance. The purpose is to manage traffic to several compute instances from one security group. You should bear in mind that the networking security groups are more granular-traffic-filtering-aware than the compute firewall rules since they are applied on the basis of the port instead of the instance. Eventually, the creation of the network security rules can be done in different ways. For more information on how iptables works on Linux, https://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-iptables.html is a very useful reference. Manage the security groups using Horizon From Horizon, in the Access and Security section, you can add a security group and name it, for example PacktPub_SG. Then, a simple click on Edit Rules will do the trick. The following example illustrates how this network security function can help you understand how traffic—both in ingress/egress—can be controlled: The previous security group contains four rules. The first and the second lines are rules to open all the outgoing traffic for IPv4 and IPv6 respectively. The third line allows the inbound traffic by opening the ICMP port, while the last one opens port 22 for SSH for the inbound interface. You might notice the presence of the CIDR fields, which is essential to know. Based on CIDR, you allow or restrict traffic over the specified port. 
For example, using CIDR of 0.0.0.0/0 will allow traffic for all the IP addresses over the port that was mentioned in your rule. For example, a CIDR with 32.32.15.5/32 will restrict traffic only to a single host with an IP of 32.32.15.5. If you would like to specify a range of IP in the same subnet, you can use the CIDR notation, 32.32.15.1/24, which will restrict traffic to the IP addresses starting from 32.32.15.*; the other IP addresses will not stick to the latter rule. The naming of the security group must be done with a unique name per project. Manage the security groups using the Neutron CLI The security groups also can be managed by using the Python Neutron command-line interface. Wherever you run the Neutron daemon, you can list, for example, all the present security groups from the command line in the following way: # neutron security-group-list The preceding command yields the following output: To demonstrate how the PacktPub_SG security group rules that were illustrated previously are implemented on the host, we can add a new rule that allows the ingress connections to ping (ICMP) and establish a secure shell connection (SSH) in the following way: # neutron security-group-rule-create --protocol icmp –-direction ingress PacktPub-SG The preceding command produces the following result: The following command line will add a new rule that allows ingress connections to establish a secure shell connection (SSH): # neutron security-group-rule-create --protocol tcp –-port-range-max 22 –-direction ingress PacktPub-SG The preceding command gives the following output: By default, if none of the security groups have been created, the port of instances will be associated within the default security group for any project where all the outbound traffic will be allowed and blocked in the inbound side. You may conclude from the output of the previous command line that it lists the rules that are associated with the current project ID and not by the security groups. Managing the security groups using the Nova CLI The nova command line also does the same trick if you intend to perform the basic security group's control, as follows: $ nova secgroup-list-rules default Since we are setting Neutron as our network service controller, we will proceed by using the networking security groups, which reveals additional traffic control features. If you are still using the compute API to manage the security groups, you can always refer to the nova.conf file for each compute node to set security_group_api = neutron. To associate the security groups to certain running instances, it might possible to use the nova client in the following way: # nova add-secgroup test-vm1 PacktPub_SG The following code illustrates the new association of the packtPub_SG security group with the test-vm1 instance: # nova show test-vm1   The following is the result of the preceding command: One of the best practices to troubleshoot connection issues for the running instances is to start checking the iptables running in the compute node. Eventually, any rule that was added to a security group will be applied to the iptables chains in the compute node. We can check the updated iptables chains in the compute host after applying the security group rules by using the following command: # iptables-save The preceding command yields the following output: The highlighted rules describe the direction of the packet and the rule that is matched. 
For example, the inbound traffic to the f7fabcce-f interface will be processed by the neutron-openvswi-if7fabcce-f chain. It is important to know how iptables rules work in Linux. Updating the security groups will also perform changes in the iptable chains. Remember that chains are a set of rules that determine how packets should be filtered. Network packets traverse rules in chains, and it is possible to jump to another chain. You can find different chains per compute host, depending on the network filter setup. If you have already created your own security groups, a series of iptables and chains are implemented on every compute node that hosts the instance that is associated within the applied corresponding security group. The following example demonstrates a sample update in the current iptables of a compute node that runs instances within the 10.10.10.0/24 subnet and assigns 10.10.10.2 as a default gateway for the former instances IP ranges: The last rule that was shown in the preceding screenshot dictates how the flow of the traffic leaving the f7fabcce-finterface must be sourced from 10.10.10.2/32 and the FA:16:3E:7E:79:64 MAC address. The former rule is useful when you wish to prevent an instance from issuing a MAC/IP address spoofing. It is possible to test ping and SSH to the instance via the router namespace in the following way: # ip netns exec router qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ping 10.10.10.2 The preceding command provides the following output: The testing of an SSH to the instance can be done by using the sane router namespace, as follows: # ip netns exec router qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ssh cirros@10.10.10.2 The preceding command produces the following output: Web servers DMZ example In the current example, we will show a simple setup of a security group that might be applied to a pool of web servers that are running in the Compute01, Compute02 and Compute03 nodes. We will allow inbound traffic from the Internet to access WebServer01, AppServer01, and DNSServer01 over HTTP and HTTPS. This is depicted in the following diagram: Let's see how we can restrict the traffic ingress/egress via Neutron API: $ neutron security-group-create DMZ_Zone --description "allow web traffic from the Internet" $neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 DMZ_Zone --remote-ip-prefix 0.0.0.0/0 $neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 443 --port_range_max 443 DMZ_Zone --remote-ip-prefix 0.0.0.0/0 $neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 3306 --port_range_max 53 DMZ_Zone --remote-ip-prefix 0.0.0.0/0 From Horizon, we can see the following security rules group added: To conclude, we have looked at presenting different security layouts by using Neutron. At this point, you should be comfortable with security groups and their use cases. Further your OpenStack knowledge by designing, deploying, and managing a scalable OpenStack infrastructure with Mastering OpenStack
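To round out the DMZ example, keep in mind that a security group only takes effect once it is associated with a Neutron port. The commands below sketch both ways of doing this; the image ID, port ID, and instance name are placeholders that you must replace with values from your own environment:
## Attach the group at boot time through Nova:
# nova boot --flavor 2 --image <web-image-id> --security-groups DMZ_Zone WebServer01
## Or add the group to an existing Neutron port and verify it:
# neutron port-update <port-id> --security-group DMZ_Zone
# neutron port-show <port-id>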

Building custom Heat resources

John Belamaric
05 Feb 2016
9 min read
OpenStack Heat orchestration makes it easy to build templates for application deployment and auto-scaling. The built-in resource types offer access to many of the existing OpenStack services. However, you may need to integrate with an internal CMDB or service registry, or configure some other services outside of OpenStack, as you launch your application. In this post I will explain how you can add your own custom Heat resources to extend Heat orchestration to meet your needs. As example code, I’ll use the Heat resources we developed at Infoblox, which can be found at http://github.com/infobloxopen/heat-infoblox. In our use case, we have an existing management interface for our DNS services, called the grid. In order to scale up the DNS service, we need to orchestrate the addition of members to our grid by making RESTful APIs calls to the grid master. We built custom Heat resource types to set up the grid to properly configure the new member to serve DNS. These custom resources perform the following operations: Tell the grid master about the new member that will be joining. Configure the networking for one or more interfaces on the member. Configure the licenses for each member. Enable the DNS service on the new member. Configure the “name server group” for the member – that is, configure which zones the member will serve. With these resources, we can scale up the DNS service for particular sets of domains with a simple Heat CLI command, or even auto-scale based upon the load seen on the instances. We use two different resource types for this, with the Infoblox::Grid::Member handling 1-4, and Infoblox::Grid::NameServerGroupMember handling 5. So, what do we need to do to build a Heat resource? First, we need to understand the main features of a resource. From a developer standpoint, each resource consists of a property schema, an attribute schema, a set of lifecycle methods, and a resource identifier. It is important to think about whatever actions you need to take in terms of a resource that can be created, updated, or deleted. That is, the way Heat works is to manage resources; sometimes configuration doesn’t fit neatly into that concept, but you’ll need to find a way to define resources that make sense even so. Properties are the inputs to the resource creation and update processes, and are specified by the user in the template when utilizing the resource. Attributes, on the other hand, are the run-time data values associated with an existing resource. For example, the Infoblox::Grid::Member resource type, defined in the heat_infoblox/resources/grid_member.py file, has properties such as name and port, but its attributes include the user data to inject during Nova boot. That user data is actually generated on the fly by the resource when it is needed. The lifecycle methods are called by Heat to create, update, delete, or validate the resource. This is where all the critical logic resides. The resource identifier is generated by the create method, and is used as the input for the delete method or other methods that operate on an existing resource. Thus, it is critical that the resource id value provides a unique reference to the resource. When building a new resource type, the first thing to do is understand what the critical properties are that the user will need to set. 
These are defined in the properties_schema (this snippet is from the Infoblox::Grid::Member code in the stable/juno branch of the heat-infoblox project; there are some small differences in more recent versions of Heat):

properties_schema = {
    NAME: properties.Schema(
        properties.Schema.STRING,
        _('Member name.'),
    ),
    MODEL: properties.Schema(
        properties.Schema.STRING,
        _('Infoblox model name.'),
        constraints=[
            constraints.AllowedValues(ALLOWED_MODELS)
        ]
    ),
    LICENSES: properties.Schema(
        properties.Schema.LIST,
        _('List of licenses to pre-provision.'),
        schema=properties.Schema(
            properties.Schema.STRING
        ),
        constraints=[
            constraints.AllowedValues(ALLOWED_LICENSES_PRE_PROVISION)
        ]
    ),
    …etc…

Each property in turn has its own schema that describes its type, any constraints on the input values, whether the property is required or optional, and a default value if appropriate. In many cases, the property itself may be another dictionary with many different additional options that each in turn have their own schema. Each property or sub-property should also include a clear description. These descriptions are shown to the user in the newer versions of Horizon, and are critical to making the resource type useful.

Next, you'll need to understand what attributes are needed, if any. Attributes aren't always necessary, but may be needed if the new resource is to be consumed as input to subsequent resources. For example, the Infoblox::Grid::Member resource has a user_data attribute, which is fed into the OS::Nova::Server user_data property when spinning up the Nova instance for the new member. Like properties, attributes are specified with a schema:

attributes_schema = {
    USER_DATA: attributes.Schema(
        _('User data for the Nova boot process.')),
    NAME_ATTR: attributes.Schema(
        _('The member name.'))
}

In this case, however, the schema is simpler. Since it is essentially just documenting the outputs for use by template authors, there is no need to specify constraints, defaults, or even data types. Like the properties example, the code snippet above is from the Juno version of heat-infoblox. The newer version allows you to specify a type, though it is still not required.

Finally, you need to specify the lifecycle methods. The handle_create and handle_delete methods are critical and must be implemented. There are a number of other handler methods that can be optionally implemented: handle_update, handle_suspend, and handle_resume are the most commonly implemented. If one of these operations happens asynchronously (such as launching a Nova instance), then you can utilize the check_<action>_complete method, which is called repeatedly in a loop until it returns True, after the handle_<action> method is called.

Let's take a closer look at the handle_create method defined by Infoblox::Grid::Member.
Here is the complete code of this method:

def handle_create(self):
    mgmt = self._make_port_network_settings(self.MGMT_PORT)
    lan1 = self._make_port_network_settings(self.LAN1_PORT)
    lan2 = self._make_port_network_settings(self.LAN2_PORT)

    name = self.properties[self.NAME]
    nat = self.properties[self.NAT_IP]

    self.infoblox().create_member(name=name, mgmt=mgmt, lan1=lan1,
                                  lan2=lan2, nat_ip=nat)
    self.infoblox().pre_provision_member(
        name,
        hwmodel=self.properties[self.MODEL], hwtype='IB-VNIOS',
        licenses=self.properties[self.LICENSES])

    dns = self.properties[self.DNS_SETTINGS]
    if dns:
        self.infoblox().configure_member_dns(
            name,
            enable_dns=dns['enable']
        )

    self.resource_id_set(name)

Breaking this down, we see that the first thing it does is convert the properties into a format understood by the Infoblox RESTful API:

    mgmt = self._make_port_network_settings(self.MGMT_PORT)
    lan1 = self._make_port_network_settings(self.LAN1_PORT)
    lan2 = self._make_port_network_settings(self.LAN2_PORT)

The _make_port_network_settings method will actually call out to the Neutron API to gather details about the port, and return a JSON structure that represents the configuration of those ports:

def _make_port_network_settings(self, port_name):
    if self.properties[port_name] is None:
        return None

    port = self.client('neutron').show_port(
        self.properties[port_name])['port']

    if port is None:
        return None

    ipv4 = None
    ipv6 = None
    for ip in port['fixed_ips']:
        if ':' in ip['ip_address'] and ipv6 is None:
            ipv6 = self._make_ipv6_settings(ip)
        else:
            if ipv4 is None:
                ipv4 = self._make_network_settings(ip)
    return {'ipv4': ipv4, 'ipv6': ipv6}

After that, it calls the methods that interface with the Infoblox API, passing in the properly formatted data that was created based upon the resource properties:

    name = self.properties[self.NAME]
    nat = self.properties[self.NAT_IP]

    self.infoblox().create_member(name=name, mgmt=mgmt, lan1=lan1,
                                  lan2=lan2, nat_ip=nat)
    self.infoblox().pre_provision_member(
        name,
        hwmodel=self.properties[self.MODEL], hwtype='IB-VNIOS',
        licenses=self.properties[self.LICENSES])

    dns = self.properties[self.DNS_SETTINGS]
    if dns:
        self.infoblox().configure_member_dns(
            name,
            enable_dns=dns['enable']
        )

Finally, it must set the resource_id value for the resource. This must be unique to the type of resource, so that the handle_delete method will know the appropriate resource to act upon. In our case, the name is sufficient, so we use that:

    self.resource_id_set(name)

Once a resource is created, the template may want to access the attributes we defined for that resource. To make these accessible, you just need to override the _resolve_attribute method, which takes the name of the attribute to resolve:

def _resolve_attribute(self, name):
    member_name = self.resource_id
    member = self.infoblox().get_member(
        member_name,
        return_fields=['host_name', 'vip_setting', 'ipv6_setting'])[0]
    token = self._get_member_tokens(member)
    LOG.debug("MEMBER for %s = %s" % (name, member))
    if name == self.USER_DATA:
        return self._make_user_data(member, token)
    if name == self.NAME_ATTR:
        return member['host_name']
    return None

This is called as an instance method, so the resource_id is available in the object itself. In our case, we call the Infoblox RESTful API to query for the details about the member referenced in the resource_id, and then use that data to generate the attribute requested. That is really all there is to a Heat resource.
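One step not shown in the excerpts above is registering the plugin with the Heat engine. The usual convention for Heat resource plugins is a module-level resource_mapping() function in a module placed under one of the engine's plugin_dirs. The sketch below is illustrative rather than a copy of the heat-infoblox code, and the GridMember class name is an assumption:

from heat.engine import resource


class GridMember(resource.Resource):
    # properties_schema, attributes_schema, handle_create, handle_delete,
    # and _resolve_attribute would be defined here, as shown above.
    pass


def resource_mapping():
    # Heat scans each directory listed in plugin_dirs for modules that
    # expose resource_mapping() and registers every template type name
    # against its implementing class.
    return {'Infoblox::Grid::Member': GridMember}

With plugin_dirs in heat.conf pointing at the directory containing such a module, Infoblox::Grid::Member becomes available as a resource type in templates.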
The short version is: define the resource ID, attributes, and properties, then use the properties in RESTful API calls within the handle_* and _resolve_attribute methods to manage your custom resource.

About the author

John Belamaric is a software and systems architect with nearly 20 years of software design and development experience. His current focus is on cloud network automation. He is a key architect of the Infoblox Cloud products, concentrating on OpenStack integration and development. He brings to this his experience as the lead architect for the Infoblox Network Automation product line, along with a wealth of networking, network management, software, and product design knowledge. He is a contributor to both the OpenStack Neutron and Designate projects. He lives in Bethesda, Maryland with his wife Robin and two children, Owen and Audrey.

The Alfresco Platform

Packt
04 Feb 2016
8 min read
In this article by Jeffrey T. Potts and Snehal K Shah, authors of Alfresco Developer Guide, Second Edition, we will discuss the Alfresco architecture. (For more resources related to this topic, see here.)

Alfresco architecture

Many of Alfresco's competitors, particularly in the closed-source space, have sprawling footprints composed of multiple, sometimes competing, technologies that have been acquired and integrated over time. Some have undergone massive infrastructure overhauls over the years, resulting in bizarre vestigial tails of sorts. Luckily, Alfresco doesn't suffer from these traits. On the contrary, Alfresco's architecture is advantageous for the following reasons:

It is relatively straightforward
It is built with state-of-the-art frameworks and open source components
It supports several important content management and related standards

Let's look at each of these characteristics, starting with a high-level look at the Alfresco architecture.

High-level architecture

The following diagram shows Alfresco's high-level architecture. The important takeaways at this point are as follows:

There are many ways to get content into or out of a repository, whether it's via the protocols on the left-hand side of the diagram or the APIs on the right-hand side.
Alfresco runs as a web application within a servlet container. In the current release, the web client runs in the same process as the content repository itself.
Customizations and extensions run as part of the Alfresco web application. An extension mechanism separates customizations from the core product to keep the path clear for future upgrades.
Metadata resides in a relational database, while content files and Solr/Lucene indexes reside on the filesystem. The diagram shows the content residing on the same physical filesystem as Alfresco, but other types of file storage could be used as well.
The WCM Virtualization Server is an instance of Tomcat with Alfresco configurations and JAR files. The Virtualization Server is used to serve live previews of the website as it is being worked on. It can run on the same physical machine as Alfresco, or it can be split out onto a separate node.

Add-ons

Add-ons are pieces of functionality not found in the core Alfresco distribution. If you are working with a binary distribution, this means that you'll have additional files to download and install on top of the base Alfresco installation. Add-ons are provided by Alfresco, third-party software vendors, and members of the Alfresco community, such as partners and customers. Alfresco makes several add-on modules available for the taking, such as Records Management and Facebook integration. Kofax, a software vendor, provides add-on software that integrates Alfresco with Kofax imaging solutions. Members of the Alfresco community create and share add-on modules via the Alfresco Forge, a website that Alfresco has set up for this purpose. However, a majority of what is available on it are language packs that are used to localize the Alfresco web client.

Open source components

One of the reasons Alfresco has been able to create a viable offering so quickly is because they didn't start from scratch. Alfresco's engineers assembled the product from many finer-grained open source components. Instead of reinventing the wheel, they used proven components. This saved them time, of course, but it also resulted in a more robust, standards-based product. It also eases the transition for people who are new to the platform.
If a developer already knows JavaServer Faces or Spring, for example, many of the customization concepts are going to be familiar to them. The following list covers some of the major open source components used to build Alfresco and what each one does in the product:

Apache Lucene (http://lucene.apache.org/): Full-text and metadata search.
Apache Solr (http://lucene.apache.org/solr/): Alfresco allows the use of the Solr index instead of Lucene.
Hibernate (http://www.hibernate.org/) and iBatis (http://ibatis.apache.org/): Database persistence. Both are supported by Alfresco.
Apache MyFaces (http://myfaces.apache.org/): JavaServer Faces components in the web client.
FreeMarker (http://freemarker.org/): Web Script Framework views, custom views in the web client, web client dashlets, and e-mail templates.
Mozilla Rhino JavaScript Engine (http://www.mozilla.org/rhino/): Web Script Framework controllers, server-side JavaScript, and actions.
OpenSymphony Quartz (http://www.opensymphony.com/quartz/): Scheduling of asynchronous processes.
Spring ACEGI (http://www.acegisecurity.org/): Security (authorization), roles, and permissions.
Apache Axis (http://ws.apache.org/axis/): Web services.
OpenOffice.org (http://www.openoffice.org/): Conversion of office documents to the PDF format.
Apache FOP (http://xmlgraphics.apache.org/fop/): Transformation of XSL:FO to the PDF format.
Apache POI (http://poi.apache.org/): Metadata extraction from Microsoft Office files.
JBoss jBPM (http://www.jboss.com/products/jbpm): Advanced workflow.
Activiti (http://activiti.org/): Advanced workflow.
ImageMagick (http://www.imagemagick.org): Image file manipulation.
Chiba (http://chiba.sourceforge.net/): Web form generation based on XForms.
Spring Surf (http://www.springsurf.org/): Used by Alfresco Share.

Developers looking to contribute significant product enhancements to Alfresco, or those making major, deep customizations to the product, may require experience with a particular component, depending on exactly what they are trying to do. Everyone else will be able to customize and extend Alfresco using basic Java and web application development skills.

Major standards and protocols supported

Software vendors love buzzwords. As new acronyms climb the hype cycle, vendors scramble to figure out how they can at least appear to support the standard or protocol so that prospective clients can check that box on the RFP (don't even get me started on RFPs). Commercial open source vendors are still software vendors and are thus no less guilty of this practice. But because open source software is developed in the open by a community of developers, its compliance with standards tends to be more genuine. It makes more sense for an open source project to implement a standard than to go off in some new direction, because it saves time, promotes interoperability with other open source projects, and stays true to what open source is all about: freedom and choice. Here, then, are the significant standards and protocols that Alfresco supports:

FTP: Content can be contributed to the repository via FTP. Secure FTP is not yet supported.
WebDAV: WebDAV is an HTTP-based protocol that's commonly supported by content management vendors and is one way to make the repository look like a filesystem.
CIFS: CIFS lets the repository be mounted as a shared drive by other machines. As opposed to WebDAV, systems (and people) can't tell the difference between an Alfresco repository mounted as a shared drive through CIFS and a traditional file server.
The JCR API (JSR-170): JCR is a Java API that works with content repositories, such as Alfresco. Alfresco is a JCR-compliant repository. There are two levels of JCR compliance; Alfresco is level-1-compliant and almost level-2-compliant.
The Portlet API (JSR-168): The Web Script Framework lets you define a RESTful API for the repository. Web Scripts can return XML, HTML, JSON, and JSR-168 portlets. In the current release, this requires the portal and Alfresco to run in the same JVM, but this restriction may go away in the near future.
SOAP: The Alfresco Web Services API uses SOAP-based web services.
OpenSearch (http://www.opensearch.org): Alfresco repositories can be configured as an OpenSearch data source, which allows Alfresco to participate in federated search queries. OpenSearch queries can be executed from the web client as well. This means that if your organization has several repositories that are OpenSearch-compliant (including non-Alfresco repositories), they can be searched from within the web client.
XForms and XML Schema: Web forms are defined using XML Schema. Not all XForms widgets are supported.
XSLT and XSL:FO: Web form data can be transformed using XSL 1.0.
LDAP: Alfresco can authenticate against an LDAP directory or a Microsoft Active Directory server.

Summary

In this article, we took a look at how Alfresco is assembled from open source components, runs as a web application within an application server, and exposes the repository through many different protocols and APIs. Alfresco can also be customized. We explored how some types of customization are very basic (more configuration than customization) and can be performed by end users through the web client. Others are more advanced and require coding. Advanced customization is the subject of this book.

Resources for Article:

Further resources on this subject:
Core Ephesoft Features [article]
Introducing Liferay for Your Intranet [article]
Alfresco Web Scripts [article]


Introduction to Akka

Packt
04 Feb 2016
15 min read
In this article written by Prasanna Kumar Sathyanarayanan and Suraj Atreya (the authors of this book, Reactive Programming with Scala and Akka), we will see what the Akka framework is all about in detail. Akka is a framework for writing distributed, asynchronous, concurrent, and scalable applications. Akka actors react to the messages that are sent to them. Actors can be viewed as passive components until an external message triggers them. They are a higher-level abstraction and provide a neater way to handle concurrency than a traditional multithreaded application. (For more resources related to this topic, see here.)

One example of using Akka actors is handling request-response in a web server. A web server typically handles millions of requests per second, and these requests must be handled concurrently to cater to the users. One way is to have a pool of threads, let these threads accept the requests, and hand them off to the actual worker threads. In this case, the thread pool has to be managed by the application developer, including error handling, thread locking, and synchronization, and the application logic often ends up intertwined with the business logic. In the case of the web server, the thread pool has to be handled manually. With actors, by contrast, the thread pool is managed by the Akka engine and actors receive messages asynchronously. Each request can be thought of as a message to the actor, and the actor reacts to the message. Actors are very lightweight event-driven processes; several million actors can exist within a GB of heap memory.

Actor mailbox

Every actor has a mailbox. Since actors communicate exclusively using messages, every actor maintains a queue of messages called a mailbox. Therefore, an actor will read the messages in the order in which they were sent.

Actor systems

An ActorSystem is heavyweight, and creating one is the first step before creating actors. During initialization, it allocates 1 to N threads per ActorSystem. Before creating an actor, an actor system must be created, and this process involves the creation of a hierarchical structure of actors. Since an actor can create other actors, the handling of failure is also a vital part of the Akka engine, which handles it gracefully. This design helps to take action if an actor dies or hits an unexpected exception for some reason. When an actor system is first created, three actors are created, as shown in the figure:

Root guardian (/): The root guardian is the grandparent of all the actors. This actor supervises all the actors, including the user actors. Since the root guardian is the supervisor of all the actors underneath it, there is no supervisor for the root guardian itself. The root guardian is not a real actor, and it will terminate the actor system if it receives any throwables from its children.
User (/user): The user guardian actor supervises all the actors that are created by users. If the user guardian actor terminates, all its children will also terminate.
System guardian (/system): This is a special actor that oversees the orderly shutdown of actors while logging remains active. This is achieved by having the system guardian watch the user guardian and initiate a shutdown sequence when the Terminated message is received.

Message passing

The figure at the end of this section shows the different steps that are involved when a message is passed to an actor. For example, let's assume there is a pizza website and a customer wants to order some pizzas.
For simplicity, let's remove the non-essential details, such as billing and other information, and instead focus on just the ordering of a pizza. If the customer is some kind of application (the pizza customer) and the one who receives orders is a chef (the pizza chef), then each request for a pizza can be illustrated as an asynchronous request to the chef. The figure shows how, when a message is passed to an actor, the different components such as the mailbox and the dispatcher do their jobs. Broadly, the following six steps are involved when a message is passed to the actor:

1. A PizzaCustomer creates and uses the ActorSystem. This is the first step before sending a message to an actor.
2. The PizzaCustomer acquires a PizzaChef. In Akka, an actor is created using the actorOf(...) function call. Akka doesn't return the actual actor but instead returns an actor reference, PizzaChef, for safety.
3. The PizzaRequest is sent to this PizzaChef, which passes the message to the Dispatcher.
4. The Dispatcher then enqueues the message into the PizzaChef's actor mailbox.
5. The Dispatcher then puts the mailbox on a thread.
6. Finally, the mailbox dequeues the message and sends it to the PizzaChef's receive method.

Creating an actor system

An actor system is the first thing that should be created before creating actors. The actor system is created using the following API:

val system = ActorSystem("Pizza")

The string "Pizza" is just a name given to the actor system.

Creating an ActorRef

The following snippet shows the creation of an actor inside the previously created actor system:

val pizzaChef: ActorRef = system.actorOf(Props[PizzaChef])

An actor is created using the actorOf(...) function call. The actorOf() call doesn't return the actor itself; instead, it returns a reference to the actor. Once an actor's reference is obtained, clients can send messages using the ActorRef. This is a safe way of communicating between actors, since the state of the actor itself is not manipulated in any way.

Sending a PizzaRequest to the ActorRef

Now that we have an actor system and a reference to the actor, we would like to send requests to the pizzaChef actor reference. We send a message to an actor using !, also called Tell. In the following code snippet, we Tell the message MarinaraRequest to the pizzaChef ActorRef:

pizzaChef ! MarinaraRequest

Tell is also known as fire-and-forget; there is no acknowledgement returned from a Tell. When the message is sent to an actor, the actor's receive method will receive the message and process it further. receive is a partial function and has the following signature:

def receive: PartialFunction[Any, Unit]

The return type of receive is Unit, and therefore this function is side-effecting. The following code puts together what we have discussed:

import akka.actor.{Actor, ActorRef, Props, ActorSystem}

sealed trait PizzaRequest
case object MarinaraRequest extends PizzaRequest
case object MargheritaRequest extends PizzaRequest

class PizzaChef extends Actor {
  def receive = {
    case MarinaraRequest => println("I have a Marinara request!")
    case MargheritaRequest => println("I have a Margherita request!")
  }
}

object PizzaCustomer {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("Pizza")
    val pizzaChef: ActorRef = system.actorOf(Props[PizzaChef])
    pizzaChef ! MarinaraRequest
    pizzaChef ! MargheritaRequest
  }
}

The preceding code shows the receive block that handles two kinds of requests: one for MarinaraRequest and the other for MargheritaRequest.
These two requests are defined as case objects.

Actor message

We saw that when a message needs to be sent, ! (Tell) is used, but we didn't discuss how exactly this message is processed. We will explore how the ideas of the dispatcher and the execution context are used to carry out message passing between actors. In the pizza example, we used two kinds of messages: MarinaraRequest and MargheritaRequest. For simplicity, all that these messages did was print on the console. When the PizzaCustomer sends a PizzaRequest to the PizzaChef ActorRef, the message is sent to the dispatcher. The dispatcher then sends this message to the corresponding actor's mailbox.

Mailbox

Every time a new actor is created, a corresponding mailbox is also created. There are exceptions to this rule, when multiple actors share the same mailbox. PizzaChef will have a mailbox. This mailbox stores the messages that arrive asynchronously, in a FIFO manner. Therefore, when a new message is sent to an actor, Akka guarantees that the messages are enqueued and dequeued in FIFO order. Here is the signature of the mailbox from the Akka source, which can be found at https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala:

private[akka] abstract class Mailbox(val messageQueue: MessageQueue)
  extends ForkJoinTask[Unit]
  with SystemMessageQueue with Runnable

From this signature, we can see that Mailbox takes a MessageQueue as an input. We can also see that it extends Runnable, which suggests that the Mailbox is a thread. We will see why the Mailbox is a thread in a bit.

Dispatcher

Dispatchers dispatch actors to threads. There is no one-to-one mapping between actors and threads. If that were the case, the whole system would crumble under its own weight, and the amount of context switching would be much more than the actual work. Therefore, it is important to understand that creating a number of actors is not equal to creating the same number of threads. The main objective of the dispatcher is to coordinate actors and their messages with the underlying threads. A dispatcher picks the next actor based on the dispatcher policy, along with that actor's message in the queue. These two are then passed on to one of the available threads in the execution context. To illustrate this point, let's see the code snippet:

protected[akka] override
def registerForExecution(mbox: Mailbox, ...): Boolean = {
  ...
  if (mbox.setAsScheduled()) {
    try {
      executorService execute mbox
      true
    }
  }

This code snippet shows us that the dispatcher accepts a Mailbox as a parameter and uses an ExecutorService to execute the mailbox. We saw that the Mailbox is a thread, and the dispatcher executes this Mailbox against the ExecutorService. When the mailbox's run method is triggered, it dequeues a message from the Mailbox and passes it to the actor for processing.
Here is the code snippet of run from Mailbox.scala in the Akka source code (https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala):

override final def run(): Unit = {
  try {
    if (!isClosed) { //Volatile read, needed here
      processAllSystemMessages() //First, deal with any system messages
      processMailbox() //Then deal with messages
    }
  } finally {
    setAsIdle() //Volatile write, needed here
    dispatcher.registerForExecution(this, false, false)
  }
}

Actor Path

The interesting thing about an actor system is that actors are created in a hierarchical manner. All the user-created actors are created under the /user actor. An actor path looks very similar to a UNIX filesystem hierarchy, for example, /home/akka/akka_book. We will see how this is similar to the Akka path in the following code example. Let's take our pizza example and add a few toppings to the pizza, so that whenever a customer sends a MarinaraRequest, they will get extra cheese too:

class PizzaToppings extends Actor {
  def receive = {
    case ExtraCheese => println("Aye! Extra cheese it is")
    case Jalapeno => println("More Jalapenos!")
  }
}

class PizzaSupervisor extends Actor {

  val pizzaToppings =
    context.actorOf(Props[PizzaToppings], "PizzaToppings")

  def receive = {
    case MarinaraRequest   =>
      println("I have a Marinara request with extra cheese!")
      println(pizzaToppings.path)
      pizzaToppings ! ExtraCheese
    case MargheritaRequest =>
      println("I have a Margherita request!")
    case PizzaException    =>
      throw new Exception("Pizza fried!")
  }
}

The PizzaSupervisor class is very similar to our earlier pizza actor example. However, if you observe carefully, there is another actor created within this PizzaSupervisor, called the PizzaToppings actor. This PizzaToppings is created using context.actorOf(...) instead of system.actorOf(...). Therefore, PizzaToppings becomes a child of PizzaSupervisor. The actor path of PizzaSupervisor will look like this:

akka://Pizza/user/PizzaSupervisor

The actor path for PizzaToppings will look like this:

akka://Pizza/user/PizzaSupervisor/PizzaToppings

When the following main program is run, the actor system is created, the PizzaSupervisor actor is created using system.actorOf(...), and the paths of the actor and its corresponding child are printed, as shown previously:

object TestActorPath {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("Pizza")
    val pizza: ActorRef = system.actorOf(Props[PizzaSupervisor],
      "PizzaSupervisor")
    println(pizza.path)
    pizza ! MarinaraRequest
    system.shutdown()
  }
}
Akka follows a philosophy of Let it crash and it is assumed that actors too can crash. But if an actor crashes, several actions can be taken including restarting it. As usual let's look at our pizza baking process. As before, we will have an actor to accept the pizza requests. But, this time we will see the workflow of the pizza baking process! Using this example, we will see how the actor life cycle works: class PizzaLifeCycle extends Actor with ActorLogging {   override def preStart() = {     log.info("Pizza request received!")   }   def receive = {     case MarinaraRequest   => log.info("I have a Marinara request!")     case MargheritaRequest => log.info("I have a Margherita request!")     case PizzaException    => throw new Exception("Pizza fried!")   }  //Old actor instance   override def preRestart(reason: Throwable, message: Option[Any]) = {     log.info("Pizza baking restarted because " + reason.getMessage)     postStop()   }   //New actor instance   override def postRestart(reason: Throwable) = {     log.info("New Pizza process started because earlier " + reason.getMessage)     preStart()   }   override def postStop() = {     log.info("Pizza request finished")   } } The PizzaLifeCycle actor takes pizza requests but with additional states. An actor can go through many different states during its lifetime. Let's send some messages to find out what happens with our PizzaLifeCycle actor and how it behaves:     pizza ! MarinaraRequest         pizza ! PizzaException         pizza ! MargheritaRequest Here is the output for the preceding requests sent:Pizza request received!I have a Marinara request!Pizza fried!java.lang.Exception: Pizza fried!at PizzaLifeCycle$$anonfun$receive$1.applyOrElse(PizzaLifeCycle.scala:12)at akka.actor.Actor$class.aroundReceive(Actor.scala:467)at PizzaLifeCycle.aroundReceive(PizzaLifeCycle.scala:3)at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)Pizza baking restarted because Pizza fried!Pizza request finishedNew Pizza process started because Pizza fried!Pizza request received!I have a Margherita request! When we sent our first MarinaraRequest request, we see the following in the log we received: Pizza request received! I have a Marinara request! Akka called the preStart() method and then entered the receive block. Then, we simulated an exception by sending PizzaException and as expected, we got an exception: Pizza fried!java.lang.Exception: Pizza fried!  at PizzaLifeCycle$$anonfun$receive$1.applyOrElse(PizzaLifeCycle.scala:12)  at akka.actor.Actor$class.aroundReceive(Actor.scala:467)  at PizzaLifeCycle.aroundReceive(PizzaLifeCycle.scala:3)  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)Pizza baking restarted because Pizza fried!Pizza request finished There are some interesting things to note here. Although we got an exception Pizza fried!, we also got two other log messages. The reason for this is quite simple. When we have an exception, Akka called preRestart(). During preRestart(), is called on the old instance of the actor and have a chance to clean-up some of the resources here. But in our example, we just called postStop(). During preRestart(), the old instance prepares to handoff to the new actor instance. Finally, we sent another request called MargheritaRequest. Here, we get these log messages: New Pizza process started because Pizza fried!Pizza request received!I have a Margherita request! We saw that the old instance actor was stopped. Here, the requests are handled by a new actor instance. 
The postRestart()is now called on the new actor instance, which calls preStart() to resume normal operations of our pizza baking process. During the preRestart() and postRestart() methods, we got a reason as to why the old actor died. Summary In this article, you learned about the details of the Akka framework, the actor mailbox, actor systems, how to create an actor system and ActorRef, how to send PizzaRequest to ActorRef, and so on. Resources for Article:   Further resources on this subject: The Design Patterns Out There and Setting Up Your Environment [article] Creating First Akka Application [article] Content-based recommendation [article]