
How-To Tutorials


Transactions in Redis

Packt
07 Jul 2015
9 min read
In this article by Vinoo Das, author of the book Learning Redis, we will see how Redis, as a NoSQL data store, provides a looser sense of transactions. In a traditional RDBMS, a transaction starts with BEGIN and ends with either COMMIT or ROLLBACK. RDBMS servers are multithreaded, so when a thread locks a resource, it cannot be manipulated by another thread until the lock is released. Redis, by default, has MULTI to start a transaction and EXEC to execute the queued commands. In a transaction, the first command is always MULTI, and after that all the commands are stored; when the EXEC command is received, all the stored commands are executed in sequence. Under the hood, once Redis receives the EXEC command, all the commands are executed as a single isolated operation. The following commands can be used in Redis for transactions:

MULTI: This marks the start of a transaction block
EXEC: This executes all the commands queued after MULTI
WATCH: This watches the given keys for conditional execution of a transaction
UNWATCH: This removes the WATCH on the keys of a transaction
DISCARD: This flushes all the previously queued commands of the transaction

The following figure represents how a transaction in Redis works:

Transaction in Redis

Pipeline versus transaction

As we have seen, in a pipeline the commands are grouped and executed, and the responses are queued in a block and sent back together. In a transaction, until the EXEC command is received, all the commands received after MULTI are queued and only then executed. To understand the difference, it is important to take a case with a multithreaded environment and observe the outcome. In the first case, we take two threads firing pipelined commands at Redis. Here, the first thread fires a pipelined command that is going to change the value of a key many times, and the second thread will try to read the value of that key.
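Before moving on to the threaded Java samples, the commands listed above can be tried interactively with redis-cli. The following is a minimal sketch of such a session (not taken from the book; it assumes a local Redis server on the default port, and the key keys-demo is just a placeholder):

$ redis-cli
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SADD keys-demo name1
QUEUED
127.0.0.1:6379> SADD keys-demo name2
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 1
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SADD keys-demo name3
QUEUED
127.0.0.1:6379> DISCARD
OK
127.0.0.1:6379> WATCH keys-demo
OK
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SADD keys-demo name3
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1

Each queued command only returns QUEUED; the real replies arrive together when EXEC runs. DISCARD throws the queued commands away, and if another client had modified keys-demo between WATCH and EXEC, the final EXEC would have returned (nil) instead of executing the queued SADD. The multithreaded Java samples below demonstrate the same queueing behavior from Jedis.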
Following is the class that is going to fire the two threads at Redis:

MultiThreadedPipelineCommandTest.java:

package org.learningRedis.chapter.four.pipelineandtx;

public class MultiThreadedPipelineCommandTest {
  public static void main(String[] args) throws InterruptedException {
    Thread pipelineClient = new Thread(new PipelineCommand());
    Thread singleCommandClient = new Thread(new SingleCommand());
    pipelineClient.start();
    // Give the pipelined writer a small head start before the reader fires.
    Thread.sleep(50);
    singleCommandClient.start();
  }
}

The code for the client that is going to fire the pipelined commands is as follows:

package org.learningRedis.chapter.four.pipelineandtx;

import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();

  @Override
  public void run() {
    long start = System.currentTimeMillis();
    Pipeline commandpipe = jedis.pipelined();
    // Queue 300,000 SADD commands in the pipeline.
    for (int nv = 0; nv < 300000; nv++) {
      commandpipe.sadd("keys-1", "name" + nv);
    }
    // Send the queued commands to the server and wait for all responses.
    commandpipe.sync();
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value of nv1 after pipeline [ " + data.size() + " ]");
    System.out.println("The time taken for executing client(Thread-1) " + (System.currentTimeMillis() - start));
    ConnectionManager.set(jedis);
  }
}

The code for the client that is going to read the value of the key while the pipeline is executed is as follows:

package org.learningRedis.chapter.four.pipelineandtx;

import java.util.Set;

import redis.clients.jedis.Jedis;

public class SingleCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();

  @Override
  public void run() {
    // Read the set while the other thread is still writing to it.
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value of nv1 is [ " + data.size() + " ]");
    ConnectionManager.set(jedis);
  }
}

The result will vary as per the machine configuration, but by changing the thread sleep time and running the program a couple of times, the result will be similar to the one shown as follows:

The return value of nv1 is [ 3508 ]
The return value of nv1 after pipeline [ 300000 ]
The time taken for executing client(Thread-1) 3718

Fire the FLUSHDB command every time you run the test; otherwise, you end up seeing the value of the previous test run, that is, 300,000.

Now we will run the sample in transaction mode, where the command pipeline will be preceded by the MULTI keyword and followed by the EXEC command. This client is similar to the previous sample, where two clients in separate threads fire commands at a single key on Redis.
The following program is a test client that starts two threads: one fires commands in transaction mode, and the second thread will try to read and modify the same resource:

package org.learningRedis.chapter.four.pipelineandtx;

public class MultiThreadedTransactionCommandTest {
  public static void main(String[] args) throws InterruptedException {
    Thread transactionClient = new Thread(new TransactionCommand());
    Thread singleCommandClient = new Thread(new SingleCommand());
    transactionClient.start();
    // Give the transaction a small head start before the reader fires.
    Thread.sleep(30);
    singleCommandClient.start();
  }
}

This program will try to modify the resource and read the resource while the transaction is going on:

package org.learningRedis.chapter.four.pipelineandtx;

import java.util.Set;

import redis.clients.jedis.Jedis;

public class SingleCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();

  @Override
  public void run() {
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value of nv1 is [ " + data.size() + " ]");
    ConnectionManager.set(jedis);
  }
}

This program will start with the MULTI command, try to modify the resource, end with the EXEC command, and later read the value of the resource:

package org.learningRedis.chapter.four.pipelineandtx;

import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;
import chapter.four.pubsub.ConnectionManager;

public class TransactionCommand implements Runnable {
  Jedis jedis = ConnectionManager.get();

  @Override
  public void run() {
    long start = System.currentTimeMillis();
    // MULTI: commands issued on this Transaction object are queued on the server.
    Transaction transactionableCommands = jedis.multi();
    for (int nv = 0; nv < 300000; nv++) {
      transactionableCommands.sadd("keys-1", "name" + nv);
    }
    // EXEC: the queued commands are executed as a single isolated operation.
    transactionableCommands.exec();
    Set<String> data = jedis.smembers("keys-1");
    System.out.println("The return value nv1 after tx [ " + data.size() + " ]");
    System.out.println("The time taken for executing client(Thread-1) " + (System.currentTimeMillis() - start));
    ConnectionManager.set(jedis);
  }
}

The result of the preceding program will vary as per the machine configuration, but by changing the thread sleep time and running the program a couple of times, the result will be similar to the one shown as follows:

The return code is [ 1 ]
The return value of nv1 is [ null ]
The return value nv1 after tx [ 300000 ]
The time taken for executing client(Thread-1) 7078

Fire the FLUSHDB command every time you run the test. The idea is that the program should not pick up a value left over from a previous run of the program. The proof that the single command program is able to write to the key is that we see the following line:

The return code is [ 1 ]

Let's analyze the result. In the case of the pipeline, the single command reads the value while the pipelined command sets a new value to that key, as evident in the following result:

The return value of nv1 is [ 3508 ]

Now compare this with what happened in the case of the transaction, where the single command tried to read the value but was blocked because of the transaction. Hence, the value will be null or 300,000:

The return value of nv1 after tx [ 0 ]

or

The return value of nv1 after tx [ 300000 ]

So the difference in output can be attributed to the fact that in a transaction, if we have started a MULTI command and are still in the process of queueing commands (that is, we haven't given the server the EXEC request yet), then any other client can still come in and make a request, and the response would be sent to that other client.
Once the client gives the EXEC command, all other clients are blocked while all of the queued transaction commands are executed.

Pipeline and transaction

To get a better understanding, let's analyze what happened in the case of the pipeline. When two different connections made requests to Redis for the same resource, we saw a result where client-2 picked up the value while client-1 was still executing:

Pipeline in Redis in a multi connection environment

What this tells us is that the pipelined commands from the first connection are stacked as one command in that connection's execution stack, while the command from the other connection is kept in its own stack specific to that connection. The Redis execution thread time-slices between these two execution stacks, and that is why client-2 was able to print a value while client-1 was still executing.

Now let's analyze what happened in the case of the transaction. Again, the two commands (the transaction commands and the GET command) were kept in their own execution stacks, but when the Redis execution thread gave time to the GET command and it went to read the value, it saw the lock, was not allowed to read the value, and was blocked. The Redis execution thread went back to executing the transaction commands, and again it came back to the GET command, where it was blocked once more. This process kept happening until the transaction commands released the lock on the resource, and then the GET command was able to get the value. If, by any chance, the GET command was able to reach the resource before the transaction lock, it got a null value. Please bear in mind that Redis does not block other clients while queuing transaction commands, but only while executing them.

Transaction in Redis multi connection environment

This exercise gave us an insight into what happens in the case of a pipeline and a transaction.

Summary

In this article, we saw in brief how to use Redis not simply as a datastore but also to pipeline commands, which is much like bulk processing. Apart from that, we covered areas such as transactions, messaging, and scripting. We also saw how to combine messaging and scripting, and create reliable messaging in Redis. This capability of Redis makes it different from some of the other datastore solutions.


Installation and Setup

Packt
07 Jul 2015
15 min read
The Banana Pi is a single-board computer, which enables you to build your own individual and versatile system. In fact, it is a complete computer, including all the required elements such as a processor, memory, network, and other interfaces, which we are going to explore. It provides enough power to run even relatively complex applications suitably. In this article by Ryad El-Dajani, author of the book Banana Pi Cookbook, we are going to get to know the Banana Pi device. The available distributions are mentioned, as well as how to download and install them. We will also examine Android in contrast to our upcoming Linux adventure.

Thus, you are going to transform your little piece of hardware into a functional, running computer with a working operating system. You will master the whole process, from connecting the cables and choosing an operating system to writing the image to an SD card and successfully booting up and shutting down your device for the first time.

Banana Pi Overview

In the following picture, you see a Banana Pi on the left-hand side and a Banana Pro on the right-hand side:

As you can see, there are some small differences that we need to notice. The Banana Pi provides a dedicated composite video output besides the HDMI output. However, with the Banana Pro, you can connect your display via composite video output using a four-pole composite audio/video cable on the jack. In contrast to the Banana Pi, which has 26 pin headers, the Banana Pro provides 40 pins. Also, the pins for the UART port interface are located below the GPIO headers on the Pi, while they are located beside the network interface on the Pro.

The other two important differences are not clearly visible in the previous picture. The operating system for your device comes in the form of image files that need to be written (burned) to an SD card. The Banana Pi uses normal SD cards, while the Banana Pro will only accept Micro SD cards. Moreover, the Banana Pro provides a Wi-Fi interface already on board. Therefore, you are also able to connect the Banana Pro to your wireless network, while the Pi would require an external wireless USB device.

Besides the mentioned differences, the devices are very similar. You will find the following hardware components and interfaces on your device.

On the back side, you will find:

A20 ARM Cortex-A7 dual core central processing unit (CPU)
ARM Mali400 MP2 graphics processing unit (GPU)
1 gigabyte of DDR3 memory (that is shared with the GPU)

On the front side, you will find:

Ethernet network interface adapter
Two USB 2.0 ports
A 5V micro USB power with DC in and a micro USB OTG port
A SATA 2.0 port and SATA power output
Various display outputs [HDMI, LVDS, and composite (integrated into the jack on the Pro)]
A CSI camera input connector
An infrared (IR) receiver
A microphone
Various hardware buttons on board (power key, reset key, and UBoot key)
Various LEDs (red for power status, blue for Ethernet status, and green for user defined)

As you can see, you have a lot of opportunities for letting your device interact with various external components.

Operating systems for the Banana Pi

The Banana Pi is capable of running any operating system that supports the ARM Cortex-A7 architecture. Several operating systems are available precompiled, so you are able to write the operating system to an SD card and boot your system flawlessly.
Currently, the following operating systems are provided officially by LeMaker, the manufacturer of the Banana Pi.

Android

Android is a well-known operating system for mobile phones, but it also runs on various other devices such as smart watches, cars, and, of course, single-board computers such as the Banana Pi. The main advantage of running Android on a single-board computer is its convenience. Anybody who uses an Android-based smartphone will recognize the graphical user interface (GUI) and may have fewer initial hurdles. Also, setting up a media center might be easier to do on Android than on a Linux-based system.

However, there are also a few disadvantages, as you are limited to software that is provided by an Android store such as Google Play. As most apps are optimized for mobile use at the moment, you will not find a lot of usable software for your Banana Pi running Android, except some games and multimedia applications. Moreover, you are required to use special Windows software called PhoenixCard to be able to prepare an Android SD card. In this article, we are going to ignore installing Android. For further information, please see Installing the Android OS image (LeMaker Wiki) at http://wiki.lemaker.org/BananaPro/Pi:SD_card_installation.

Linux

Most Linux users never realize that they are actually using Linux when operating their phones, appliances, routers, and many more products, as most of its magic happens in the background. We are going to dig into this adventure to discover its possibilities when running on our Banana Pi device. The following Linux-based operating systems—so-called distributions—are used by the majority of the Banana Pi user base and are supported officially by the manufacturer:

Lubuntu: This is a lightweight distribution based on the well-known Ubuntu using the LXDE desktop, which is principally a good choice if you are a Windows user.
Raspbian: This is a distribution based on Debian, which was initially produced for the Raspberry Pi (hence the name). As a lot of Raspberry Pi owners run Raspbian on their devices while also experimenting with the Banana Pi, LeMaker ported the original Raspbian distribution to the Banana Pi. Raspbian also comes with an LXDE desktop by default.
Bananian: This too is a Debian-based Linux distribution, optimized exclusively for the Banana Pi and its siblings.

All of the aforementioned distributions are based on the well-known distribution Debian. Besides the huge user base, all Debian-based distributions use the same package manager, Apt (Advanced Packaging Tool), to search for and install new software, and all are similar to use.

There are still more distributions that are officially supported by LeMaker, such as Berryboot, LeMedia, OpenSUSE, Fedora, Gentoo, Scratch, ArchLinux, Open MediaVault, and OpenWrt. All of them have their pros and cons or their specific use cases. If you are an experienced Linux user, you may choose your preferred distribution from the mentioned list, as most of the recipes are similar to, or even equally usable on, most of the Linux-based operating systems.

Moreover, the Banana Pi community publishes various customized Linux distributions for the Banana Pi regularly. The possible advantages of a customized distribution may include enabled and optimized hardware acceleration capabilities, supportive helper scripts, fully equipped desktop environments, and much more.
However, when deciding to use a customized distribution, there is no official support from LeMaker, and you have to contact the publisher in case you encounter bugs or need help. You can also check the customized Arch Linux image that the author has built (http://blog.eldajani.net/banana-pi-arch-linux-customized-distribution/) for the Banana Pi and Banana Pro, including several useful applications.

Downloading an operating system for the Banana Pi

The following two recipes will explain how to set up the SD card with the desired operating system and how to get the Banana Pi up and running for the first time. This recipe is a predecessor to those. Besides the device itself, you will need at least a source of energy, which is usually a USB power supply, and an SD card to boot your Banana Pi. Also, a network cable and connection is highly recommended to be able to interact with your Banana Pi from another computer via a remote shell application. You might also want to actually see something on a display. In that case, you will need to connect your Banana Pi via HDMI, composite, or LVDS to an external screen. It is recommended that you use an HDMI Version 1.4 cable, since lower versions can possibly cause issues. Besides inputting data using a remote shell, you can directly connect a USB keyboard and mouse to your Banana Pi via the USB ports. After completing the required tasks in the upcoming recipes, you will be able to boot your Banana Pi.

Getting ready

The following components are required for this recipe:

Banana Pi
SD card (minimum class 4; class 10 is recommended)
USB power supply (5V 2A recommended)
A computer with an SD card reader/writer (to write the image to the SD card)

Furthermore, you are going to need an Internet connection to download a Linux distribution or Android.

A few optional but highly recommended components are:

Connection to a display (via HDMI or composite)
Network connection via Ethernet
USB keyboard and mouse

You can acquire these items from various retailers. All items shown in the previous two pictures were bought from an online retailer that is known for originally selling books. However, the Banana Pi and the other products can be acquired from a large number of retailers. It is recommended to get a USB power supply with 2000mA (2A) output.

How to do it…

To download an operating system for the Banana Pi, follow these steps:

Download an image of your desired operating system. We are going to download Android and Raspbian from the official LeMaker image files website: http://www.lemaker.org/resources/9-38/image_files.html. The following screenshot shows the LeMaker website where you can download the official images:
If you click on one of the mirrors (such as Google Drive, Dropbox, and so on), you will be redirected to the equivalent file-hosting service. From there, you are able to download the archive file.
Once your archive containing the image is downloaded, you are ready to unpack the downloaded archive, which we will do in the upcoming recipes.

Setting up the SD card on Windows

This recipe will explain how to set up the SD card using a Windows operating system.

How to do it…

In the upcoming steps, we will unpack the archive containing the operating system image for the Banana Pi and write the image to the SD card:

Open the downloaded archive with 7-Zip. The following screenshot shows the 7-Zip application opening a compressed .tgz archive:
Unpack the archive to a directory until you get a file with the file extension .img.
If it is a .tgz or .tar.gz file, you will need to unpack the archive twice.
Create a backup of the contents of the SD card, as everything on the SD card is going to be erased unrecoverably.
Open SD Formatter (https://www.sdcard.org/downloads/formatter_4/) and check the disk letter (E: in the following screenshot).
Choose Option to open the Option Setting window and choose:
FORMAT TYPE: FULL (Erase)
FORMAT SIZE ADJUSTMENT: ON
When everything is configured correctly, check again whether you are using the correct disk and click Format to start the formatting process.

Writing a Linux distribution image to the SD card on Windows

The following steps explain how to write a Linux-based distribution to the SD card on Windows:

Format the SD card using SD Formatter, which we covered in the previous section.
Open the Win32 Disk Imager (http://sourceforge.net/projects/win32diskimager/).
Choose the image file by clicking on the directory button.
Check whether you are going to write to the correct disk and then click on Write.

Once the burning process is done, you are ready to insert the freshly prepared SD card containing your Linux operating system into the Banana Pi and boot it up for the first time.

Booting up and shutting down the Banana Pi

This recipe will explain how to boot up and shut down the Banana Pi. As the Banana Pi is a real computer, these tasks are as important as they are on your desktop computer. The booting process starts the Linux kernel and several important services. Shutting down stops them accordingly and does not power off the Banana Pi until all data is synchronized with the SD card or external components correctly.

How to do it…

We are going to boot up and shut down the Banana Pi.

Booting up

Do the following steps to boot up your Banana Pi:

Attach the Ethernet cable to your local network.
Connect your Banana Pi to a display.
Plug in a USB keyboard and mouse.
Insert the SD card into your device.
Power your Banana Pi by plugging in the USB power cable.

The next screenshot shows the desktop of Raspbian after a successful boot.

Shutting down Linux

To shut down your Linux-based distribution, you either use the shutdown command or do it via the desktop environment (in the case of Raspbian, it is called LXDE). For the latter method, these are the steps:

Click on the LXDE icon in the lower-left corner.
Click on Logout.
Click on Shutdown in the upcoming window.

To shut down your operating system via the shell, type in the following command:

$ sudo shutdown -h now

Connecting via SSH on Windows using PuTTY

The following recipe shows you how to connect to your Banana Pi remotely using an open source application called PuTTY.

Getting ready

For this recipe, you will need the following ingredients:

A booted-up Linux operating system on your Banana Pi connected to your local network
The PuTTY application on your Windows PC, which is also connected to your local area network

How to do it…

To connect to your Banana Pi via SSH on Windows, perform the following:

Run putty.exe. You will see the PuTTY Configuration dialog.
Enter the IP address of the Banana Pi and leave the Port number as 22.
Click on the Open button. A new terminal will appear, attempting a connection to the Banana Pi. When connecting to the Banana Pi for the first time, you will see a PuTTY security alert. The following screenshot shows the PuTTY Security Alert window:
Trust the connection by clicking on Yes.
You will be requested to enter the login credentials. Use the default username bananapi and password bananapi.
When you are done, you should be welcomed by the shell of your Banana Pi. The following screenshot shows the shell of your Banana Pi accessed via SSH using PuTTY on Windows.

To quit your SSH session, execute the command exit or press Ctrl + D.

Searching, installing, and removing software

Once you have a decent operating system on the Banana Pi, sooner or later you are going to require new software. As most software for Linux systems is published as open source, you can obtain the source code and compile it yourself. An alternative is to use a package manager. A lot of software is precompiled and provided as installable packages by so-called repositories. In the case of Debian-based distributions (for example, Raspbian, Bananian, and Lubuntu), the package manager that uses these repositories is called Advanced Packaging Tool (Apt). The two most important tools for our requirements will be apt-get and apt-cache. In this recipe, we will cover searching for, installing, and removing software using the Apt utilities.

Getting ready

The following ingredients are required for this recipe:

A booted Debian-based operating system on your Banana Pi
An Internet connection

How to do it…

We will separate this recipe into searching for, installing, and removing packages.

Searching for packages

In the upcoming example, we will search for a solitaire game:

Connect to your Banana Pi remotely or open a terminal on the desktop.
Type the following command into the shell:

$ apt-cache search solitaire

You will get a list of packages that contain the string solitaire in their package name or description. Each line represents a package and shows the package name and description separated by a dash (-). Now we have obtained a list of solitaire games. The accompanying screenshot shows the output after searching for packages containing the string solitaire using the apt-cache command.

Installing a package

We are going to install a package by using its package name. From the previously received list, we select the package ace-of-penguins:

Type the following command into the shell:

$ sudo apt-get install ace-of-penguins

If asked to type the password for sudo, enter the user's password.
If a package requires additional packages (dependencies), you will be asked to confirm the additional packages. In this case, enter Y.

After downloading and installing, the desired package is installed.

Removing a package

When you want to uninstall (remove) a package, you also use the apt-get command:

Type the following command into a shell:

$ sudo apt-get remove ace-of-penguins

If asked to type the password for sudo, enter the user's password.
You will be asked to confirm the removal. Enter Y.

After this process, the package is removed from your system. You will have uninstalled the package ace-of-penguins.

Summary

In this article, we covered the installation of a Linux operating system on the Banana Pi. Furthermore, we connected to the Banana Pi via the SSH protocol using PuTTY. Moreover, we discussed how to install new software using the Advanced Packaging Tool. This article is a combination of parts from the first two chapters of the Banana Pi Cookbook. In the Banana Pi Cookbook, we dive into more detail and explain the specifics of the Banana Pro, for example, how to connect to the local network via WLAN. If you are using a Linux-based desktop computer, you will also learn how to set up the SD card and connect via SSH to your Banana Pi from your Linux computer.
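As a quick recap of the command-line tasks covered above, the following is a minimal sketch for a Debian-based image. The IP address 192.168.1.50 is only a placeholder for your Banana Pi's actual address, and the apt-get update step is not shown in the article but is commonly run before installing packages:

$ ssh bananapi@192.168.1.50      # same default credentials as with PuTTY
$ sudo apt-get update            # refresh the package lists from the repositories
$ apt-cache search solitaire     # find candidate packages
$ sudo apt-get install ace-of-penguins
$ sudo apt-get remove ace-of-penguins
$ exit                           # end the SSH session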


Elucidating the Game-changing Phenomenon of the Docker-inspired Containerization Paradigm

Packt
07 Jul 2015
20 min read
The containerization paradigm has been around in the IT industry for quite a long time in different forms. However, the streamlined stuffing and packaging of mission-critical software applications inside highly optimized and organized containers, to be efficiently shipped, deployed, and delivered from any environment (virtual machines or bare-metal servers, local or remote, and so on), has got a new lease of life with the surging popularity of the Docker platform. The open source Docker project has resulted in an advanced engine for automating most of the activities associated with Docker image formation and the creation of self-defined and self-sufficient, yet collaborative, application-hosting containers. Further on, the Docker platform has come out with additional modules to simplify container discovery, networking, orchestration, security, registry, and so on. In short, the end-to-end life cycle management of Docker containers is being streamlined, signaling a sharp reduction in the workloads of application developers and system administrators.

In this article, we want to highlight the Docker paradigm and its insightful capabilities for bringing forth newer possibilities and fresh opportunities. In this article by Pethuru Raj, Jeeva S. Chelladhurai, and Vinod Singh, authors of the book Learning Docker, we will cover the reasons behind the Docker platform's popularity.

Docker is an open platform for developers and system administrators of distributed applications. Building and managing distributed applications is beset with a variety of challenges, especially in the context of pervasive cloud environments. Docker brings forth a bevy of automation through the widely leveraged abstraction technique, successfully enabling distributed applications. The prime point is that the much-talked-about simplicity is easily achieved through the Docker platform. With just a one-line command, you can install an application. For example, popular applications and platforms, such as WordPress, MySQL, Nginx, and Memcache, can all be installed and configured with just one command. Software and platform vendors are systematically containerizing their software packages to be readily found, subscribed to, and used by many. Performing changes on applications and putting the updated and upgraded images in the repository is made easier. That's the real and tectonic shift brought in by the Docker-sponsored containerization movement. There are other advantages to containerization as well, and they are explained in the subsequent sections.

Explaining the key drivers for Docker containerization

We have been fiddling with virtualization techniques and tools for quite a long time now in order to establish the much-demanded software portability. The inhibiting dependency between software and hardware needs to be decimated with the leverage of virtualization, a kind of beneficial abstraction, through an additional layer of indirection. The idea is to run any software on any hardware. This is done by creating multiple virtual machines (VMs) out of a single physical server, and each VM has its own operating system (OS). Through this isolation, which is enacted through automated tools and controlled resource sharing, heterogeneous applications are accommodated in a physical machine. With virtualization, IT infrastructures become open, programmable, and remotely monitorable, manageable, and maintainable.
Business workloads can be hosted in appropriately sized virtual machines and delivered to the outside world, ensuring broader and higher utilization. On the other side, for high-performance applications, virtual machines across multiple physical machines can be readily identified and rapidly combined to guarantee any kind of high-performance need.

However, the virtualization paradigm has its own drawbacks. Because of the verbosity and bloatedness (every VM carries its own operating system), VM provisioning typically takes some minutes, the performance goes down due to excessive usage of compute resources, and so on. Furthermore, the much-published portability need is not fully met by virtualization. The hypervisor software from different vendors comes in the way of ensuring application portability. Differences in OS and application distributions, versions, editions, and patches hinder smooth portability. Compute virtualization has flourished, whereas the other closely associated network and storage virtualization concepts are just taking off. Building distributed applications through VM interactions invites and involves some practical difficulties.

Let's move over to containerization. All of these barriers contribute to the unprecedented success of the containerization idea. A container generally contains an application, and all of the application's libraries, binaries, and other dependencies are stuffed together to be presented as a comprehensive, yet compact, entity for the outside world. Containers are exceptionally lightweight, highly portable, easily and quickly provisionable, and so on. Docker containers achieve native system performance. The greatly articulated DevOps goal gets fully fulfilled through application containers. As a best practice, it is recommended that every container host one application or service.

The popular Docker containerization platform has come out with an enabling engine to simplify and accelerate the life cycle management of containers. There are industry-strength and open automated tools made freely available to facilitate the needs of container networking and orchestration. Thereby, producing and sustaining business-critical distributed applications is becoming easy. Business workloads are methodically containerized to be easily taken to cloud environments, and they are exposed for container crafters and composers to bring forth cloud-based software solutions and services. Precisely speaking, containers are turning out to be the most featured, favored, and fine-tuned runtime environment for IT and business services.

Reflecting the containerization journey

The concept of containerization, building software containers that bundle and sandbox software applications, has been around for years at different levels. Consolidating all the constituting and contributing modules together into a single software package is a grand way to achieve faster and error-free provisioning, deployment, and delivery of software applications. Web application development and operations teams have been extensively handling a variety of open source as well as commercial-grade containers (for instance, servlet containers). These are typically deployment and execution containers or engines that suffer from the ignominious issue of portability.
However, due to its unfailing versatility and ingenuity, the idea of containerization has picked up with renewed verve at a different level altogether, and today it is being positioned as the game-changing OS-level virtualization paradigm for achieving the elusive goal of software portability. There are a few OS-level virtualization technologies on different operating environments (for example, OpenVZ and Linux-VServer on Linux, FreeBSD jails, AIX workload partitions, and Solaris containers). Containers have the innate potential to transform the way we run and scale applications systematically. They isolate and encapsulate our applications and platforms from the host system. A container is typically an OS within your host OS, and you can install and run applications on it. For all practical purposes, a container behaves like a virtual machine (VM), and Linux-centric containers in particular are greatly enabled by various kernel-level features (cgroups and namespaces). Because applications are logically isolated, users can run multiple versions of PHP, Python, Ruby, Apache software, and so on, coexisting in a host and cleanly tucked away in their containers.

Applications are methodically containerized using the sandbox aspect (a subtle and smart isolation technique) to eliminate all kinds of constricting dependencies. A typical present-day container contains not only an application but also its dependencies and binaries, making it self-contained enough to run everywhere (laptops, on-premise systems, as well as off-premise systems) without any code tweaking and twisting. Such comprehensive and compact sandboxed applications within containers are being prescribed as the most sought-after way forward to fully achieve the elusive needs of IT agility, portability, extensibility, scalability, maneuverability, and security. In a cloud environment, various workloads encapsulated inside containers can easily be moved across systems—even across cloud centers—and cloned, backed up, and deployed rapidly. Live modernization and migration of applications is possible with containerization-backed application portability. The era of adaptive, instant-on, and mission-critical application containers that elegantly embed software applications inside themselves is set to flourish in the days to come, with a stream of powerful technological advancements.

Describing the Docker Platform

However, containers are hugely complicated and not user-friendly. Following the realization that several complexities were coming in the way of massively producing and fluently using containers, an open source project was initiated, with the goal of deriving a sophisticated and modular platform consisting of an enabling engine for simplifying and streamlining the container life cycle. In other words, the Docker platform was built to automate the crafting, packaging, shipping, deployment, and delivery of any software application embedded in a lightweight, extensible, and self-sufficient container that can run virtually anywhere.

Docker is being considered the most flexible and futuristic containerization technology for realizing highly competent and enterprise-class distributed applications. This is meant to make deft and decisive impacts, as the brewing trend in the IT industry is that, instead of large, monolithic applications distributed on a single physical or virtual server, companies are building smaller, self-defined, sustainable, easily manageable, and discrete applications.
Docker is emerging as an open solution for developers and system administrators to quickly develop, ship, and run any application. It lets you quickly assemble applications from disparate and distributed components, and eliminates any kind of friction that could come up when shipping code. Docker, through a host of tools, simplifies the isolation of individual processes and makes them self-sustaining by running them in lightweight containers. It essentially isolates the process in question from the rest of your system, rather than adding a thick layer of virtualization overhead between the host and the guest. As a result, Docker containers run much faster, and their images can be shared easily for bigger and better application containers. The Docker-inspired container technology takes the concept of declarative resource provisioning a step further to bring in the critical aspects of simplification, industrialization, standardization, and automation.

Enlightening the major components of the Docker platform

The Docker platform architecture comprises several modules, tools, and technologies in order to bring in the originally envisaged benefits of the blossoming journey of containerization. The Docker idea is growing by leaps and bounds, as the open source community is putting in enormous effort and contributing plenty in order to make Docker prominent and dominant in production environments.

Docker containers internally use a standardized execution environment called Libcontainer, which is an interface to various Linux kernel isolation features, such as namespaces and cgroups. This architecture allows multiple containers to be run in complete isolation from one another while optimally sharing the same Linux kernel.

A Docker client doesn't communicate directly with the running containers. Instead, it communicates with the Docker daemon via sockets or through a RESTful API. The daemon communicates directly with the containers running on the host. The Docker client can either be installed local to the daemon or on a different host altogether. The Docker daemon runs as root and is capable of orchestrating all the containers running on the host.

A Docker image is the prime build component of any Docker container. It is a read-only template from which one or more container instances can be launched programmatically. Just as virtual machines are based on images, Docker containers are also based on Docker images. These images are typically tiny compared to VM images, and are conveniently stackable thanks to the properties of AuFS, a prominent Docker filesystem. Of course, there are other filesystems supported by the Docker platform. Docker images can be exchanged with others and versioned like source code in private or public Docker registries.

Registries are used to store images and can be local or remote. When a container gets launched, Docker first searches the local registry for the launched image. If it is not found locally, it then searches the public remote registry (Docker Hub). If the image is there, Docker downloads it to the local registry and uses it to launch the container. This makes it easy to distribute images efficiently and securely. One of Docker's killer features is the ability to find, download, and start container images created by other developers quickly.
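To make this find-download-run workflow concrete, the following is a minimal command-line sketch (not taken from the book; it assumes the Docker client and daemon are installed, and the nginx image and port numbers are only illustrative):

$ docker search nginx                         # query the public registry for matching images
$ docker pull nginx                           # download the image to the local registry
$ docker images                               # list the images stored locally
$ docker run -d -p 8080:80 --name web nginx   # start a container from the image in the background
$ docker ps                                   # show the running container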
In addition to the various base images, which you can use to create your own Docker containers, the public Docker registry features images of ready-to-run software, including databases, middleware solutions, content management systems, development environments, web servers, and so on. While the Docker command-line client searches the public registry by default, it is also possible to maintain private registries. This is a great option for distributing images with proprietary code or components internally in your company. Pushing images to a registry is just as easy as downloading images from one. Docker Inc.'s registry has a web-based interface for searching, reading, commenting on, and recommending (also known as starring) images.

A Docker container is a running instance of a Docker image. You can create Docker images from a running container as well. For example, you can launch a container, install a bunch of software packages using a package manager such as APT or yum, and then commit those changes to a new Docker image. But a more powerful and extensible way of creating Docker images is through a Dockerfile, the widely used mechanism for building Docker images through declarations. The Dockerfile mechanism was introduced to automate the build process. You can put a Dockerfile under version control and have a perfectly repeatable way of creating images (a minimal Dockerfile sketch is included at the end of this article).

Expounding the Containerization-enabling Technologies

Linux has offered a surfeit of ways to containerize applications, from its own Linux containers (LXC) to infrastructure-based technologies. Due to tight integration, a few critical drawbacks are frequently associated with Linux containers. Libcontainer is, therefore, the result of collaborative and concerted attempts by many established vendors to standardize the way applications are packed up, delivered, and run in isolation. Libcontainer provides a standard interface for making sandboxes, or containers, inside an OS. With it, a container can interface in a predictable way with the host OS resources. This enables the application inside it to be controlled as expected.

Libcontainer came into being as an important ingredient of the Docker platform, and it uses kernel-level namespaces to isolate the container from the host. The user namespace separates the container's and the host's user databases. This arrangement ensures that the container's root user does not have root privileges on the host. The process namespace is responsible for displaying and managing processes running only in the container, not the host. And the network namespace provides the container with its own network device and IP address.

Control Groups (cgroups) are another important contributor. While namespaces are responsible for the isolation of the host from the container, control groups implement resource monitoring, accounting, and auditing. While allowing Docker to limit the resources being consumed by a container, such as memory, disk space, and I/O, cgroups also output a lot of useful metrics about these resources. These metrics allow Docker to monitor the resource consumption of the various processes within the containers and make sure that each of them gets only its fair share of the available resources.

Let's move on to the Docker filesystem now. In a traditional Linux boot, the kernel first mounts the root filesystem (rootfs) as read-only, checks its integrity, and then switches the whole rootfs volume to read-write mode.
When Docker mounts rootfs, it starts as read-only, but instead of changing the filesystem to read-write mode, it takes advantage of a union mount to add a read-write filesystem over the read-only filesystem. In fact, there may be multiple read-only filesystems stacked on top of each other. Each of these filesystems is considered a layer.

Docker has been using AuFS (short for advanced union filesystem) as a filesystem for containers. When a process needs to modify a file, AuFS creates a copy of that file, and it is capable of merging multiple layers into a single representation of a filesystem. This process is called copy-on-write. The really cool thing is that AuFS allows Docker to use certain images as the basis for containers. That is, only one copy of the image is required to be stored, resulting in savings of storage and memory, as well as faster deployment of containers. AuFS controls the versions of container images, and each new version is simply a difference of changes from the previous version, effectively keeping image files to a minimum.

Container orchestration is an important ingredient for sustaining the journey of containerization towards the days of smarter containers. Docker has added a few commands to facilitate the safe and secure networking of local as well as remote containers. There are orchestration tools and tricks emerging and evolving in order to establish the much-needed dynamic and seamless linkage between containers, and thus produce composite containers that are more aligned and tuned for specific IT as well as business purposes. Nested containers are bound to flourish in the days to come, with continued focus on composition technologies. There are additional composition services being added by the Docker team in order to make container orchestration pervasive.

Exposing the distinct benefits of the Docker platform

These are the direct benefits of using Docker containers:

Enabling Docker containers for micro services: Micro service architecture (MSA) is an architectural concept that typically aims to disentangle a software-intensive solution by decomposing its distinct functionality into a set of discrete services. These services are built around business capabilities and are independently deployable through automated tools. Each micro service can be deployed without interrupting other micro services. With the monolithic era all set to fade away, micro service architecture is slowly, yet steadily, emerging as a championed way to design, build, and sustain large-scale software systems. It not only solves the problem of loose coupling and software modularity, but is a definite boon for continuous integration and deployment, which are the hallmarks of the agile world. As we all know, introducing changes in one part of an application that mandate changes across the application has been a bane to the goal of continuous deployment. MSA needs lightweight mechanisms for small and independently deployable services, scalability, and portability. These requirements can easily be met by smartly using containers. Containers provide an ideal environment for service deployment in terms of speed, isolation management, and life cycle. It is easy to deploy new versions of services inside containers. They are also better suited for micro services than virtual machines because micro services can start up and shut down more quickly. In addition, computing, memory, and other resources can scale independently.
Synchronizing well with cloud environments: With the faster maturity of the Docker platform, a new paradigm of Containers as a Service (CaaS) is emerging and evolving; that is, containers are being readied, hosted, and delivered as a service from the cloud (private, public, or hybrid) over the public Web. All the necessary service enablement procedures are being meticulously enacted on containers to make them ready for the forthcoming service era. In other words, knowledge-filled, service-oriented, cloud-based, composable, and cognitive containers are being proclaimed as one of the principal ingredients for the establishment and sustenance of the vision of a smarter planet. Precisely speaking, applications are containerized and exposed as services to be discovered and used by a variety of consumers for a growing set of use cases.

Eliminating the dependency hell: With Docker containers (the lightweight and nimble cousins of VMs), packaging an application along with all of its dependencies compactly and running it smoothly in development, test, and production environments gets automated. The Docker engine resolves the problem of the dependency hell. Modern applications are often assembled from existing components and rely on other services and applications. For example, your Python application might use PostgreSQL as a data store, Redis for caching, and Apache as a web server. Each of these components comes with its own set of dependencies, which may conflict with those of other components. By precisely packaging each component and its dependencies, Docker solves the dependency hell. The brewing and beneficial trend is to decompose applications into decoupled, minimalist, and specialized containers that are designed to do a single thing really well.

Bridging Development and Operations Gaps: There are configuration tools and other automation tools, such as Puppet, Chef, Salt, and so on, pioneered for the DevOps movement. However, they are more tuned for operations teams. The Docker idea is the first and foremost in bringing development as well as operations team members together meticulously. Developers can work inside the containers and operations engineers can work outside the containers simultaneously. This is because Docker encapsulates everything an application needs to run independently, and the prevailing gaps between development and operations are decimated completely.

Guaranteeing Consistency for the Agile World: We know that the aspects of continuous integration (CI) and continuous delivery (CD) are vital for agile production, as they substantially reduce the number of bugs in any software solution getting released to the market. We also know that traditional CI and CD tools build application slugs by pulling from source code repositories. Although these tools work relatively well for many applications, there can be binary dependencies or OS-level variations that make code run slightly differently in production environments than in development and staging environments. Furthermore, CI was built with the assumption that an application was located in a single code repository. Now, with the arrival of Docker containers, there is a fresh set of CI tools that can be used to start testing multi-container applications that draw from multiple code repositories.
Bringing in collaboration for the best-of-breed containers: Instead of tuning your own service containers for various software packages, Docker encourages the open source community to collaborate and fine-tune containers in Docker Hub, the public Docker repository. Thus, Docker is changing cloud development practices by allowing anyone to leverage proven and potential best practices in the form of stitching together other people's containers. It is like having a Lego set of cloud components to stick together.

Accelerating resource provisioning: In the world of the cloud, faster provisioning of IT resources and business applications is very important. Typically, VM provisioning takes 5 or more minutes, whereas Docker containers, because of their lightweight nature, can be provisioned in a few seconds through Docker tools. Similarly, live migration of containers inside as well as across cloud centers is being facilitated. A bare-metal (BM) server can accommodate hundreds of Docker containers that are easily deployed on the cloud. With a series of advancements, containers are expected to be nested. A software component that makes up a tier in one container might be called by another in a remote location. An update of the same tier might be passed on to any other container that uses the same component to ensure utmost consistency.

Envisaging application-aware containers: Docker promotes an application-centric container model. That is to say, containers should run individual applications or services rather than a slew of them. Containers are fast and consume a lesser amount of resources to create and run applications. Following the single-responsibility principle and running one main process per container results in loose coupling of the components of your system.

Empowering IT agility, autonomy, and affordability: Docker containers are bringing down IT costs significantly, simplifying IT operations and making OS patching easier, as there are fewer OSes to manage. This ensures greater application mobility, easier application patching, and better workload visibility and controllability. This in turn guarantees higher resource utilization and faster responses to newer workload requirements, delivering newer kinds of sophisticated services quickly.

Summary

Docker harnesses some powerful kernel-level technologies smartly and provides a simple tool set and a unified API for managing namespaces, cgroups, and a copy-on-write filesystem. The team at Docker Inc. has created a compact tool that is greater than the sum of its parts. The result is a potential game changer for DevOps, system administrators, and developers. Docker provides tools to make creating and working with containers as easy as possible. Containers sandbox processes from each other. Docker does a nice job of harnessing the benefits of containerization for a focused purpose, namely the lightweight packaging and deployment of applications. Precisely speaking, virtualization and other associated technologies coexist with containerization to ensure software-defined, self-servicing, and composite clouds.
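As referenced in the Dockerfile discussion above, here is a minimal, hypothetical sketch of the declarative build mechanism; the base image, the packaged software, and the tag name are illustrative choices, not taken from the book. The Dockerfile might look like this:

FROM ubuntu:14.04
# Each instruction below creates a new read-only image layer.
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
# Run the web server in the foreground so the container stays alive.
CMD ["nginx", "-g", "daemon off;"]

Building and running it from the directory containing the Dockerfile:

$ docker build -t my-nginx .       # replays the instructions layer by layer
$ docker run -d -p 8080:80 my-nginx

Because every instruction yields a cached layer, rebuilding after a change only re-executes the instructions from the modified line onward, which is what makes the Dockerfile approach repeatable and fast.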


Methodology for Modeling Business Processes in SOA

Packt
07 Jul 2015
27 min read
This article by Matjaz B. Juric, Sven Bernhardt, Hajo Normann, Danilo Schmiedel, Guido Schmutz, Mark Simpson, and Torsten Winterberg, authors of the book Design Principles for Process-driven Architectures Using Oracle BPM and SOA Suite 12c, describes the strategies and a methodology that can help us realize the benefits of BPM as a successful enterprise modernization strategy. In this article, we will do the following:

Provide the reader with a set of actions, in the course of a complete methodology, that they can incorporate in order to create the desired attractiveness towards broader application of BPM throughout the enterprise
Describe organizational and cultural barriers to applying enterprise BPM and discuss ways to overcome them

The postmature birth of enterprise BPM

When enterprise architects discuss the future of the software landscape of their organization, they map the functional capabilities, such as customer relationship management and order management, to existing or new applications—some packaged and some custom. Then, they connect these applications by means of middleware. They typically use the notion of an integration middleware, such as an enterprise service bus (ESB), to depict the technical integration between these applications, exposing functionality as services, APIs, or, more trendily, "micro services". These services are used by modern, increasingly mobile frontends and B2B partners.

For several years now, it has been hard to find a PowerPoint slide that discusses future enterprise middleware without the notion of a BPM layer that sits on top of the frontend and the SOA service layer. So, in most organizations, we find a slide deck that contains this visual box named BPM, signifying the aim to improve process excellence by automating business processes along the lines of the management discipline known as business process management (BPM). Over the years, we have seen that the frontend layer often does materialize as a portal or through a modern mobile application development platform. The envisioned SOA services can be found living on an ESB or API gateway. Yet, the component called BPM and the related practice of modeling executable processes has failed to finally incubate until now—at least in most organizations.

BPM still waits to morph from an abstract item on a PowerPoint slide and in a shelved analyst report into automated business processes that are actually deployed to a business-critical production machine. When we look closer—yes—there is a license for a BPM tool, and yes, some processes have even been automated, but those tend to be found in the less visible corners of the enterprise, seldom being of concern to higher management and the hot project teams that work day and night on the next visible release. In short, BPM remains the hobby of some enterprise architects and the expensive consultants they pay.

Will BPM ever take the often proposed lead role in the middleware architect's toolbox? Will it lead to a better, more efficient, and more productive organization? To be very honest, at the moment the answer to that question is rather no than yes. There is a good chance that BPM remains just one more of the silver bullets that fill some books and motivate some great presentations at conferences, yet do not have an impact on the average organization. But there is still hope for enterprise BPM as opposed to a departmental approach to process optimization.
There is a good chance that BPM, next to other enabling technologies, will indeed be the driver for successful enterprise modernization. Large organizations all over the globe are reengaging with smaller and larger system integrators to tackle the process challenge. Maybe BPM as a practice needs more time than other items found on Gartner hype curves to mature before it is widely applied. This necessary level of higher maturity encompasses both the tools and the people using them. Ultimately, the question of large-scale BPM adoption will be answered individually in each organization. Only when a substantial set of enterprises experience tangible benefits from BPM will they talk about it, thus growing a momentum that leads to the success of enterprise BPM as a whole. This positive marketing, based on actual project and program success, will be the primary way to establish a force of attraction towards BPM that raises curiosity and interest in the minds of the bulk of organizations that are still rather hesitant or ignorant about using BPM.

Oracle BPM Suite 12c – new business architecture features

New tools in Oracle BPM Suite 12c put BPM in the mold of business architecture (BA). This new version contains new BA model types and features that help companies move out of the IT-based, rather technical view of business process automation and into strategic process improvement. Thus, these new model types help us embark on the journey towards enterprise BPM. This is an interesting step in the evolution of enterprise middleware—Oracle is the first vendor of a business process automation engine that has moved up from concrete automated processes to strategic views on end-to-end processes, thus crossing the automation/strategic barrier.

BPM Suite 12c introduces cross-departmental business process views. Thereby, it allows us to approach an enterprise modeling exercise through top-down modeling. At the top sits an end-to-end value chain model that chains separate business processes together into one coherent end-to-end view. The value chain describes a bird's-eye view of the steps needed to achieve the most critical business goals of an organization. These steps comprise business processes, of which some are automated in a BPMN engine and others actually run in packaged applications or are not automated at all. Also, BPM Suite 12c allows the capturing of the respective goals and provides the tools to measure them as KPIs and service-level agreements.

In order to understand the path towards these yet untackled areas, it is important to understand where we stand today with BPM and what this new field of end-to-end process transparency is all about. Yet, before we get there, we will leave enterprise IT for a moment and take a look at the nature of a game (any game) in order to prepare for a deeper understanding of the mechanisms and cultural impact that underpin the move from a departmental to an enterprise approach to BPM.

Football games – same basic rules, different methodology

Any game, be it a physical sport, such as football, or a mental sport, such as chess, is defined through a set of common rules. How the game is played will look very different depending on the level of the league it is played in. A Champions League football game is so much more refined than your local team playing at the nearby stadium, not to mention the neighboring kids kicking the ball on a dirt ground.
These kids will show creativity and pleasure in the game, yet the level of sophistication is a completely different ball game in the big stadium. You can marvel at the effort made to ensure that everybody plays their role in a well-trained symbiosis with their peers, all sharing a common set of collaboration rules and patterns. The team has spent many hours training the right combinations, establishing a deep trust. There is no time to discuss the meaning of an order shouted out by the trainer. They have worked on this common understanding of how to do things in various situations. They have established one language that they share. As an observer of a great match, you appreciate an elaborate art, not of one artist but of a coherent team.

No one would dispute this continuum of refinement and rising sophistication in rough physical sports. It is puzzling, then, to what extent we, as players in the games of enterprise IT, often tend to neglect the needs and forces that are implied by such a continuum of rising sophistication. The next sections will take a closer look at these BPM playgrounds and motivate you to take the necessary steps toward team excellence when moving from a small, departmental-level BPM/SOA project to a program approach that is centered on a BPM and SOA paradigm.

Which BPM game do we play?

Game Silo BPM is the workflow or business process automation in organizational departments. It resembles the kids playing soccer on the neighborhood playground. After a few years of experience with automated processes, the maturity rises to resemble your local football team—yes, they play in a stadium, and it is often not elegant. Game Silo BPM is a tactical game in which work gets done while management deals with reaching departmental goals. New feature requests lead to changed or new applications, and the people involved have known each other very well over many years under established leadership. Workflows are automated to optimize performance.

Game Enterprise BPM strives for process excellence at Champions League level. It is a strategic game in which higher management and business departments outline the future capability maps and cross-departmental business process models. In this game, players tend to regard the overall organization as a set of more or less efficient functional capabilities. One functional capability, or a group of them, makes up a department.

Game Silo BPM – departmental workflows

Today, most organizations use BPM-based process automation with tools such as Oracle BPM Suite to improve the efficiency of the processes within departments. These processes often support the functional capability that the particular department owns by increasing its efficiency. Increasing efficiency means doing more with less. It is about automating manual steps, removing bottlenecks, and making sure that the resources are optimally allocated across your process. The driving force is typically the team manager, who is measured by the productivity of his team. The key factor to reach this goal is the automation of redundant or unnecessary human/IT interactions. Through process automation, we gain insights into the performance of the process. This insight can be called process transparency, and it allows for the constant identification of areas of improvement. A typical example of the processes found in Game Silo BPM is an approval process that can be expressed as a clear path among process participants.
Often, these processes work on documents, thus having a tight relationship with content management. We also find complex backend integration processes that involve human interaction only in the case of an exception. The following figure depicts this siloed approach to process automation. Recognizing a given business unit as a silo indicates its closed and self-contained world. A word of caution is needed: the term "silo" is often used with a negative connotation. It is important, though, to recognize that a closed and coherent team with no dependencies on other teams is a preferred working environment for many employees and managers. In more general terms, it reflects an archaic type of organization that we lean towards. It allows everybody in the team to indulge in what can be called a siege mentality; we as humans gravitate towards the coziness of a well-defined and manageable island of order.

Figure 1: Workflow automation in departmental silos

As discussed in the introduction, there is a chance that BPM never leaves this departmental context in many organizations. If you are curious to find out where you stand today, it is easy to determine whether a business process is stuck in the silo or whether it's part of an enterprise BPM strategy. Just ask whether the process is aligned with corporate goals or whether it's solely associated with departmental goals and KPIs. Another sign is the lack of a methodology and of a link to cross-departmental governance that defines the mode of operation and the means to create consistent models that speak one common language. To impose the enterprise architecture tools of Game Enterprise BPM on Game Silo BPM would be an overhead; you don't need them if your organization does not strive for cross-departmental process transparency.

Oracle BPM Suite 11g is made for playing Game Silo BPM

Oracle BPM Suite 11g provides all the tools and functionalities necessary to automate a departmental workflow. It is not, however, sufficient to model business processes that span departmental barriers and require top-down process hierarchies. The key components are the following:

The BPMN process modeling tool and the respective execution engine
The means to organize logical roles and depict them as swimlanes in BPMN process models
The human-task component that involves human input in decision making
The business rules component for supporting automated decision making
The technical means to call backend SOA services
The wizards to create data mappings
Process performance measurement by means of business activity monitoring (BAM)

Oracle BPM Suite models processes in BPMN

Workflows are modeled at a pretty fine level of granularity using the standard BPMN 2.0 version (and later versions). BPMN is both business ready and technically detailed enough to allow modeled processes to be executed in a process engine. Oracle fittingly expresses the mechanism as what you see is what you execute. Those workflows typically orchestrate human interaction through human tasks and functionality through SOA services. In the next sections of this article, we will move our head out of the cocoon of the silo, looking higher and higher along the hierarchy of the enterprise until we reach the world of workshops and polished PowerPoint slides in which the strategy of the organization is defined.

Game Enterprise BPM

Enterprise BPM is a management discipline with a methodology typically found in business architecture (BA) and the respective enterprise architecture (EA) teams.
Representatives of higher management, business departments, and EA teams meet management consultants in order to understand the current situation and the desired state of the overall organization. Therefore, enterprise architects define process maps—a high-level view of the organization's business processes, both for the AS-IS and various TO-BE states. In the next step, they define the desired future state and depict strategies and means to reach it. Business processes that generate value for the organization typically span several departments. The steps in these end-to-end processes can be mapped to the functional building blocks—the departments.

Figure 2: Cross-departmental business process needs an owner

The goal of Game Enterprise BPM is to manage enterprise business processes, making sure they realize the corporate strategy and meet the respective goals, which are ideally measured by key performance indicators, such as customer satisfaction, reduction of failures, and cost reduction. It is a good practice to reflect on the cross-departmental efforts through the notion of a common or shared language that is spoken across departmental boundaries. A governance board is a great means to reach this capability in the overall organization.

Still wide open – the business/IT divide

Organizational change in the management structure is a prerequisite for the success of Game Enterprise BPM, but it is not a sufficient condition. Several business process management books describe the main challenge in enterprise BPM as the still-wide-open business/IT divide. There is still a gap between how processes are understood and owned in Game Enterprise BPM and how automated processes are modeled and perceived in the departmental workflows of Game Silo BPM. Principles, goals, standards, and best practices defined in Game Enterprise BPM do not trickle down into everyday work in Game Silo BPM. One of the biggest reasons for this divide is that there is no direct link between the models and tools used by top management to depict the business strategy, IT strategy, business architecture, high-level value chains, and process models, and the models and artifacts used in enterprise architecture and, from there, even software architecture.

Figure 3: Gap between business architecture and IT enterprise architecture in strategic BPM

So, traditionally, organizations use specific business architecture or enterprise architecture tools in order to depict AS-IS and TO-BE high-level reflections of value chains, hierarchical business processes, and capability maps alongside application heat maps. These models tend to hang in the air; they are not deeply grounded in real life. Business process models expressed in event process chains (EPCs), Visio process models, and other modeling types often don't really reflect the flows and collaboration structures of the actual procedures of the office. This leads to the perception of business architecture departments as ivory towers with no, or weak, links to the realities of the organization. On the other side, the tools and models of IT enterprise architecture and software architecture speak a language not understood by the members of business departments. Unified Modeling Language (UML) is the most prominent set of model types that remained stuck in IT.
However, while the UML class and activity diagrams promised to be valid means to depict the nouns and stories of business processes, their potential to provide a shared language and approach for a joint vocabulary and views on requirements rarely materialized. Until now, there has been no workflow tool vendor approaching these strategic enterprise-level models and bridging the business/IT gap.

Oracle BPM Suite 12c tackles Game Enterprise BPM

With BPM Suite 12c, Oracle is starting to engage in this domain. The approach Oracle took can be summarized as applying the Pareto principle: 80 percent of the features needed for strategic enterprise modeling can be found in just 20 percent of the functionality of those high-end, enterprise-level modeling tools. So, Oracle implemented these 20 percent of business architecture models:

Enterprise maps to define the organizational and application context
Value chains to establish a root for process hierarchies
The strategy model to depict focus areas and assign technical capabilities and optimization strategies

The following figure represents the new features in Oracle BPM Suite 12c in the context of the Game Enterprise BPM methodology:

Figure 4: Oracle BPM Suite 12c new features in the context of the Game Enterprise BPM methodology

The preceding figure is based on the BPTrends Process Change Methodology introduced in the Business Process Change book by Paul Harmon.

These new features promise to create a link from higher-level process models and other strategic depictions into executable processes. This link could not be established until now since there were too many interface mismatches between enterprise tooling and workflow automation engines. The new model types in Oracle BPM Suite 12c are, as discussed, a subset of all the features and model types in the EA tools. For this subset, these interface mismatches have been made obsolete: there is a clear trace, with no tool disruption, from the enterprise map to the value chain and associated KPIs down to the BPMN processes that are automated. These features have been present in EA and BPM tools before; what is new is this undisrupted trace.

Figure 5: Undisrupted trace from the business architecture to executable processes

The preceding figure is based on the BPTrends Process Change Methodology introduced in the Business Process Change book by Paul Harmon.

This holistic set of tools, which brings together aspects from modeling time, design time, and runtime, makes it more likely to succeed in finally bridging the business/IT gap.

Figure 6: Tighter links from business and strategy to executable software and processes

Today, we do not live in a perfect world. To understand to what extent this gap is closed, it helps to look at how people work. If there is a tool to depict enterprise strategy, end-to-end business processes, business capabilities, and KPIs that is used in daily work and that has the means to navigate to lower-level models, then we have come quite far. The features of Oracle BPM Suite 12c, which are discussed below, are a step in this direction but are not the end of the journey.

Using business architect features

The Process Composer is a business user-friendly web application. From the login page, it guides the user in the spirit of a business architecture methodology. All the models that we create are part of a "space". On its entry page, the main structure is divided into business architecture models in BA projects, which are described in this article.
It is a good practice to start with an enterprise map that depicts the business capabilities of our organization. The rationale is that the functions the organization is made of tend to be more stable and less a matter of interpretation and perspective than any business process view. Enterprise maps can be used to put value chains and process models into the organizational and application landscape context. Oracle suggests organizing the business capabilities into three segments. Thus, the default enterprise map model is prepopulated with three default lanes: core, management, and support. In many organizations, this structure is feasible as it is; it is up to you to either use these lanes or create your own. Then, within each lane, we can define the key business capabilities that make up the core business processes.

Figure 7: Enterprise map of RYLC depicting key business capabilities

Properties of BA models

Each element (goal, objective, strategy) within the model can be enriched with business properties, such as actual cost, actual time, proposed cost, and proposed time. These properties are part of the impact analysis report that can be generated to analyze the BA project.

Figure 8: Use properties to specify SLAs and other BA characteristics

Depicting organizational units

Within RYLC as an organization, we now depict its departments as organizational units. We can attach goals to each of the units, which depict its function and the role it plays in the concert of the overall ecosystem. This association of a unit with a goal is expressed via links to goals defined in the strategy model. These goals will be used for the impact analysis reports that show the effect of organizational or procedural changes. It is possible to create several organization units, as shown in the following screenshot:

Figure 9: Define a set of organization units

Value chains

A value chain model forms the root of a process hierarchy. A value chain consists of one direct line of steps, with no gateways and no exceptions. The modeler in Oracle BPM Suite allows each step in the chain to depict associated business goals and key performance indicators (KPIs) that can be used to measure the organization's performance rather than the details of departmental performance. The value chain model is a very simple one, depicting the flow of the most basic business process steps. Each step is a business process in its own right. At the level of the chain, no decision making is expressed. This resembles a business process expressed in BPMN that has only direct lines and no gateways.

Figure 10: Creation of a new value chain model called "Rental Request-to-Delivery"

The value chain model type allows the structuring of your business processes into a hierarchy, with a value chain forming the topmost level.

Strategy models

Strategy models can be used to further motivate the KPIs that are depicted at the value chain level.

Figure 11: Building the strategy model for the strategy "Become a market leader"

These visual maps leverage existing process documentation and match it with current business trends and existing reference models. These models depict processes and associated organizational aspects, encompassing cross-departmental views on higher levels and concrete processes down to level 3. From there, they prioritize distinct processes and decide on one of several modernization strategies—process automation being just one of several!
So, in the proposed methodology shown in the following diagram, in the Define strategy and performance measures (KPIs) phase, the team selects, for each business process or even subprocess, one or several means to improve process efficiency or transparency. Typically, these means are defined as "supporting capabilities". These are techniques, tools, or approaches that help to modernize a business process. A few typical supporting capabilities are mentioned here:

Explicit process automation
Implicit process handling and automation inside a packaged application, such as SAP ERP or Oracle Siebel
Refactoring of an existing COTS application
Business process outsourcing
Business process retirement

The way toward establishing these supporting capabilities is defined through a list of potential modernization strategies. Several of the modernization strategies relate to the existing applications that support a business process, such as refactoring, replacement, retirement, or re-interfacing of the respective supporting applications. The application modernization strategy that we are most interested in in this article is establishing an explicit automated process. It is a best practice to create a high-level process map and define, for each of the processes, whether to leave it as it is or to apply one of several modernization strategies. When several modernization strategies are found for one process, we can drill down into the process through a hierarchy and stop at the level at which there is a disjunct modernization strategy.

Figure 12: Business process automation as just one of several process optimization strategies

The preceding figure is based on the BPTrends Process Change Methodology introduced in the Business Process Change book by Paul Harmon.

Again, just one of these technical capabilities in strategic BPM is the topic of this article: process automation. It is not feasible to suggest automating all the business processes of any organization.

Key performance indicators

Within a BA project (strategy and value chain models), there are three different types of KPIs that can be defined:

Manual KPI: This allows us to enter a known value
Rollup KPI: This evaluates an aggregate of the child KPIs
External KPI: This provides a way to include KPI data from applications other than BPM Suite, such as SAP, E-Business Suite, PeopleSoft, and so on

Additionally, KPIs can be defined at the BPMN process level, which is not covered in this article.

KPIs at the value chain step level

The following are the steps to configure KPIs at the value chain step level:

Open the value chain model Rental Request-to-Delivery.
Right-click on the Vehicle Reservation & Allocation chain step, and select KPI.
Click on the + (plus) sign to create a manual KPI, as illustrated in the next screenshot.

The following image shows the configuration of the KPIs:

Figure 13: Configuring a KPI

Why we need a new methodology for Game Enterprise BPM

Now, Game Enterprise BPM needs to be played everywhere. This implies that Game Silo BPM needs to diminish, meaning it needs to be replaced, gradually, through managed evolution, league by league, aiming for excellence at Champions League level. We can't play Game Enterprise BPM with the same culture of ad hoc, joyful creativity that we find in Game Silo BPM. We can't just approach our colleague (let's call him Ingo Maier), who we know has drawn the process model for a process we are interested in. We can't just walk over to his desk and ask him about the meaning of an unclear section in the process.
That is because in Game Enterprise BPM, Ingo Maier, as a person whom we know as part of our silo team, does not exist anymore. We deal with process models, with SOA services, and with a language defined somewhere else, in another department. This is what makes it so hard to move up in the BPM leagues. Hiding behind the buzzword "agile" does not help. In order to raise BPM maturity when we move from Game Silo BPM to Game Enterprise BPM, the organization needs to establish a set of standards, guidelines, tools, and modes of operation that allow it to play and succeed at Champions League level. Additionally, we have to define the modes of operation and the broad steps that lead to the desired state. This formalization of collaboration in teams should be described, agreed on, and lived as our BPM methodology. The methodology strives for a team in which each player contributes to one coherent game along well-defined phases.

Political change through Game Enterprise BPM

The political problem with a cross-departmental process view becomes apparent if we look at the way organizations distribute political power in Game Silo BPM. The heads of departments form the most potent management layer, while the end-to-end business process has no potent stakeholder. Thus, it is critical for any Game Enterprise BPM to establish a good balance of de facto power, with process owners acting as stakeholders for a cross-departmental process view. This fills the void of end-to-end process owners left open in Game Silo BPM. With Game Enterprise BPM, the focus shifts from departmental improvements to the KPIs and improvement of the core business processes.

Pair modeling the value chains and business processes

Value chains and process models down to a still very high-level layer, such as level 3, can be modeled by process experts without involving technically skilled people. They should omit all technical details. To provide the foundation for automated processes, we need to add more detail about domain knowledge and some technical details. Therefore, these domain process experts meet with BPM tool experts to jointly define the next level of detail in BPMN. In analogy to the practice of pair development in agile methodologies, you could call this kind of collaboration pair modeling. Ideally, the process expert(s) and the tool expert look at the same screen and discuss how to improve the flow of the process model, while the visual representation evolves to cover variances, exceptions, and a better understanding of the involved business objects. For many organizations that are used to a waterfall process, this is a fundamentally new way of requirement gathering that might be a challenge for some. The practice is an analogy of the customer-on-site practice in agile methodologies. This new way of close collaboration for process modeling is crucial for the success of BPM projects, since it allows us to establish a deep and shared understanding in a very pragmatic and productive way.

Figure 14: Roles and successful modes of collaboration

When the process is modeled in sufficient detail to clearly depict an algorithmic definition of its flow and all its variances, the model can be handed over to BPM developers. They add all the technical bells and whistles, such as data mapping, decision rules, service calls, and exception handling. Portal developers will work on their implementation of the use cases. SOA developers will use Oracle SOA Suite to integrate with backend systems, thereby implementing SOA services.
The discussed notion of a handover from higher-level business process models to development teams can also be used to depict the line at which it might make sense to outsource parts of the overall development.

Summary

In this article, we saw how BPM, as an approach to model, automate, and optimize business processes, is typically applied at a departmental level. We saw how BPM Suite 12c introduces new features that allow us to cross the bridge towards top-down, cross-departmental, enterprise-level BPM. We depicted the key characteristics of the enterprise BPM methodology, which aligns corporate or strategic activities with actual process automation projects. We learned the importance of modeling standards and guidelines, which should be used to gain business process insight and understanding on broad levels throughout the enterprise. The goal is to establish a shared language to talk about the capabilities and processes of the overall organization and the services it provides to its customers. We also understood the role of data in SOA in supporting business processes, with a critical success factor being the definition of the business data model that, along with services, will form the connection between the process layer, the user interface layer, and the services layer. Finally, we understood how important it is to separate application logic from service logic and process logic to ensure that the benefits of a process-driven architecture are realized.

Resources for Article:

Further resources on this subject:

Introduction to Oracle BPM [article]
Oracle B2B Overview [article]
Event-driven BPEL Process [article]


Learning Embedded Linux Using Yocto: Virtualization

Packt
07 Jul 2015
37 min read
In this article by Alexandru Vaduva, author of the book Learning Embedded Linux Using the Yocto Project, you will be presented with information about various concepts that appeared in the Linux virtualization article. As some of you might know, this subject is quite vast, and selecting only a few components to explain is also a challenge. I hope my selection will please most of you interested in this area. The information available in this article might not fit everyone's needs. For this purpose, I have attached multiple links to more detailed descriptions and documentation. As always, I encourage you to start reading and find out more, if necessary. I am aware that I cannot put all the necessary information in only a few words.

In any Linux environment today, Linux virtualization is not a new thing. It has been available for more than ten years and has advanced in a really quick and interesting manner. The question now does not revolve around whether virtualization is a solution for me, but more around which virtualization solutions to deploy and what to virtualize.

(For more resources related to this topic, see here.)

Linux virtualization

The first benefit everyone sees when looking at virtualization is the increase in server utilization and the decrease in energy costs. Using virtualization, the workloads available on a server are maximized, which is very different from scenarios where the hardware uses only a fraction of its computing power. It can reduce the complexity of interaction with various environments, and it also offers an easier-to-use management system. Today, working with a large number of virtual machines is not much more complicated than interacting with a few of them, because of the scalability most tools offer. Also, the time to deployment has really decreased. In a matter of minutes, you can deconfigure and deploy an operating system template or create a virtual environment for a virtual appliance deployment.

One other benefit virtualization brings is flexibility. When a workload is just too big for the allocated resources, it can be easily duplicated or moved to another environment that suits its needs better, on the same hardware or on a more potent server. For a cloud-based solution to this problem, the sky is the limit here. The limit may be imposed by the cloud type, on the basis of whether there are tools available for the host operating system.

Over time, Linux has been able to provide a number of great choices for every need and organization. Whether your task involves server consolidation in an enterprise data centre or improving a small nonprofit's infrastructure, Linux should have a virtualization platform for your needs. You simply need to figure out where and which project you should choose.

Virtualization is extensive, mainly because it contains a broad range of technologies, and also because large portions of the terminology are not well defined. In this article, you will be presented only with components related to the Yocto Project and to a new initiative that I personally am interested in. This initiative tries to make Network Function Virtualization (NFV) and Software Defined Networks (SDN) a reality and is called Open Platform for NFV (OPNFV). It will be explained here briefly.

SDN and NFV

I have decided to start with this topic because I believe it is really important that all the research done in this area is starting to get traction with a number of open source initiatives from all sorts of areas and industries. Those two concepts are not new.
They have been around for 20 years since they were first described, but the last few years have made it possible for them to resurface as real and very plausible implementations. The focus of this section will be on NFV, since it has received the most attention and also contains various implementation proposals.

NFV

NFV is a network architecture concept used to virtualize entire categories of network node functions into blocks that can be interconnected to create communication services. It is different from known virtualization techniques. It uses Virtual Network Functions (VNF) that can be contained in one or more virtual machines, which execute different processes and software components available on servers, switches, or even a cloud infrastructure. A couple of examples include virtualized load balancers, intrusion detection devices, firewalls, and so on.

Product development cycles in the telecommunication industry were very rigorous and long, because adherence to the various standards and protocols, and the associated quality requirements, took a long time. This made it possible for fast-moving organizations to become competitors and forced a change in approach. In 2013, an industry specification group published a white paper on software-defined networks and OpenFlow. The group was part of the European Telecommunications Standards Institute (ETSI) and was called Network Functions Virtualisation. After this white paper was published, more in-depth research papers were published, explaining things ranging from terminology definitions to various use cases, with references to vendors that could consider using NFV implementations.

ETSI NFV

The ETSI NFV workgroup has proven useful for the telecommunication industry in creating more agile development cycles and in being able to respond in time to any demands from dynamic and fast-changing environments. SDN and NFV are two complementary concepts that are key enabling technologies in this regard and also contain the main ingredients of the technology being developed by both the telecom and IT industries.

The NFV framework consists of six components:

NFV Infrastructure (NFVI): It is required to offer support to a variety of use cases and applications. It comprises the totality of software and hardware components that create the environment in which VNFs are deployed. It is a multitenant infrastructure that is responsible for leveraging multiple standard virtualization technologies and use cases at the same time. It is described in the following NFV Industry Specification Groups (NFV ISG) documents: NFV Infrastructure Overview, NFV Compute, NFV Hypervisor Domain, and NFV Infrastructure Network Domain. The following image presents a visual graph of various use cases and fields of application for the NFV infrastructure.
NFV Management and Orchestration (MANO): It is the component responsible for the decoupling of the compute, networking, and storage components from the software implementation with the help of a virtualization layer. It requires the management of new elements and the orchestration of new dependencies between them, which require certain standards of interoperability and a certain mapping.
NFV Software Architecture: It is related to the virtualization of already implemented network functions, such as proprietary hardware appliances. It implies the understanding of, and the transition from, a hardware implementation to a software one. The transition is based on various defined patterns that can be used in the process.
NFV Reliability and Availability: These are real challenges, and the work involved in this component started from the definition of various problems, use cases, requirements, and principles, with the aim of offering the same level of availability as legacy systems. With regard to reliability, the documentation only sets the stage for future work; it identifies various problems and indicates the best practices used in designing resilient NFV systems.
NFV Performance and Portability: The purpose of NFV, in general, is to transform the way we work with the networks of the future. For this purpose, it needs to prove itself a worthy solution with respect to industry standards. This part of the documentation explains how to apply the best practices related to performance and portability in a general VNF deployment.
NFV Security: Since it is a large component of the industry, NFV is concerned about, and also dependent on, the security of networking and cloud computing, which makes it critical for NFV to assure security. The Security Expert Group focuses on these concerns.

An architectural overview of these components is presented here:

After all the documentation is in place, a number of proofs of concept need to be executed in order to test the limitations of these components and adjust the theoretical components accordingly. They have also appeared to encourage the development of the NFV ecosystem.

SDN

Software-Defined Networking (SDN) is an approach to networking that offers administrators the possibility to manage various services through an abstraction of the available functionality. This is realized by decoupling the system into a control plane and a data plane: making decisions about the network traffic that is sent represents the control plane realm, while forwarding that traffic is the job of the data plane. Of course, some method of communication between the control and data planes is required, so the OpenFlow mechanism entered the equation first; however, other components could take its place as well.

The intention of SDN was to offer an architecture that is manageable, cost-effective, adaptable, and dynamic, as well as suitable for the dynamic and high-bandwidth scenarios that are common today. The OpenFlow component was the foundation of the SDN solution. The SDN architecture permits the following (a minimal controller sketch is shown after this list):

Direct programming: The control plane is directly programmable because it is completely decoupled from the data plane.
Programmatic configuration: SDN permits the management, configuration, and optimization of resources through programs. These programs can also be written by anyone because they are not dependent on any proprietary components.
Agility: The abstraction between the two planes permits the adjustment of network flows according to the needs of a developer.
Central management: Logical components can be centered on the control plane, which offers a viewpoint of the network to other applications, engines, and so on.
Open standards and vendor neutrality: SDN is implemented using open standards that have simplified its design and operation, because the number of instructions that has to be provided to controllers is smaller than in scenarios where multiple vendor-specific protocols and devices have to be handled.
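To make the idea of a directly programmable control plane a little more concrete, here is a minimal sketch of an OpenFlow controller application. It uses the Ryu framework, which is not mentioned above and is only one of several possible choices, so treat the framework, class, and handler names as illustrative assumptions rather than part of the original text. The application reacts to packet-in events from an OpenFlow 1.3 switch and simply floods every packet, which is roughly the "hello world" of SDN controllers.

# A minimal Ryu application: the forwarding decision lives in ordinary
# Python code (the control plane), completely decoupled from the switch
# (the data plane).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FloodSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                    # the packet-in message from the switch
        datapath = msg.datapath         # the switch that sent it
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Instruct the switch to flood the packet out of all ports.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions,
                                  data=data)
        datapath.send_msg(out)

Such an application would typically be started with ryu-manager and pointed at a switch (for example, an Open vSwitch instance) configured to use it as its controller; the point here is only to show that forwarding behavior is expressed in a regular program instead of in vendor firmware.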
Also, meeting market requirements with traditional solutions would have been impossible, taking into account that the newly emerging markets of mobile device communication, Internet of Things (IoT), Machine to Machine (M2M), Industry 4.0, and others all require networking support. Taking into consideration the budgets available for further development, various IT departments were all faced with making a decision. It seems that the mobile device communication market has decided to move toward open source, in the hope that this investment will prove its real capabilities and also lead to a brighter future.

OPNFV

The Open Platform for NFV project tries to offer an open source reference platform that is carrier-grade and tightly integrated in order to facilitate industry peers helping to improve and move the NFV concept forward. Its purpose is to offer consistency, interoperability, and performance among the numerous blocks and projects that already exist. This platform will also try to work closely with a variety of open source projects and continuously help with integration, while at the same time filling the development gaps left by any of them.

This project is expected to lead to an increase in performance, reliability, serviceability, availability, and power efficiency, but at the same time also deliver an extensive platform for instrumentation. It will start with the development of an NFV infrastructure and a virtualized infrastructure management system, where it will combine a number of already available projects. Its reference system architecture is represented by the x86 architecture.

The project's initial focus point and proposed implementation can be consulted in the following image. From this image, it can easily be seen that the project, although very young since it was started in November 2014, has had an accelerated start and already has a few implementation proposals. There are already a number of large companies and organizations that have started working on their specific demos. OPNFV has not waited for them to finish and is already discussing a number of proposed projects and initiatives. These are intended both to meet the needs of its members and to assure the reliability of various components, such as continuous integration, fault management, test-bed infrastructure, and others. The following figure describes the structure of OPNFV:

The project has been leveraging as many open source projects as possible. All the adaptations made to these projects can be done in two places. Firstly, they can be made inside the project, if this does not require substantial functionality changes that could cause divergence from its purpose and roadmap. The second option complements the first and is necessary for changes that do not fall into the first category; these should be included somewhere in the OPNFV project's codebase. None of the changes that have been made should be upstreamed without proper testing within the development cycle of OPNFV.

Another important element that needs to be mentioned is that OPNFV does not use any specific or additional hardware. It only uses available hardware resources as long as the VI-Ha reference point is supported. In the preceding image, it can be seen that this is already done by providers such as Intel for the computing hardware, NetApp for the storage hardware, and Mellanox for the network hardware components.

The OPNFV board and technical steering committee have quite a large palette of open source projects.
They vary from Infrastructure as a Service (IaaS) and hypervisors to SDN controllers, and the list continues. This offers a large number of contributors the possibility to try out skills that they maybe did not have the time to work on, or wanted to learn but did not have the opportunity to. Also, a more diversified community offers a broader view of the same subject.

There is a large variety of appliances for the OPNFV project. The virtual network functions are diverse: for mobile deployments, mobile gateways (such as the Serving Gateway (SGW), the Packet Data Network Gateway (PGW), and so on) and related functions (the Mobility Management Entity (MME) and gateways); firewalls or application-level gateways and filters (web and e-mail traffic filters); and test and diagnostic equipment (Service-Level Agreement (SLA) monitoring). These VNF deployments need to be easy to operate, scale, and evolve, independently of the type of VNF that is deployed. OPNFV sets out to create a platform that has to support a set of qualities and use cases, as follows:

A common mechanism for the life-cycle management of VNFs, which includes deployment, instantiation, configuration, start and stop, upgrade/downgrade, and final decommissioning
A consistent mechanism to specify and interconnect VNFs, VNFCs, and PNFs, independent of the physical network infrastructure, network overlays, and so on, that is, a virtual link
A common mechanism to dynamically instantiate new VNF instances or decommission sufficient ones to meet the current performance, scale, and network bandwidth needs
A mechanism to detect faults and failures in the NFVI, VIM, and other components of an infrastructure, as well as recover from these failures
A mechanism to source/sink traffic from/to a physical network function to/from a virtual network function
NFVI as a Service for hosting different VNF instances from different vendors on the same infrastructure

There are some notable and easy-to-grasp use case examples that should be mentioned here. They are organized into four categories. Let's start with the first category: the Residential/Access category. It can be used to virtualize the home environment, and it also provides fixed access to NFV. The next one is the data center category: it covers the virtualization of CDNs and provides the use cases that deal with it. The mobile category consists of the virtualization of mobile core networks and IMS, as well as the virtualization of mobile base stations. Lastly, there is the cloud category, which includes NFVIaaS, VNFaaS, the VNF forwarding graph (Service Chains), and the use cases of VNPaaS.

More information about this project and various implementation components is available at https://www.opnfv.org/. For the definitions of missing terminologies, please consult http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.02.01_60/gs_NFV003v010201p.pdf.

Virtualization support for the Yocto Project

The meta-virtualization layer tries to create a long- and medium-term, production-ready layer specifically for embedded virtualization. The roles that this layer has are:

Simplifying the way collaborative benchmarking and research is done with tools such as KVM/LXC virtualization, combined with advanced core isolation and other techniques
Integrating and contributing to projects such as OpenFlow, Open vSwitch, LXC, dmtcp, CRIU, and others, which can be used with other components, such as OpenStack or Carrier Grade Linux
To summarize this in one sentence, this layer tries to provide support for constructing OpenEmbedded and Yocto Project-based virtualized solutions. The packages that are available in this layer, and which I will briefly talk about, are as follows:

CRIU
Docker
LXC
irqbalance
libvirt
Xen
Open vSwitch

This layer can be used in conjunction with the meta-cloud-services layer, which offers cloud agents and API support for various cloud-based solutions. In this article, I am referring to both of these layers because I think it is fitting to present these two components together. Inside the meta-cloud-services layer, there are also a couple of packages that will be discussed and briefly presented, as follows:

openLDAP
SPICE
Qpid
RabbitMQ
Tempest
Cyrus-SASL
Puppet
oVirt
OpenStack

Having mentioned these components, I will now move on to the explanation of each of these tools. Let's start with the content of the meta-virtualization layer, more exactly with the CRIU package, a project that implements Checkpoint/Restore In Userspace for Linux. It can be used to freeze an already running application and checkpoint it to a hard drive as a collection of files. These checkpoints can be used to restore and execute the application from that point. It can be used as part of a number of use cases, as follows:

Live migration of containers: This is the primary use case for the project. The container is checkpointed and the resulting image is moved to another box and restored there, making the whole experience almost unnoticeable to the user.
Seamless kernel upgrades: The kernel replacement activity can be done without stopping activities. The state can be checkpointed, the kernel replaced by calling kexec, and all the services restored afterwards.
Speeding up slow boot services: A service that has a slow boot procedure can be checkpointed after the first startup is finished and, for consecutive starts, restored from that point.
Load balancing of networks: This is part of the TCP_REPAIR socket option and switches the socket into a special state. The socket is actually put into the state expected of it at the end of the operation. For example, if connect() is called, the socket will be put in an ESTABLISHED state, as requested, without checking for acknowledgment of communication from the other end, so offloading can be done at the application level.
Desktop environment suspend/resume: This is based on the fact that the suspend/restore action for a screen session or an X application is far faster than the close/open operation.
High-performance computing issues: CRIU can be used both for load balancing of tasks over a cluster and for saving cluster node states in case a crash occurs. Having a number of snapshots for applications doesn't hurt anybody.
Duplication of processes: This is similar to a remote fork() operation.
Snapshots for applications: A series of application states can be saved and reverted to if necessary. This can be used both as a redo to reach the desired state of an application and for debugging purposes.
Save capability in applications that do not have this option: An example of such an application could be a game in which, after reaching a certain level, a checkpoint is exactly what you need.
Migrate a forgotten application onto the screen: If you have forgotten to include an application in your screen session and you are already there, CRIU can help with the migration process.
Debugging of applications that have hung: For services that are stuck and need a quick restart, a copy of the service can be restored. A dump of the process can also be taken and, through debugging, the cause of the problem can be found.
Application behavior analysis on a different machine: For applications that could behave differently from one machine to another, a snapshot of the application in question can be taken and transferred to the other machine. Here, the debugging process can also be an option.
Dry running updates: Before a system or kernel update is done, its services and critical applications can be duplicated onto a virtual machine, and after the system update and all the test cases pass, the real update can be done.
Fault-tolerant systems: CRIU can be used successfully for process duplication on other machines.

The next element is irqbalance, a distributed hardware interrupt system that is available across multiple processors and multiprocessor systems. It is, in fact, a daemon used to balance interrupts across multiple CPUs, and its purpose is to offer better performance as well as a better balance of I/O operations on SMP systems. It has alternatives, such as smp_affinity, which could achieve maximum performance in theory but lacks the flexibility that irqbalance provides.

The libvirt toolkit can be used to connect with the virtualization capabilities available in recent Linux kernel versions; it is licensed under the GNU Lesser General Public License. It offers support for a large number of packages, as follows:

The KVM/QEMU Linux hypervisor
The Xen hypervisor
The LXC Linux container system
The OpenVZ Linux container system
User Mode Linux, a paravirtualized kernel
Hypervisors that include VirtualBox, VMware ESX, GSX, Workstation and Player, IBM PowerVM, Microsoft Hyper-V, Parallels, and Bhyve

Besides these packages, it also offers support for storage on a large variety of filesystems, such as IDE, SCSI, or USB disks, Fibre Channel, LVM, and iSCSI or NFS, as well as support for virtual networks. It is the building block for other higher-level applications and tools that focus on the virtualization of a node, and it does this in a secure way. It also offers the possibility of remote connections. For more information about libvirt, take a look at its project goals and terminologies at http://libvirt.org/goals.html.
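Since libvirt is mostly consumed through its language bindings rather than invoked directly, a short sketch may help to show what its API looks like. The following snippet is a minimal example, assuming the libvirt Python binding (the libvirt-python package) is installed; it opens a read-only connection to a local QEMU/KVM instance and prints the defined domains. For a Xen host, only the connection URI would change (for example, to xen:///).

# A minimal, read-only libvirt session using the libvirt Python binding.
import libvirt

# 'qemu:///system' targets the local KVM/QEMU driver; other drivers
# (xen:///, lxc:///, and so on) are reached through the same API.
conn = libvirt.openReadOnly('qemu:///system')
if conn is None:
    raise SystemExit('Failed to connect to the hypervisor')

for dom in conn.listAllDomains():
    # info() returns [state, maxMemory, memory, nrVirtCpu, cpuTime]
    state, maxmem, mem, vcpus, cputime = dom.info()
    print('{0}: state={1}, vCPUs={2}, memory={3} KiB'.format(
        dom.name(), state, vcpus, mem))

conn.close()

Higher-level tools such as virsh, virt-manager, and oVirt are essentially built on top of calls like these.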
The next is Open vSwitch, a production-quality implementation of a multilayer virtual switch. This software component is licensed under Apache 2.0 and is designed to enable massive network automation through various programmatic extensions. The Open vSwitch package, also abbreviated as OVS, provides a two-stack layer for hardware virtualization and also supports a large number of the standards and protocols available in a computer network, such as sFlow, NetFlow, SPAN, CLI, RSPAN, 802.1ag, LACP, and so on.

Xen is a hypervisor with a microkernel design that provides services allowing multiple computer operating systems to be executed on the same hardware. It was first developed at the University of Cambridge in 2003 and is developed under the GNU General Public License version 2. This piece of software runs in a more privileged state and is available for the ARM, IA-32, and x86-64 instruction sets. A hypervisor is a piece of software that is concerned with the CPU scheduling and memory management of various domains. It does this from domain 0 (dom0), which controls all the other unprivileged domains, called domU; Xen boots from a bootloader and usually loads a paravirtualized operating system into the dom0 host domain. A brief look at the Xen project architecture is available here:

Linux Containers (LXC) is the next element available in the meta-virtualization layer. It is a well-known set of tools and libraries that offer virtualization at the operating system level by providing isolated containers on a Linux control host machine. It combines the functionality of kernel control groups (cgroups) with support for isolated namespaces to provide an isolated environment. It has received a fair amount of attention, mostly due to Docker, which will be briefly mentioned a bit later. Also, it is considered a lightweight alternative to full machine virtualization. Both of these options, containers and machine virtualization, have a fair number of advantages and disadvantages. With the first option, containers offer low overhead by sharing certain components, but it may turn out that the isolation is not as good. Machine virtualization is exactly the opposite of this and offers a great solution to isolation at the cost of a bigger overhead. These two solutions could also be seen as complementary, but this is only my personal view of the two. In reality, each of them has its particular set of advantages and disadvantages that could sometimes make them uncomplementary as well. More information about Linux containers is available at https://linuxcontainers.org/.

The last component of the meta-virtualization layer that will be discussed is Docker, an open source piece of software that tries to automate the method of deploying applications inside Linux containers. It does this by offering an abstraction layer over LXC. Its architecture is better described in this image:

As you can see in the preceding diagram, this software package is able to use the resources of the operating system. Here, I am referring to the functionality of the Linux kernel, while keeping applications isolated from the operating system. It can do this either through LXC or other alternatives, such as libvirt and systemd-nspawn, which are seen as indirect implementations. It can also do this directly through the libcontainer library, which has been around since the 0.9 version of Docker. Docker is a great component if you want to obtain automation for distributed systems, such as large-scale web deployments, service-oriented architectures, continuous deployment systems, database clusters, private PaaS, and so on. More information about its use cases is available at https://www.docker.com/resources/usecases/. Make sure you take a look at this website; interesting information can often be found there.
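To give an idea of what this automation looks like in practice, here is a small sketch that drives the Docker daemon from Python. It assumes the Docker SDK for Python (the docker package) is installed; the article itself does not prescribe any particular client library, so treat this purely as an illustration. The snippet runs a throwaway container from a small image, which is the same workflow the docker command-line client performs.

# A minimal sketch using the Docker SDK for Python (pip install docker).
# It talks to the local Docker daemon over its default socket.
import docker

client = docker.from_env()

# Run a one-shot container; remove=True deletes it once it exits.
output = client.containers.run('alpine:latest',
                               command='echo hello from a container',
                               remove=True)
print(output.decode().strip())

# Containers share the host kernel, so listing them is cheap and fast.
for container in client.containers.list(all=True):
    print(container.short_id, container.status, container.image.tags)

Because the container shares the host kernel, the whole run typically completes in well under a second once the image has been pulled.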
The interaction between the backend and frontend is realized through VD-Interfaces (VDI), and as shown in the following diagram, its current focus is remote access to QEMU/KVM virtual machines:
Next on the list is oVirt, a virtualization platform that offers a web interface. It is easy to use and helps in the management of virtual machines, virtualized networks, and storage. Its architecture consists of an oVirt Engine and multiple nodes. The engine is the component that comes equipped with a user-friendly interface to manage logical and physical resources; it also runs the virtual machines, which could be either oVirt nodes, Fedora, or CentOS hosts. The only downside of using oVirt is that it offers support for only a limited number of hosts, as follows:
Fedora 20
CentOS 6.6, 7.0
Red Hat Enterprise Linux 6.6, 7.0
Scientific Linux 6.6, 7.0
As a tool, it is really powerful. It offers integration with libvirt for Virtual Desktops and Servers Manager (VDSM) communication with virtual machines, as well as support for the SPICE communication protocol, which enables remote desktop sharing. It is a solution that was started and is mainly maintained by Red Hat. It is the base element of their Red Hat Enterprise Virtualization (RHEV), but one interesting thing that should be noted is that Red Hat is now not only a supporter of projects such as oVirt and Aeolus, but has also been a platinum member of the OpenStack Foundation since 2012. For more information on projects such as oVirt, Aeolus, and RHEV, the following links can be useful to you: http://www.redhat.com/promo/rhev3/?sc_cid=70160000000Ty5wAAC&offer_id=70160000000Ty5NAAS, http://www.aeolusproject.org/ and http://www.ovirt.org/Home.
I will move on to a different component now. Here, I am referring to the open source implementation of the Lightweight Directory Access Protocol, simply called OpenLDAP. Although it has a somewhat controversial license called the OpenLDAP Public License, which is similar in essence to the BSD license, it is not recorded at opensource.org, making it uncertified by the Open Source Initiative (OSI). This software component comes as a suite of elements, as follows:
A standalone LDAP daemon that plays the role of a server, called slapd
A number of libraries that implement the LDAP protocol
Last but not the least, a series of tools and utilities that also include a couple of client samples
There are also a number of additions that should be mentioned, such as ldapc++, a set of libraries written in C++; JLDAP, libraries written in Java; LMDB, a memory-mapped database library; Fortress, a role-based identity management SDK, also written in Java; and JDBC-LDAP, a JDBC-LDAP bridge driver written in Java.
Cyrus-SASL is a generic client-server library implementation for Simple Authentication and Security Layer (SASL) authentication. SASL is a method used for adding authentication support to connection-based protocols. A connection-based protocol adds a command that identifies and authenticates a user to the requested server and, if negotiation is required, an additional security layer is added between the protocol and the connection for security purposes. More information about SASL is available in RFC 2222, available at http://www.ietf.org/rfc/rfc2222.txt. For a more detailed description of Cyrus SASL, refer to http://www.sendmail.org/~ca/email/cyrus/sysadmin.html.
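As a small aside, here is a minimal sketch of how a client application could talk to such an OpenLDAP (slapd) server. It assumes the third-party ldap3 Python library is installed and uses an illustrative base DN, bind DN, and password; none of these names come from the packages described above.
# Minimal sketch: bind to a local slapd server and search for person entries.
# Assumes ldap3 is installed (pip install ldap3); the DN and password below
# are placeholders for whatever your directory actually uses.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server('ldap://localhost:389', get_info=ALL)
conn = Connection(server,
                  user='cn=admin,dc=example,dc=org',  # hypothetical admin bind DN
                  password='secret',                  # hypothetical password
                  auto_bind=True)                     # bind as soon as the object is created

# Find every person entry under the base DN and print two of its attributes.
conn.search('dc=example,dc=org', '(objectClass=person)',
            search_scope=SUBTREE, attributes=['cn', 'mail'])
for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
A SASL bind, such as one negotiated through the Cyrus-SASL mechanisms a server exposes, would follow the same pattern, with only the authentication arguments changed.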
Qpid is a messaging tool developed by Apache, which understands the Advanced Message Queueing Protocol (AMQP) and has support for various languages and platforms. AMQP is an open protocol designed for high-performance, reliable messaging over a network. More information about AMQP is available at http://www.amqp.org/specification/1.0/amqp-org-download. Here, you can find more information about the protocol specifications as well as about the project in general. The Qpid projects push the development of AMQP ecosystems, and this is done by offering message brokers and APIs that can be used in any developer application that intends to use AMQP messaging as part of its product. To do this, the following can be done:
Keeping the source code open source.
Making AMQP available for a large variety of computing environments and programming languages.
Offering the necessary tools to simplify the development process of an application.
Creating a messaging infrastructure to make sure that other services can integrate well with the AMQP network.
Creating a messaging product that makes integration with AMQP trivial for any programming language or computing environment. Make sure that you take a look at Qpid Proton at http://qpid.apache.org/proton/overview.html for this.
More information about the preceding functionalities can be found at http://qpid.apache.org/components/index.html#messaging-apis.
RabbitMQ is another message broker software component that implements AMQP and is also available as open source. It has a number of components, as follows:
The RabbitMQ exchange server
Gateways for HTTP, the Streaming Text Oriented Message Protocol (STOMP), and the Message Queue Telemetry Transport (MQTT)
AMQP client libraries for a variety of programming languages, most notably Java, Erlang, and the .NET Framework
A plugin platform for a number of custom components that also offers a collection of predefined ones:
Shovel: It is a plugin that executes the copy/move operation for messages between brokers
Management: It enables the control and monitoring of brokers and clusters of brokers
Federation: It enables the sharing of messages between brokers at the exchange level
You can find more information regarding RabbitMQ by referring to the RabbitMQ documentation at http://www.rabbitmq.com/documentation.html. Comparing the two, Qpid and RabbitMQ, it can be concluded that RabbitMQ is better and also that it has fantastic documentation. This makes it the first choice for the OpenStack Foundation, as well as for readers interested in benchmarking information for more than just these two frameworks, which is available at http://blog.x-aeon.com/2013/04/10/a-quick-message-queue-benchmark-activemq-rabbitmq-hornetq-qpid-apollo/. One such result is also available in this image for comparison purposes:
The next element is Puppet, an open source configuration management system that allows IT infrastructure to have certain states defined and also enforces these states. By doing this, it offers a great automation system for system administrators. This project is developed by Puppet Labs and was released under the GNU General Public License until version 2.7.0. After this, it moved to the Apache License 2.0 and is now available in two flavors:
The open source Puppet version: It is mostly similar to the preceding tools and provides configuration management solutions that permit the definition and automation of states. It is available for Linux and UNIX, as well as Mac OS X and Windows.
The Puppet enterprise edition: It is a commercial version that goes beyond the capabilities of the open source Puppet and permits the automation of the configuration and management process.
It is a tool that defines a declarative language for later use in system configuration. It can be applied directly on the system, or even compiled as a catalog and deployed on a target using a client-server paradigm, which is usually the REST API. Another component is an agent that enforces the resources available in the manifest. The resource abstraction is, of course, done through an abstraction layer that defines the configuration in higher-level terms that are very different from the operating system-specific commands. If you visit http://docs.puppetlabs.com/, you will find more documentation related to Puppet and other Puppet Labs tools.
With all this in place, I believe it is time to present the main component of the meta-cloud-services layer, called OpenStack. It is a cloud operating system that is based on controlling a large number of components, and together they offer pools of compute, storage, and networking resources. All of them are managed through a dashboard that is, of course, offered by another component and gives administrators control. It offers users the possibility of provisioning resources from the same web interface. Here is an image depicting the open source cloud operating system, which is actually OpenStack:
It is primarily used as an IaaS solution, its components are maintained by the OpenStack Foundation, and it is available under the Apache License version 2. In the Foundation today, there are more than 200 companies that contribute to the source code and to the general development and maintenance of the software. At the heart of it all are its components; also, each component has a Python module used for simple interaction and automation possibilities:
Compute (Nova): It is used for the hosting and management of cloud computing systems. It manages the life cycles of the compute instances of an environment and is responsible for the spawning, decommissioning, and scheduling of various virtual machines on demand. With regard to hypervisors, KVM is the preferred option, but other options such as Xen and VMware are also viable.
Object Storage (Swift): It is used for storage and data structure retrieval via a RESTful HTTP API. It is a scalable and fault-tolerant system that permits data replication, with objects and files available on multiple disk drives. It is developed mainly by an object storage software company called SwiftStack.
Block Storage (Cinder): It provides persistent block storage for OpenStack instances. It manages the creation as well as the attach and detach actions for block devices. In a cloud, users manage their own devices, so a vast majority of storage platforms and scenarios should be supported. For this purpose, it offers a pluggable architecture that facilitates the process.
Networking (Neutron): It is the component responsible for network-related services, also known as Network Connectivity as a Service. It provides an API for network management and also makes sure that certain limitations are prevented. It also has an architecture based on pluggable modules to ensure that as many networking vendors and technologies as possible are supported.
Dashboard (Horizon): It provides web-based administrator and user graphical interfaces for interaction with the other resources made available by all the other components.
It is also designed keeping extensibility in mind, because it is able to interact with other components responsible for monitoring and billing, as well as with additional management tools. It also offers the possibility of rebranding according to the needs of commercial vendors.
Identity Service (Keystone): It is an authentication and authorization service. It offers support for multiple forms of authentication and also for existing backend directory services, such as LDAP. It provides a catalog of users and of the resources they can access.
Image Service (Glance): It is used for the discovery, storage, registration, and retrieval of virtual machine images. A number of already stored images can be used as templates. OpenStack also provides an operating system image for testing purposes. Glance is the only module capable of adding, deleting, duplicating, and sharing OpenStack images between various servers and virtual machines. All the other modules interact with the images using the available APIs of Glance.
Telemetry (Ceilometer): It is a module that provides billing, benchmarking, and statistical results across all current and future components of OpenStack with the help of numerous counters that permit extensibility. This makes it a very scalable module.
Orchestrator (Heat): It is a service that manages multiple composite cloud applications with the help of various template formats, such as Heat Orchestration Templates (HOT) or AWS CloudFormation. The communication is done both through a CloudFormation-compatible Query API and through an OpenStack REST API.
Database (Trove): It provides Cloud Database as a Service functionality that is both reliable and scalable. It uses relational and nonrelational database engines.
Bare Metal Provisioning (Ironic): It is a component that provides support for bare-metal machines instead of virtual machines. It started as a fork of the Nova Baremetal driver and grew to become the best solution for a bare-metal hypervisor. It also offers a set of plugins for interaction with various bare-metal hypervisors. It is used by default with PXE and IPMI but, of course, with the help of the available plugins, it can offer extended support for various vendor-specific functionalities.
Multiple Tenant Cloud Messaging (Zaqar): It is, as the name suggests, a multitenant cloud messaging service for web developers who are interested in Software as a Service (SaaS). They can use it to send messages between various components by using a number of communication patterns. However, it can also be used with other components for surfacing events to end users, as well as for communication in the over-cloud layer. Its former name was Marconi, and it also provides the possibility of scalable and secure messaging.
Elastic Map Reduce (Sahara): It is a module that tries to automate the provisioning of Hadoop clusters. It only requires the definition of various fields, such as the Hadoop version, the node topology, node hardware details, and so on. After this, in a few minutes, a Hadoop cluster is deployed and ready for interaction. It also offers the possibility of various configurations after deployment.
Having mentioned all this, a conceptual architecture is presented in the following image to show you the ways in which the preceding components interact with one another. To automate the deployment of such an environment in production, automation tools, such as the previously mentioned Puppet tool, can be used.
Take a look at this diagram:
Now, let's move on and see how such a system can be deployed using the functionalities of the Yocto Project. For this activity to start, all the required metadata layers should be put together. Besides the already available Poky repository, other ones are also required, and they are defined in the layer index on OpenEmbedded's website because, this time, the README file is incomplete:
git clone -b dizzy git://git.openembedded.org/meta-openembedded
git clone -b dizzy git://git.yoctoproject.org/meta-virtualization
git clone -b icehouse git://git.yoctoproject.org/meta-cloud-services
source oe-init-build-env ../build-controller
After the appropriate controller build directory is created, it needs to be configured. Inside the conf/local.conf file, add the corresponding machine configuration, such as qemux86-64, and inside the conf/bblayers.conf file, the BBLAYERS variable should be defined accordingly. There are extra metadata layers, besides the ones that are already available. The ones that should be defined in this variable are:
meta-cloud-services
meta-cloud-services/meta-openstack-controller-deploy
meta-cloud-services/meta-openstack
meta-cloud-services/meta-openstack-qemu
meta-openembedded/meta-oe
meta-openembedded/meta-networking
meta-openembedded/meta-python
meta-openembedded/meta-filesystem
meta-openembedded/meta-webserver
meta-openembedded/meta-ruby
After the configuration is done, the controller image is built using the bitbake openstack-image-controller command. The controller can be started using the runqemu qemux86-64 openstack-image-controller kvm nographic qemuparams="-m 4096" command. After finishing this activity, the deployment of the compute node can be started in this way:
source oe-init-build-env ../build-compute
With the new build directory created, and since most of the work of the build process has already been done for the controller, build directories such as downloads and sstate-cache can be shared between them. This information should be indicated through the DL_DIR and SSTATE_DIR variables. The difference between the two conf/bblayers.conf files is that the second one, for the build-compute build directory, replaces meta-cloud-services/meta-openstack-controller-deploy with meta-cloud-services/meta-openstack-compute-deploy. This time, the build is done with bitbake openstack-image-compute and should finish faster. Having completed the build, the compute node can also be booted using the runqemu qemux86-64 openstack-image-compute kvm nographic qemuparams="-m 4096 -smp 4" command. This step implies loading the OpenStack Cirros image as follows:
wget download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
scp cirros-0.3.2-x86_64-disk.img root@<compute_ip_address>:~
ssh root@<compute_ip_address>
. /etc/nova/openrc
glance image-create --name "TestImage" --is-public true --container-format bare --disk-format qcow2 --file /home/root/cirros-0.3.2-x86_64-disk.img
Having done all of this, the user is free to access the Horizon web interface at http://<compute_ip_address>:8080/. The login information is admin and the password is password. Here, you can play around and create new instances, interact with them and, in general, do whatever crosses your mind. Do not worry if you've done something wrong to an instance; you can delete it and start again.
The last element from the meta-cloud-services layer is the Tempest integration test suite for OpenStack.
It is represented through a set of tests that are executed against the OpenStack trunk to make sure everything is working as it should. It is very useful for any OpenStack deployment. More information about Tempest is available at https://github.com/openstack/tempest. Summary In this article, you were not only presented with information about a number of virtualization concepts, such as NFV, SDN, VNF, and so on, but also with a number of open source components that contribute to everyday virtualization solutions. I offered you examples and even a small exercise to make sure that the information remains with you even after reading this book. I hope I made some of you curious about certain things. I also hope that some of you will read up on projects that were not presented here, such as the OpenDaylight (ODL) initiative, which has only been mentioned in an image as an implementation suggestion. If this is the case, I can say I have fulfilled my goal. Resources for Article: Further resources on this subject: Veil-Evasion [article] Baking Bits with Yocto Project [article] An Introduction to the Terminal [article]

Extending Chef

Packt
07 Jul 2015
34 min read
In this article by Mayank Joshi, the author of Mastering Chef, we'll learn how to go about building custom Knife plugins, and we'll also see how we can write custom handlers that extend a chef-client run, for example, to report any issues with the run. (For more resources related to this topic, see here.) Custom Knife plugins Knife is one of the most widely used tools in the Chef ecosystem. Be it managing your clients, nodes, cookbooks, environments, roles, users, or handling tasks such as provisioning machines in cloud environments such as Amazon AWS, Microsoft Azure, and so on, there is a way to go about doing all of these things through Knife. However, Knife, as provided during the installation of Chef, isn't capable of performing all these tasks on its own. It comes with a basic set of functionalities, which help provide an interface between the local Chef repository, the workstation, and the Chef server. The following are the functionalities provided, by default, by the Knife executable: Management of nodes Management of clients and users Management of cookbooks, roles, and environments Installation of chef-client on the nodes through bootstrapping Searching for data that is indexed on the Chef server. However, apart from these functions, there are plenty more functions that can be performed using Knife; all this is possible through the use of plugins. Knife plugins are a set of one (or more) subcommands that can be added to Knife to support an additional functionality that is not built into the base set of Knife subcommands. Most of the Knife plugins are initially built by users such as you, and over a period of time, they are incorporated into the official Chef code base. A Knife plugin is usually installed into the ~/.chef/plugins/knife directory, from where it can be executed just like any other Knife subcommand. It can also be loaded from the .chef/plugins/knife directory in the Chef repository or, if it's installed through RubyGems, it can be loaded from the path where the executable is installed. Ideally, a plugin should be kept in the ~/.chef/plugins/knife directory so that it's reusable across projects, and also in the .chef/plugins/knife directory of the Chef repository so that its code can be shared with other team members. For distribution purposes, it should ideally be distributed as a Ruby gem. The skeleton of a Knife plugin A Knife plugin is structured somewhat like this: require 'chef/knife'   module ModuleName class ClassName < Chef::Knife      deps do      require 'chef/dependencies'    end      banner "knife subcommand argument VALUE (options)"      option :name_of_option,      :short => "-l value",      :long => "--long-option-name value",      :description => "The description of the option",      :proc => Proc.new { code_to_be_executed },      :boolean => true | false,      :default => default_value      def run      #Code    end end end Let's look at this skeleton, one line at a time: require: This is used to require other Knife plugins required by a new plugin. module ModuleName: This defines the namespace in which the plugin will live. Every Knife plugin lives in its own namespace. class ClassName < Chef::Knife: This declares that a plugin is a subclass of Knife. deps do: This defines a list of dependencies. banner: This is used to display a message when a user enters knife subcommand --help. option :name_of_option: This defines all the different command line options available for this new subcommand.
def run: This is the place in which we specify the Ruby code that needs to be executed. Here are the command-line options: :short defines the short option name :long defines the long option name :description defines a description that is displayed when a user enters knife subclassName --help :boolean defines whether an option is true or false; if the :short and :long names define a value, then this attribute should not be used :proc defines the code that determines the value for this option :default defines a default value The following example shows a part of a Knife plugin named knife-windows: require 'chef/knife' require 'chef/knife/winrm_base'   class Chef class Knife    class Winrm < Knife        include Chef::Knife::WinrmBase        deps do        require 'readline'        require 'chef/search/query'        require 'em-winrm'      end        attr_writer :password        banner "knife winrm QUERY COMMAND (options)"        option :attribute,        :short => "-a ATTR",        :long => "--attribute ATTR",        :description => "The attribute to use for opening the connection - default is fqdn",        :default => "fqdn"        ... # more options        def session        session_opts = {}        session_opts[:logger] = Chef::Log.logger if Chef::Log.level == :debug        @session ||= begin          s = EventMachine::WinRM::Session.new(session_opts)          s.on_output do |host, data|            print_data(host, data)          end          s.on_error do |host, err|            print_data(host, err, :red)          end          s.on_command_complete do |host|             host = host == :all ? 'All Servers' : host            Chef::Log.debug("command complete on #{host}")          end          s        end        end        ... # more def blocks      end end end Namespace As we saw with the skeleton, the Knife plugin should have its own namespace and the namespace is declared using the module method as follows: require 'chef/knife' #Any other require, if needed   module NameSpace class SubclassName < Chef::Knife Here, the plugin is available under the namespace called NameSpace. One should keep in mind that Knife loads the subcommand irrespective of the namespace to which it belongs. Class name The class name declares a plugin as a subclass of both Knife and Chef. For example: class SubclassName < Chef::Knife The capitalization of the name is very important. The capitalization pattern can be used to define the word grouping that makes the best sense for the use of a plugin. For example, if we want our plugin subcommand to work as follows: knife bootstrap hdfs We should have our class name as: BootstrapHdfs. If, say, we used a class name such as BootStrapHdfs, then our subcommand would be as follows: knife boot strap hdfs It's important to remember that a plugin can override an existing Knife subcommand. For example, we already know about commands such as knife cookbook upload. If you want to override the current functionality of this command, all you need to do is create a new plugin with the following name: class CookbookUpload < Chef::Knife Banner Whenever a user enters the knife --help command, he/she is presented with a list of available subcommands.
For example: knife --help Usage: knife sub-command (options)    -s, --server-url URL             Chef Server URL Available subcommands: (for details, knife SUB-COMMAND --help)   ** BACKUP COMMANDS ** knife backup export [COMPONENT [COMPONENT ...]] [-D DIR] (options) knife backup restore [COMPONENT [COMPONENT ...]] [-D DIR] (options)   ** BOOTSTRAP COMMANDS ** knife bootstrap FQDN (options) .... Let us say we are creating a new plugin and we would want Knife to be able to list it when a user enters the knife –help command. To accomplish this, we would need to make use of banner. For example, let's say we've a plugin called BootstrapHdfs with the following code: module NameSpace class BootstrapHdfs < Chef::Knife    ...    banner "knife bootstrap hdfs (options)"    ... end end Now, when a user enters the knife –help command, he'll see the following output: ** BOOTSTRAPHDFS COMMANDS ** knife bootstrap hdfs (options) Dependencies Reusability is one of the key paradigms in development and the same is true for Knife plugins. If you want a functionality of one Knife plugin to be available in another, you can use the deps method to ensure that all the necessary files are available. The deps method acts like a lazy loader, and it ensures that dependencies are loaded only when a plugin that requires them is executed. This is one of the reasons for using deps over require, as the overhead of the loading classes is reduced, thereby resulting in code with a lower memory footprint; hence, faster execution. One can use the following syntax to specify dependencies: deps do require 'chef/knife/name_of_command' require 'chef/search/query' #Other requires to fullfill dependencies end Requirements One can acquire the functionality available in other Knife plugins using the require method. This method can also be used to require the functionality available in other external libraries. This method can be used right at the beginning of the plugin script, however, it's always wise to use it inside deps, or else the libraries will be loaded even when they are not being put to use. The syntax to use require is fairly simple, as follows: require 'path_from_where_to_load_library' Let's say we want to use some functionalities provided by the bootstrap plugin. In order to accomplish this, we will first need to require the plugin: require 'chef/knife/bootstrap' Next, we'll need to create an object of that plugin: obj = Chef::Knife::Bootstrap.new Once we've the object with us, we can use it to pass arguments or options to that object. This is accomplished by changing the object's config and the name_arg variables. For example: obj.config[:use_sudo] = true Finally, we can run the plugin using the run method as follows: obj.run Options Almost every other Knife plugin accepts some command line option or other. These options can be added to a Knife subcommand using the option method. An option can have a Boolean value, string value, or we can even write a piece of code to determine the value of an option. 
Let's see each of them in action once: An option with a Boolean value (true/false): option :true_or_false, :short => "-t", :long => "--true-or-false", :description => "True/False?", :boolean => true | false, :default => true Here is an option with a string value: option :some_string_value, :short => "-s VALUE", :long => "--some-string-value VALUE", :description => "String value", :default => "xyz" An option where code is used to determine the option's value: option :tag, :short => "-T T=V[,T=V,...]", :long => "--tags Tag=Value[,Tag=Value,...]", :description => "A list of tags", :proc => Proc.new { |tags| tags.split(',') } Here the proc attribute will convert a list of comma-separated values into an array. All the options that are sent to the Knife subcommand through the command line are available in the form of a hash, which can be accessed using the config method. For example, say we had an option: option :option1, :short => "-s VALUE", :long => "--some-string-value VALUE", :description => "Some string value for option1", :default => "option1" Now, while issuing the Knife subcommand, say a user entered something like this: $ knife subcommand --option1 "option1_value" We can access this value for option1 in our Knife plugin run method using config[:option1] When a user enters the knife --help command, the description attributes are displayed as part of help. For example: **EXAMPLE COMMANDS** knife example -s, --some-type-of-string-value    This is not a random string value. -t, --true-or-false                 Is this value true? Or is this value false? -T, --tags                         A list of tags associated with the virtual machine.
By incorporating a search functionality in our custom Knife plugin, we can accomplish a lot of tasks, which would otherwise take a lot of effort to accomplish. For example, say we have classified our infrastructure into multiple environments and we want a plugin that allows us to upload a particular file or folder to all the instances in a particular environment on an ad hoc basis, without invoking a full chef-client run. This kind of task is very much doable by incorporating a search functionality into the plugin and using it to find the right set of nodes on which you want to perform a certain operation. We'll look at one such plugin in the next section. To be able to use Chef's search functionality, all you need to do is require Chef's query class and use an object of the Chef::Search::Query class to execute a query against the Chef server. For example: require 'chef/search/query' query_object = Chef::Search::Query.new query = 'chef_environment:production' query_object.search('node',query) do |node| puts "Node name = #{node.name}" end Since the name of a node is generally its FQDN, you can use the values returned in node.name to connect to remote machines and use any library such as net-scp to allow users to upload their files/folders to a remote machine. We'll try to accomplish this task when we write our custom plugin at the end of this article. We can also use this information to edit nodes. For example, say we had a set of machines acting as web servers. Initially, all these machines were running Apache as a web server. However, as the requirements changed, we wanted to switch over to Nginx. We can run the following piece of code to accomplish this task: require 'chef/search/query'   query_object = Chef::Search::Query.new query = 'run_list:*recipe\[apache2\]*' query_object.search('node',query) do |node| ui.msg "Changing run_list to recipe[nginx] for #{node.name}" node.run_list("recipe[nginx]") node.save ui.msg "New run_list: #{node.run_list}" end knife.rb settings Some of the settings defined by a Knife plugin can be configured so that they can be set inside the knife.rb script. There are two ways to go about doing this: By using the :proc attribute of the option method and code that references Chef::Config[:knife][:setting_name] By specifying the configuration setting directly within the def Ruby blocks using either Chef::Config[:knife][:setting_name] or config[:setting_name] An option that is defined in this way can be configured in knife.rb by using the following syntax: knife[:setting_name] This approach is especially useful when a particular setting is used a lot. The precedence order for a Knife option is: The value passed via the command line. The value saved in knife.rb The default value. The following example shows how the Knife bootstrap command uses a value in knife.rb using the :proc attribute: option :ssh_port, :short => '-p PORT', :long => '--ssh-port PORT', :description => 'The ssh port', :proc => Proc.new { |key| Chef::Config[:knife][:ssh_port] = key } Here Chef::Config[:knife][:ssh_port] tells Knife to check the knife.rb file for a knife[:ssh_port] setting.
The following example shows how the Knife bootstrap command calls the knife ssh subcommand for the actual SSH part of running a bootstrap operation: def knife_ssh ssh = Chef::Knife::Ssh.new ssh.ui = ui ssh.name_args = [ server_name, ssh_command ] ssh.config[:ssh_user] = Chef::Config[:knife][:ssh_user] || config[:ssh_user] ssh.config[:ssh_password] = config[:ssh_password] ssh.config[:ssh_port] = Chef::Config[:knife][:ssh_port] || config[:ssh_port] ssh.config[:ssh_gateway] = Chef::Config[:knife][:ssh_gateway] || config[:ssh_gateway] ssh.config[:identity_file] = Chef::Config[:knife][:identity_file] || config[:identity_file] ssh.config[:manual] = true ssh.config[:host_key_verify] = Chef::Config[:knife][:host_key_verify] || config[:host_key_verify] ssh.config[:on_error] = :raise ssh end Let's take a look at the preceding code: ssh = Chef::Knife::Ssh.new creates a new instance of the Ssh subclass named ssh A series of settings in Knife ssh are associated with a Knife bootstrap using the ssh.config[:setting_name] syntax Chef::Config[:knife][:setting_name] tells Knife to check the knife.rb file for various settings It also raises an exception if any aspect of the SSH operation fails User interactions The ui object provides a set of methods that can be used to define user interactions and to help ensure a consistent user experience across all different Knife plugins. One should make use of these methods, rather than handling user interactions manually. Method Description ui.ask(*args, &block) The ask method calls the corresponding ask method of the HighLine library. More details about the HighLine library can be found at http://www.rubydoc.info/gems/highline/1.7.2. ui.ask_question(question, opts={}) This is used to ask a user a question. If :default => default_value is passed as a second argument, default_value will be used if the user does not provide any answer. ui.color (string, *colors) This method is used to specify a color. For example: server = connections.server.create(server_def)   puts "#{ui.color("Instance ID", :cyan)}: #{server.id}"   puts "#{ui.color("Flavor", :cyan)}: #{server.flavor_id}"   puts "#{ui.color("Image", :cyan)}: #{server.image_id}"   ...   puts "#{ui.color("SSH Key", :cyan)}: #{server.key_name}" print "n#{ui.color("Waiting for server", :magenta)}" ui.color?() This indicates that the colored output should be used. This is only possible if an output is sent across to a terminal. ui.confirm(question,append_instructions=true) This is used to ask (Y/N) questions. If a user responds back with N, the command immediately exits with the status code 3. ui.edit_data(data,parse_output=true) This is used to edit data. This will result in firing up of an editor. ui.edit_object(class,name) This method provides a convenient way to download an object, edit it, and save it back to the Chef server. It takes two arguments, namely, the class of object to edit and the name of object to edit. ui.error This is used to present an error to a user. ui.fatal This is used to present a fatal error to a user. ui.highline This is used to provide direct access to a highline object provided by many ui methods. ui.info This is used to present information to a user. ui.interchange This is used to determine whether the output is in a data interchange format such as JSON or YAML. ui.list(*args) This method is a way to quickly and easily lay out lists. This method is actually a wrapper to the list method provided by the HighLine library. 
More details about the HighLine library can be found at http://www.rubydoc.info/gems/highline/1.7.2. ui.msg(message) This is used to present a message to a user. ui.output(data) This is used to present a data structure to a user. This makes use of a generic default presenter. ui.pretty_print(data) This is used to enable the pretty_print output for JSON data. ui.use_presenter(presenter_class) This is used to specify a custom output presenter. ui.warn(message) This is used to present a warning to a user. For example, to show a fatal error in a plugin in the same way that it would be shown in Knife, do something similar to the following: unless name_args.size == 1    ui.fatal "Fatal error !!!"    show_usage    exit 1 end Exception handling In most cases, the exception handling available within Knife is enough to ensure that the exception handling for a plugin is consistent across all the different plugins. However, if the required one can handle exceptions in the same way as any other Ruby program, one can make use of the begin-end block, along with rescue clauses, to tell Ruby which exceptions we want to handle. For example: def raise_and_rescue begin    puts 'Before raise'    raise 'An error has happened.'    puts 'After raise' rescue    puts 'Rescued' end puts 'After begin block' end   raise_and_rescue If we were to execute this code, we'd get the following output: ruby test.rb Before raise Rescued After begin block A simple Knife plugin With the knowledge about how Knife's plugin system works, let's go about writing our very own custom Knife plugin, which can be quite useful for some users. Before we jump into the code, let's understand the purpose that this plugin is supposed to serve. Let's say we've a setup where our infrastructure is distributed across different environments and we've also set up a bunch of roles, which are used while we try to bootstrap the machines using Chef. So, there are two ways in which a user can identify machines: By environments By roles Actually, any valid Chef search query that returns a node list can be the criteria to identify machines. However, we are limiting ourselves to these two criteria for now. Often, there are situations where a user might want to upload a file or folder to all the machines in a particular environment, or to all the machines belonging to a particular role. This plugin will help users accomplish this task with lots of ease. The plugin will accept three arguments. The first one will be a key-value pair with the key being chef_environment or a role, the second argument will be a path to the file or folder that is required to be uploaded, and the third argument will be the path on a remote machine where the files/folders will be uploaded to. The plugin will use Chef's search functionality to find the FQDN of machines, and eventually make use of the net-scp library to transfer the file/folder to the machines. 
Our plugin will be called knife-scp and we would like to use it as follows: knife scp chef_environment:production /path_of_file_or_folder_locally /path_on_remote_machine Here is the code that can help us accomplish this feat: require 'chef/knife'   module CustomPlugins class Scp < Chef::Knife    banner "knife scp SEARCH_QUERY PATH_OF_LOCAL_FILE_OR_FOLDER PATH_ON_REMOTE_MACHINE"      option :knife_config_path,      :short => "-c PATH_OF_knife.rb",      :long => "--config PATH_OF_knife.rb",      :description => "Specify path of knife.rb",      :default => "~/.chef/knife.rb"      deps do      require 'chef/search/query'      require 'net/scp'      require 'parallel'    end      def run      if name_args.length != 3        ui.msg "Missing arguments! Unable to execute the command successfully."        show_usage        exit 1      end                  Chef::Config.from_file(File.expand_path("#{config[:knife_config_path]}"))      query = name_args[0]      local_path = name_args[1]      remote_path = name_args[2]      query_object = Chef::Search::Query.new      fqdn_list = Array.new      query_object.search('node',query) do |node|        fqdn_list << node.name      end      if fqdn_list.length < 1        ui.msg "No valid servers found to copy the files to"      end      unless File.exist?(local_path)        ui.msg "#{local_path} doesn't exist on local machine"        exit 1      end        Parallel.each((1..fqdn_list.length).to_a, :in_processes => fqdn_list.length) do |i|        puts "Copying #{local_path} to #{Chef::Config[:knife][:ssh_user]}@#{fqdn_list[i-1]}:#{remote_path} "        Net::SCP.upload!(fqdn_list[i-1],"#{Chef::Config[:knife][:ssh_user]}","#{local_path}","#{remote_path}",:ssh => { :keys => ["#{Chef::Config[:knife][:identity_file]}"] }, :recursive => true)      end    end end end This plugin uses the following additional gems: The parallel gem to execute statements in parallel. More information about this gem can be found at https://github.com/grosser/parallel. The net-scp gem to do the actual transfer. This gem is a pure Ruby implementation of the SCP protocol. More information about the gem can be found at https://github.com/net-ssh/net-scp. Both these gems and the Chef search library are required in the deps block to define the dependencies. This plugin accepts three command line arguments and uses knife.rb to get information about which user to connect over SSH and also uses knife.rb to fetch information about the SSH key file to use. All these command line arguments are stored in the name_args array. A Chef search is then used to find a list of servers that match the query, and eventually a parallel gem is used to parallely SCP the file from a local machine to a list of servers returned by a Chef query. As you can see, we've tried to handle a few error situations, however, there is still a possibility of this plugin throwing away errors as the Net::SCP.upload function can error out at times. Let's see our plugin in action: Case1: The file that is supposed to be uploaded doesn't exist locally. 
We expect the script to error out with an appropriate message: knife scp 'chef_environment:ft' /Users/mayank/test.py /tmp /Users/mayank/test.py doesn't exist on local machine Case2: The /Users/mayank/test folder exists locally: knife scp 'chef_environment:ft' /Users/mayank/test /tmp Copying /Users/mayank/test to ec2-user@host02.ft.sychonet.com:/tmp Copying /Users/mayank/test to ec2-user@host01.ft.sychonet.com:/tmp Case3: A config other than /etc/chef/knife.rb is specified: knife scp -c /Users/mayank/.chef/knife.rb 'chef_environment:ft' /Users/mayank/test /tmp Copying /Users/mayank/test to ec2-user@host02.ft.sychonet.com:/tmp Copying /Users/mayank/test to ec2-user@host01.ft.sychonet.com:/tmp Distributing plugins using gems As you must have noticed, until now we've been creating our plugins under ~/.chef/plugins/knife. Though this is sufficient for plugins that are meant to be used locally, it's just not good enough for distribution to the community. The ideal way of distributing a Knife plugin is by packaging your plugin as a gem and distributing it via a gem repository such as rubygems.org. Even if publishing your gem to a remote gem repository sounds like a far-fetched idea, at least allow people to install your plugin by building a gem locally and installing it via gem install. This is a far better way than people downloading your code from an SCM repository and copying it over to either ~/.chef/plugins/knife or any other folder they've configured for the purpose of searching for custom Knife plugins. By distributing your plugin using gems, you ensure that the plugin is installed in a consistent way, and you can also ensure that all the required libraries are preinstalled before a plugin is ready to be consumed by users. All the details required to create a gem are contained in a file known as a Gemspec, which resides at the root of your project's directory and is typically named <project_name>.gemspec. The Gemspec file consists of the structure, dependencies, and metadata required to build your gem. The following is an example of a .gemspec file: Gem::Specification.new do |s| s.name = 'knife-scp' s.version = '1.0.0' s.date = '2014-10-23' s.summary = 'The knife-scp knife plugin' s.authors = ["maxcoder"] s.email = 'maxcoder@sychonet.com' s.files = ["lib/chef/knife/knife-scp.rb"] s.homepage = "https://github.com/maxc0d3r/knife-plugins" s.add_runtime_dependency "parallel","~> 1.2", ">= 1.2.0" s.add_runtime_dependency "net-scp","~> 1.2", ">= 1.2.0" end The s.files variable contains the list of files that will be deployed by a gem install command. Knife can load the files from gem_path/lib/chef/knife/<file_name>.rb, and hence we've kept the knife-scp.rb script in that location. The s.add_runtime_dependency declaration is used to ensure that the required gems are installed whenever a user tries to install our gem. Once the file is there, we can just run a gem build to build our gem file as follows: knife-scp git:(master) gem build knife-scp.gemspec WARNING: licenses is empty, but is recommended. Use a license abbreviation from: http://opensource.org/licenses/alphabetical WARNING: See http://guides.rubygems.org/specification-reference/ for help Successfully built RubyGem Name: knife-scp Version: 1.0.0 File: knife-scp-1.0.0.gem The gem file is created and now we can just use gem install knife-scp-1.0.0.gem to install our gem. This will also take care of the installation of any dependencies such as the parallel and net-scp gems.
You can find a source code for this plugin at the following location: https://github.com/maxc0d3r/knife-plugins. Once the gem has been installed, the user can run it as mentioned earlier. For the purpose of distribution of this gem, it can either be pushed using a local gem repository, or it can be published to https://rubygems.org/. To publish it to https://rubygems.org/, create an account there. Run the following command to log in using a gem: gem push This will ask for your email address and password. Next, push your gem using the following command: gem push your_gem_name.gem That's it! Now you should be able to access your gem at the following location: http://www.rubygems.org/gems/your_gem_name. As you might have noticed, we've not written any tests so far to check the plugin. It's always a good idea to write test cases before submitting your plugin to the community. It's useful both to the developer and consumers of the code, as both know that the plugin is going to work as expected. Gems support adding test files into the package itself so that tests can be executed when a gem is downloaded. RSpec is a popular choice to test a framework, however, it really doesn't matter which tool you use to test your code. The point is that you need to test and ship. Some popular Knife plugins, built by a community, and their uses, are as follows: knife-elb: This plugin allows the automation of the process of addition and deletion of nodes from Elastic Load Balancers on AWS. knife-inspect: This plugin allows you to see the difference between what's on a Chef server versus what's on a local Chef repository. knife-community: This plugin helps to deploy Chef cookbooks to Chef Supermarket. knife-block: This plugin allows you to configure and manage multiple Knife configuration files against multiple Chef servers. knife-tagbulk: This plugin allows bulk tag operations (creation or deletion) using standard Chef search queries. More information about the plugin can be found at: https://github.com/priestjim/knife-tagbulk. You can find a lot of other useful community-written plugins at: https://docs.chef.io/community_plugin_knife.html. Custom Chef handlers A Chef handler is used to identify different situations that might occur during a chef-client run, and eventually it instructs the chef-client on what it should do to handle these situations. There are three types of handlers in Chef: The exception handler: This is used to identify situations that have caused a chef-client run to fail. This can be used to send out alerts over an email or dashboard. The report handler: This is used to report back when a chef-client run has successfully completed. This can report details about the run, such as the number of resources updated, time taken for a chef-client run to complete, and so on. The start handler: This is used to run events at the beginning of a chef-client run. Writing custom Chef handlers is nothing more than just inheriting your class from Chef::Handler and overriding the report method. Let's say we want to send out an email every time a chef-client run breaks. Chef provides a failed? method to check the status of a chef-client run. 
The following is a very simple piece of code that will help us accomplish this: require 'net/smtp' module CustomHandler class Emailer < Chef::Handler    def send_email(to,opts={})      opts[:server] ||= 'localhost'      opts[:from] ||='maxcoder@sychonet.com'      opts[:subject] ||='Error'      opts[:body] ||= 'There was an error running chef-client'        msg = <<EOF      From: <#{opts[:from]}>      To: #{to}      Subject: #{opts[:subject]}        #{opts[:body]}      EOF        Net::SMTP.start(opts[:server]) do |smtp|        smtp.send_message msg, opts[:from], to      end    end      def report      name = node.name      subject = "Chef run failure on #{name}"      body = [run_status.formatted_exception]      body += ::Array(backtrace).join("n")      if failed?        send_email(          "ops@sychonet.com",          :subject => subject,          :body => body        )      end    end end end If you don't have the required libraries already installed on your machine, you'll need to make use of chef_gem to install them first before you actually make use of this code. With your handler code ready, you can make use of the chef_handler cookbook to install this custom handler. To do so, create a new cookbook, email-handler, and copy the file emailer.rb created earlier to the file's source. Once done, add the following recipe code: include_recipe 'chef_handler'   handler_path = node['chef_handler']['handler_path'] handler = ::File.join handler_path, 'emailer'   cookbook_file "#{handler}.rb" do source "emailer.rb" end   chef_handler "CustomHandler::Emailer" do source handler    action :enable end Now, just include this handler into your base role, or at the start of run_list and during the next chef-client run, if anything breaks, an email will be sent across to ops@sychonet.com. You can configure many different kinds of handlers like the ones that push notifications over to IRC, Twitter, and so on, or you may even write them for scenarios where you don't want to leave a component of a system in a state that is undesirable. For example, say you were in a middle of a chef-client run that adds/deletes collections from Solr. Now, you might not want to leave the Solr setup in a messed-up state if something were to go wrong with the provisioning process. In order to ensure that a system is in the right state, you can write your own custom handlers, which can be used to handle such situations and revert the changes done until now by the chef-client run. Summary In this article, we learned about how custom Knife plugins can be used. We also learned how we can write our own custom Knife plugin and distribute it by packaging it as a gem. Finally, we learned about custom Chef handlers and how they can be used effectively to communicate information and statistics about a chef-client run to users/admins, or handle any issues with a chef-client run. Resources for Article: Further resources on this subject: An Overview of Automation and Advent of Chef [article] Testing Your Recipes and Getting Started with ChefSpec [article] Chef Infrastructure [article]
Subtitles – tracking the video progression

Packt
06 Jul 2015
10 min read
In this article by Roberto Ulloa, author of the book Kivy – Interactive Applications and Games in Python Second Edition, we will learn how to use the progression of a video to display subtitles at the right moment. (For more resources related to this topic, see here.) Let's add subtitles to our application. We will do this in four simple steps: Create a Subtitle widget (subtitle.kv) derived from the Label class that will display the subtitles Place a Subtitle instance (video.kv) on top of the video widget Create a Subtitles class (subtitles.py) that will read and parse a subtitle file Track the Video progression (video.py) to display the corresponding subtitle The Step 1 involves the creation of a new widget in the subtitle.kv file: 1. # File name: subtitle.kv 2. <Subtitle@Label>: 3.     halign: 'center' 4.     font_size: '20px' 5.     size: self.texture_size[0] + 20, self.texture_size[1] + 20 6.     y: 50 7.     bcolor: .1, .1, .1, 0 8.     canvas.before: 9.         Color: 10.            rgba: self.bcolor 11.         Rectangle: 12.             pos: self.pos 13.             size: self.size There are two interesting elements in this code. The first one is the definition of the size property (line 4). We define it as 20 pixels bigger than the texture_size width and height. The texture_size property indicates the size of the text determined by the font size and text, and we use it to adjust the Subtitles widget size to its content. The texture_size is a read-only property because its value is calculated and dependent on other parameters, such as font size and height for text display. This means that we will read from this property but not write on it. The second element is the creation of the bcolor property (line 7) to store a background color, and how the rgba color of the rectangle has been bound to it (line 10). The Label widget (like many other widgets) doesn't have a background color, and creating a rectangle is the usual way to create such features. We add the bcolor property in order to change the color of the rectangle from outside the instance. We cannot directly modify parameters of the vertex instructions; however, we can create properties that control parameters inside the vertex instructions. Let's move on to Step 2 mentioned earlier. We need to add a Subtitle instance to our current Video widget in the video.kv file: 14. # File name: video.kv 15. ... 16. #:set _default_surl      "http://www.ted.com/talks/subtitles/id/97/lang/en" 18. <Video>: 19.     surl: _default_surl 20.     slabel: _slabel 21.     ... 23.     Subtitle: 24.         id: _slabel 25.         x: (root.width - self.width)/2 We added another constant variable called _default_surl (line 16), which contains the link to the URL with the corresponding subtitle TED video file. We set this value to the surl property (line 19), which we just created to store the subtitles' URL. We added the slabel property (line 20), that references the Subtitle instance through its ID (line 24). Then we made sure that the subtitle is centered (line 25). In order to start Step 3 (parse the subtitle file), we need to take a look at the format of the TED subtitles: 26. { 27.     "captions": [{ 28.         "duration":1976, 29.         "content": "When you have 21 minutes to speak,", 30.         "startOfParagraph":true, 31.         "startTime":0, 32.     }, ... TED uses a very simple JSON format (https://en.wikipedia.org/wiki/JSON) with a list of captions. Each caption contains four keys but we will only use duration, content, and startTime. 
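Before wiring this into Kivy, here is a quick standalone sketch of what working with this structure looks like. It uses only the standard json module and a caption hard-coded from the snippet above, so the sample values are illustrative rather than taken from a real talk.
# Standalone sketch (not part of the application code): parse a captions
# dictionary with the standard json module and report when each caption is shown.
import json

raw = '''{"captions": [
    {"duration": 1976, "content": "When you have 21 minutes to speak,",
     "startOfParagraph": true, "startTime": 0}
]}'''

for cap in json.loads(raw)['captions']:
    start = cap['startTime']          # milliseconds into the talk
    end = start + cap['duration']     # also milliseconds
    print("%d-%d ms: %s" % (start, end, cap['content']))
The same startTime/duration arithmetic is what the next method of the Subtitles class below relies on.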
We need to parse this file, and luckily Kivy provides a UrlRequest class (line 34) that will do most of the work for us. Here is the code for subtitles.py that creates the Subtitles class:

33. # File name: subtitles.py
34. from kivy.network.urlrequest import UrlRequest

36. class Subtitles:

38.     def __init__(self, url):
39.         self.subtitles = []
40.         req = UrlRequest(url, self.got_subtitles)

42.     def got_subtitles(self, req, results):
43.         self.subtitles = results['captions']

45.     def next(self, secs):
46.         for sub in self.subtitles:
47.             ms = secs*1000 - 12000
48.             st = 'startTime'
49.             d = 'duration'
50.             if ms >= sub[st] and ms <= sub[st] + sub[d]:
51.                 return sub
52.         return None

The constructor of the Subtitles class receives a URL (line 38) as a parameter. It then makes the request by instantiating the UrlRequest class (line 40). The first parameter of the class instantiation is the URL of the request, and the second is the method that is called when the result of the request is returned (downloaded). Once the request returns the result, the got_subtitles method is called (line 42). The UrlRequest parses the JSON and passes the result as the second parameter of got_subtitles. All we have to do is put the captions in a class attribute, which we called subtitles (line 43).

The next method (line 45) receives the seconds (secs) as a parameter and traverses the loaded captions list in order to find the subtitle that belongs to that time. As soon as it finds one, the method returns it. We subtract 12,000 milliseconds (line 47, ms = secs*1000 - 12000) because the TED videos have an introduction of approximately 12 seconds before the talk starts.

Everything is ready for Step 4, in which we put the pieces together in order to see the subtitles working. Here are the modifications to the header of the video.py file:

53. # File name: video.py
54. ...
55. from kivy.properties import StringProperty
56. ...
57. from kivy.lang import Builder

59. Builder.load_file('subtitle.kv')

61. class Video(KivyVideo):
62.     image = ObjectProperty(None)
63.     surl = StringProperty(None)

We imported StringProperty (line 55) and added the corresponding surl property (line 63). We will use this property at the end of the chapter, when we can switch TED talks from the GUI; for now, it will simply hold the _default_surl value defined in video.kv. We also loaded the subtitle.kv file (line 59). Now, let's analyze the rest of the changes to the video.py file:

64.     ...
65.     def on_source(self, instance, value):
66.         self.color = (0,0,0,0)
67.         self.subs = Subtitles(self.surl)
68.         self.sub = None

70.     def on_position(self, instance, value):
71.         next = self.subs.next(value)
72.         if next is None:
73.             self.clear_subtitle()
74.         else:
75.             sub = self.sub
76.             st = 'startTime'
77.             if sub is None or sub[st] != next[st]:
78.                 self.display_subtitle(next)

80.     def clear_subtitle(self):
81.         if self.slabel.text != "":
82.             self.sub = None
83.             self.slabel.text = ""
84.             self.slabel.bcolor = (0.1, 0.1, 0.1, 0)

86.     def display_subtitle(self, sub):
87.         self.sub = sub
88.         self.slabel.text = sub['content']
89.         self.slabel.bcolor = (0.1, 0.1, 0.1, .8)
90. (...)
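Because on_position fires every second, most calls return the caption that is already on screen, and the startTime comparison is what prevents the label from being reset needlessly. Before walking through these methods, here is a standalone sketch that simulates that per-second loop without Kivy; the sample captions, the next_caption helper, and the 18-second run are illustrative assumptions, not part of the book's code.

# Illustrative stand-in for the on_position logic in video.py; not code from the book.
CAPTIONS = [   # times in milliseconds, shaped like the TED payload
    {"startTime": 0,    "duration": 1976, "content": "First caption"},
    {"startTime": 2976, "duration": 1950, "content": "Second caption"},
]

def next_caption(secs, offset_ms=12000):
    # Same lookup as Subtitles.next: shift by the ~12 second intro, then
    # return the caption whose time window covers the current position.
    ms = secs * 1000 - offset_ms
    for cap in CAPTIONS:
        if cap["startTime"] <= ms <= cap["startTime"] + cap["duration"]:
            return cap
    return None

current = None                      # plays the role of self.sub
for secs in range(18):              # pretend on_position fires once per second
    cap = next_caption(secs)
    if cap is None:
        if current is not None:
            print("t=%2ds  clear_subtitle()" % secs)
            current = None
    elif current is None or current["startTime"] != cap["startTime"]:
        print("t=%2ds  display_subtitle(%r)" % (secs, cap["content"]))
        current = cap

Running it prints only the ticks where the label would actually change, which is exactly the behavior the on_position handler above aims for.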
We introduced a few code lines in the on_source method in order to initialize the subs attribute with a Subtitles instance (line 67), using the surl property, and to initialize the sub attribute (line 68), which holds the currently displayed subtitle, if any.

Now, let's study how we keep track of the progression to display the corresponding subtitle. When the video plays inside the Video widget, the on_position event is triggered every second. Therefore, we implemented the logic to display the subtitles in the on_position method (lines 70 to 78). Each time the on_position method is called (every second), we ask the Subtitles instance for the next subtitle (line 71). If nothing is returned, we clear the subtitle with the clear_subtitle method (line 73). If there is a subtitle for the current second (line 74), then we make sure that either no subtitle is currently displayed or that the returned subtitle is not the one we are already displaying (line 77). If the conditions are met, we display the subtitle using the display_subtitle method (line 78).

Notice that the clear_subtitle (lines 80 to 84) and display_subtitle (lines 86 to 89) methods use the bcolor property in order to hide the subtitle. This is another trick to make a widget invisible without removing it from its parent. Let's take a look at the current result of our videos and subtitles in the following screenshot:

Summary

In this article, we discussed how to control a video and how to associate subtitles with it, synchronizing the subtitles that we receive in a JSON file with the progression of the video.

Resources for Article:

Further resources on this subject: Moving Further with NumPy Modules [article] Learning Selenium Testing Tools with Python [article] Python functions – Avoid repeating code [article]

Developing OpenCart Themes

Packt
06 Jul 2015
13 min read
In this article by Rupak Nepali, author of the book OpenCart Theme and Module Development, we learn about the basic features of OpenCart and its functionalities. Similarly, you will learn about global methods used in OpenCart. (For more resources related to this topic, see here.)

The features of OpenCart

The latest version of OpenCart at the time of writing this article is 2.0.1.1, which boasts a multitude of great features:

A modern and fully responsive design
OCMod (virtual file modification)
A redesigned admin area and frontend
More payment gateways included in the standard download
An event notification system
Custom form fields
An unlimited module instance system to increase functionality

Its pre-existing features include the following:

Open source nature
Templatable, for changing the presentation section

It also supports:

Downloadable products
Unlimited categories, products, and manufacturers
Multi-language
Multi-currency
Product reviews and ratings
PCI compliance
Automatic image resizing
Multiple tax rates
Related products
Unlimited information pages
Shipping weight calculation
A discount coupon system

It is search engine optimized and has backup and restoration tools. It provides features such as printable invoices, sales reports, error logging, multistore support, and multiple marketplace integration tools such as OpenBay Pro, among many more. Now, let's start with some basic general settings that will be helpful in creating our theme and module.

Advantages of using Bootstrap in OpenCart themes

The following are the advantages of using Bootstrap in an OpenCart 2 theme:

Speeds up development and saves time: There are many ready-made components, such as those available at http://getbootstrap.com/components/, which can be used directly in a template: buttons, alert messages, typography, tables, forms, and many JavaScript features. These are responsive by default, so there is no need to spend much time checking each device, which ultimately shortens development time.
Responsiveness: Bootstrap is made for devices of all shapes and sizes. So, by following the conventions provided by Bootstrap, it is easy to make the site and its design responsive.
Easy upgrades: If we create our OpenCart theme with Bootstrap, we can upgrade Bootstrap with little effort. There is no need to invest a lot of time searching for CSS and device-specific fixes.

Things to remember while making an OpenCart theme

Here are a few things to take care of before stepping into HTML and CSS to create an OpenCart theme:

In the header section, you can include a logo section, search section, currency section, language section, and category menu section, as well as a mini-cart section. You can also include links to the home page, wish list page, account pages, shopping cart page, and checkout page. You can even show telephone numbers. These are provided by default.
In the footer section, you can include links to information pages, customer service pages (such as a contact us page), a returns page, a site map page, extra links (such as brands, gift vouchers, affiliates, and specials), and links to pages such as the my account page, order history, wish list, and newsletter.
Include CSS for modules in the style sheet, such as .box, .box .box-heading, .box .box-content, and so on, as clients can add many extra modules to fulfill their requirements. If we do not include these, then the design of any extra module may be hampered.
Include CSS that supports three-column structure as well as right-column-activated-two-column structure, left-column-activated-two-column structure, and one-column structure in such a way that the following happens: if the left columns are deactivated, then the right-column structure is activated. If the right columns are deactivated, then the left-column structure is activated. Finally, if both the columns are deactivated, then one column structure is activated. The following diagram shows the four styles of a theme: Include only the modified files and folders. If the CSS does not find the referenced files, then it takes them from the default folder. Try to create a folder structure like what is shown in this screenshot: Prepare CSS for the buttons, checkout steps, cart pages, table, heading, and carousel. Steps to create new theme based on default theme The following steps help create a new theme based on the default theme: Navigate to the catalog/view/themefolder and create a new folder. Let's name it packttheme. Now navigate to the catalog/view/theme/defaultfolder, copy the image and stylesheetfolder, go to the catalog/view/theme/packtthemefolder, and paste it here. Go to the catalog/view/theme/packtthemefolder and create a new folder, named template. Next, navigate to the catalog/view/theme/default/template folder, copy the commonfolder, go to the catalog/view/theme/packttheme/templatefolder, and paste the commonfolder there. Now the folder structure looks like this: Open catalog/view/theme/packttheme/header.tpl in your favorite text editor, and find and replace the word default with packttheme. After performing the replacement, save header.tpl, and your default base theme is ready for use. Now log in to the admin section and go to Administrator | System | Setting. Edit the store for which you wish to change the theme, and click on the Store tab. Then choose packttheme in the template select box. Next, click on save. Now refresh the frontend, and packttheme will be activated. Now we can make changes to the CSS or the template files, and see the changes. Sometimes, we need theme-specific JavaScript. In such cases, we can create a javascript folder, or another similar folder as per the requirement. Global library methods OpenCart has many predefined methods that can be called anywhere, for example, in controller, model, as well as view template files. You can find system-level library files at system/library/. We will show you how methods can be written and what their functions are. Affiliate (affiliate.php) You can find most of the affiliate code written in the affiliate section. You can check out the files at catalog/controller/affiliate/ and catalog/model/affiliate/. Here is a list of methods we can use for the affiliate library: When an e-mail and password are passed to this method, it logs in to the affiliate section if the username (e-mail) and password match among the affiliates. You can find this code at catalog/controller/affiliate/ login.php on validate method: $this->affiliate->login($email, $password); The affiliate gets logged out. This means the affiliate ID is cleared and its session is destroyed. Also, the affiliate's first name, last name, e-mail, telephone, and fax are given an empty value: $this->affiliate->logout(); Check whether the affiliate is logged in. 
If you like to show a message to the logged-in affiliate only, then you can use this code: $this->affiliate->isLogged(); When we echo the following line, it will show the ID of the active affiliate:: if ($this->affiliate->isLogged()){echo "Welcome to the Affiliate Section";} else {echo "You are not at Affiliate Section";}$this->affiliate->getId(); Changes made in the catalog folder The following changes need to be made in the catalogfolder: Go to catalog/model/shipping and copy weight.php. Paste it in the same folder and rename it to totalcost.php. Open it and find the following line: classModelShippingWeight extends Model { Change the class name to this: classModelShippingTotalcost extends Model { Now, find weightand replace all its occurrences with totalcost. After performing the replacement, find the following lines of code: $totalcost = $this->cart->gettotalcost(); Make the changes as shown here: $totalcost = $this->cart->getSubTotal(); Our requirement is to show the shipping cost as per the total cost purchased, so we have made the change you just saw. Now, find these lines of code: if ((string)$cost != '') { $quote_data['totalcost_' . $result['geo_zone_id']] = array( 'code'=>'totalcost.totalcost_'.$result['geo_zone_id'], 'title'=>$result['name'].'('.$this->language->get('text_ totalcost') . ' ' . $this->totalcost->format($totalcost, $this- >config->get('config_totalcost_class_id')) . ')', 'cost' => $cost, 'tax_class_id' => $this->config->get('totalcost_tax_class_id'), 'text'=> $this->currency->format($this->tax->calculate($cost, $this->config->get('totalcost_tax_class_id'), $this->config- >get('config_tax'))) ); } In them, consider the following lines: 'title'=>$result['name'].'('.$this->language->get('text_totalcost') . ' ' . $this->totalcost->format($totalcost, $this->config->get('config_totalcost_class_id')) . ')', Make this change, as we only need the name: 'title' => $result['name'], Weight has different classes, such as kilogram, gram, pound, and so on, but in our total cost purchased, we did not have any class specified so we removed it. Save the file. Go to catalog/language/english/shipping and copy weight.php. Paste it in the same folder and rename it to totalcost.php. Open it, find Weight, and replace it with Total Cost. With these changes, the module is ready to install. Go to Admin | Extensions | Shipping, and then find Total Cost Based Shipping. Click on Install and grant permission to modify and access to the user. Then, edit to con figure it. In the General tab, change the status to Enabled. Other tabs are loaded as per the geo zones setting. The default geo zones for OpenCart are set as UK Shipping and UK VAT. Now, insert the value for Total Cost versus Rates. If the subtotal reaches 25, then the shipping cost is 10; if it reaches 50, then the shipping cost is 12; and if it reaches 100, then the shipping cost is 15. So, we have inserted 25:10, 50:12, 100:15. If the customer tries to order more than the inserted total cost, then no shipping is activated. In this way, you can now clone the shipping modules and make changes to the logic as per your requirement. Database tables for feedback Let's begin by creating tables in the database. We all know that OpenCart has multistore abilities, supports multiple languages, and can be displayed in multiple layouts, so when creating a database we should take this into consideration. Four tables are created: feedback, feedback_description, feedback_to_layout, and feedback_to_store. 
A feedback table is created for saving feedback-related data, and feedback_description is created for storing multiple-language data for the feedback. A feedback_to_layout table is created for saving the association of layout to feedback, and a feedback_to_store table is created for saving the association of store to feedback. When we install OpenCart, we use oc_ as a database prefix as shown in the following image and query. A database prefix is defined in config.php where mostly you find something such as define('DB_PREFIX', 'oc_'); , or what you entered when installing OpenCart. We create the oc_feedback table. It saves the status, sort order, date added, and feedback ID. Then we create the oc_feedback_description table, where we will save the feedback writer's name, feedback given, and language ID, for multiple languages. Then we create the oc_feedback_to_store table to save the store ID and feedback ID and keep the relationship between feedback and whichever store's feedback is to be shown. Finally, we create the oc_feedback_to_layout table to save the feedback_id and layout_id to show the feedback for the layout you want. This diagram shows the database schema: Creating the template file for the frontend Go to catalog/view/theme/default/template and create a feedback folder. Then create a feedback.tpl file and insert this code: <?php echo $header; ?><div class="container"><ul class="breadcrumb"><?php foreach ($breadcrumbs as $breadcrumb) { ?><li><a href="<?php echo $breadcrumb['href']; ?>"><?php echo$breadcrumb['text']; ?></a></li><?php } ?></ul> The preceding code shows the breadcrumbs array. The following code shows the list of feedback: <div class="row"><?php echo $column_left; ?><?php if ($column_left && $column_right) { ?><?php $class = 'col-sm-6'; ?><?php } elseif ($column_left || $column_right) { ?><?php $class = 'col-sm-9'; ?><?php } else { ?><?php $class = 'col-sm-12'; ?><?php } ?><div id="content" class="<?php echo $class; ?>"><?php echo $content_top; ?><h1><?php echo $heading_title; ?></h1><?php foreach ($feedbacks as $feedback) { ?><div class="col-xs-12"><div class="row"><h4><?php echo $feedback['author']; ?></h4><p><?php echo $feedback['description']; ?></p><hr /></div></div><?php } ?> This shows the list of feedback authors and descriptions, as shown in the following screenshot: To show the pagination in the template file, we have to insert the following lines of code into whichever part we want to show the pagination in: <div class="row"><div class="col-sm-6 text-left"><?php echo $pagination; ?></div><div class="col-sm-6 text-right"><?php echo $results; ?></div></div> It shows the pagination in the template file, and we mostly show the pagination at the bottom, so paste it at the end of the feedback.tpl file: <?php if (!$feedbacks) { ?><p><?php echo $text_empty; ?></p><div class="buttons"><div class="pull-right"><a href="<?php echo $continue; ?>"class="btn btn-primary"><?php echo $button_continue; ?></a></div></div><?php } ?> If there are no feedbacks, then a message similar to There are no feedbacks to showis shown, as per the language file: <?php echo $content_bottom; ?></div><?php echo $column_right; ?></div></div><?php echo $footer; ?> In this way, the template file is completed and so is our feedback management. You can create a module and show it as a module as well. 
To view the list of feedback at the frontend, we have to use a link similar to http://www.example.com/index.php?route=feedback/feedback, and insert the link somewhere in the templates so that visitors will be able to see the feedback list. Like this, you can extend to show the feedback as a module, and create a form at the frontend from which visitors can submit feedback. You can find these at demo code. Try this first and check out the code if you need any help. We have made the code files as descriptive as possible. Summary Using OpenCart themes, you can customize the presentation layer of OpenCart. Likewise, if you can code OpenCart's extensions or modules, then you can also customize the functionality of the OpenCart e-commerce framework and make an e-commerce site easier to administer and look better Resources for Article: Further resources on this subject: OpenCart Themes: Using the jCarousel Plugin [Article] Implementing OpenCart Modules [Article] OpenCart Themes: Styling Effects of jQuery Plugins [Article]

Building the Untangle Game with Canvas and the Drawing API

Packt
06 Jul 2015
25 min read
In this article by Makzan, the author of HTML5 Game Development by Example: Beginner's Guide - Second Edition has discussed the new highlighted feature in HTML5—the canvas element. We can treat it as a dynamic area where we can draw graphics and shapes with scripts. (For more resources related to this topic, see here.) Images in websites have been static for years. There are animated GIFs, but they cannot interact with visitors. Canvas is dynamic. We draw and modify the context in the Canvas, dynamically through the JavaScript drawing API. We can also add interaction to the Canvas and thus make games. In this article, we will focus on using new HTML5 features to create games. Also, we will take a look at a core feature, Canvas, and some basic drawing techniques. We will cover the following topics: Introducing the HTML5 canvas element Drawing a circle in Canvas Drawing lines in the canvas element Interacting with drawn objects in Canvas with mouse events The Untangle puzzle game is a game where players are given circles with some lines connecting them. The lines may intersect the others and the players need to drag the circles so that no line intersects anymore. The following screenshot previews the game that we are going to achieve through this article: You can also try the game at the following URL: http://makzan.net/html5-games/untangle-wip-dragging/ So let's start making our Canvas game from scratch. Drawing a circle in the Canvas Let's start our drawing in the Canvas from the basic shape—circle. Time for action – drawing color circles in the Canvas First, let's set up the new environment for the example. That is, an HTML file that will contain the canvas element, a jQuery library to help us in JavaScript, a JavaScript file containing the actual drawing logic, and a style sheet: index.html js/ js/jquery-2.1.3.js js/untangle.js js/untangle.drawing.js js/untangle.data.js js/untangle.input.js css/ css/untangle.css images/ Put the following HTML code into the index.html file. It is a basic HTML document containing the canvas element: <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Drawing Circles in Canvas</title> <link rel="stylesheet" href="css/untangle.css"> </head> <body> <header>    <h1>Drawing in Canvas</h1> </header> <canvas id="game" width="768" height="400"> This is an interactive game with circles and lines connecting them. </canvas> <script src="js/jquery-2.1.3.min.js"></script> <script src="js/untangle.data.js"></script> <script src="js/untangle.drawing.js"></script> <script src="js/untangle.input.js"></script> <script src="js/untangle.js"></script> </body> </html> Use CSS to set the background color of the Canvas inside untangle.css: canvas { background: grey; } In the untangle.js JavaScript file, we put a jQuery document ready function and draw a color circle inside it: $(document).ready(function(){ var canvas = document.getElementById("game"); var ctx = canvas.getContext("2d"); ctx.fillStyle = "GOLD"; ctx.beginPath(); ctx.arc(100, 100, 50, 0, Math.PI*2, true); ctx.closePath(); ctx.fill(); }); Open the index.html file in a web browser and we will get the following screenshot: What just happened? We have just created a simple Canvas context with circles on it. There are not many settings for the canvas element itself. We set the width and height of the Canvas, the same as we have fixed the dimensions of real drawing paper. 
Also, we assign an ID attribute to the Canvas for easier reference in JavaScript:

<canvas id="game" width="768" height="400">
This is an interactive game with circles and lines connecting them.
</canvas>

Putting in fallback content when the web browser does not support the Canvas

Not every web browser supports the canvas element. The canvas element offers an easy way to show fallback content if it is not supported, and this content also provides meaningful information for screen readers. Anything inside the open and close tags of the canvas element is the fallback content. This content is hidden if the web browser supports the element. Browsers that don't support canvas will display that fallback content instead. It is good practice to provide useful information in the fallback content. For instance, if the canvas tag's purpose is a dynamic picture, we may consider placing an <img> alternative there. Or we may provide links to modern web browsers so that the visitor can upgrade their browser easily.

The Canvas context

When we draw in the Canvas, we actually call the drawing API of the canvas rendering context. You can think of the relationship between the Canvas and the context as the Canvas being the frame and the context the real drawing surface. Currently, we have 2d, webgl, and webgl2 as the context options. In our example, we'll use the 2D drawing API by calling getContext("2d"):

var canvas = document.getElementById("game");
var ctx = canvas.getContext("2d");

Drawing circles and shapes with the Canvas arc function

There is no circle function to draw a circle. The Canvas drawing API provides a function to draw different arcs, including a full circle. The arc function accepts the following arguments:

x: The center point of the arc on the x axis.
y: The center point of the arc on the y axis.
radius: The distance between the center point and the arc's perimeter. When drawing a circle, a larger radius means a larger circle.
startAngle: The starting point, an angle in radians. It defines where on the perimeter to start drawing the arc.
endAngle: The ending point, an angle in radians. The arc is drawn from the starting angle to this end angle.
counter-clockwise: A Boolean indicating whether the arc from startAngle to endAngle is drawn in a counter-clockwise direction. This is an optional argument with the default value false.

Converting degrees to radians

The angle arguments used in the arc function are in radians, not degrees. If you are more familiar with angles in degrees, you need to convert them into radians before passing the value to the arc function. We can convert the angle unit using the following formula:

radians = π / 180 × degrees

Executing the path drawing in the Canvas

When we call the arc function or other path drawing functions, we are not drawing the path immediately in the Canvas. Instead, we are adding it to a list of paths. These paths will not be drawn until we execute a drawing command. There are two drawing execution commands: one command to fill the paths and the other to draw the stroke. We fill the paths by calling the fill function and draw the stroke of the paths by calling the stroke function, which we will use later when drawing lines:

ctx.fill();

Beginning a path for each style

The fill and stroke functions fill and draw the paths in the Canvas but do not clear the list of paths. Take the following code snippet as an example.
After filling our circle with the color red, we add other circles and fill them with green. What happens to the code is both the circles are filled with green, instead of only the new circle being filled by green: var canvas = document.getElementById('game'); var ctx = canvas.getContext('2d'); ctx.fillStyle = "red"; ctx.arc(100, 100, 50, 0, Math.PI*2, true); ctx.fill();   ctx.arc(210, 100, 50, 0, Math.PI*2, true); ctx.fillStyle = "green"; ctx.fill(); This is because, when calling the second fill command, the list of paths in the Canvas contains both circles. Therefore, the fill command fills both circles with green and overrides the red color circle. In order to fix this issue, we want to ensure we call beginPath before drawing a new shape every time. The beginPath function empties the list of paths, so the next time we call the fill and stroke commands, they will only apply to all paths after the last beginPath. Have a go hero We have just discussed a code snippet where we intended to draw two circles: one in red and the other in green. The code ends up drawing both circles in green. How can we add a beginPath command to the code so that it draws one red circle and one green circle correctly? Closing a path The closePath function will draw a straight line from the last point of the latest path to the first point of the path. This is called closing the path. If we are only going to fill the path and are not going to draw the stroke outline, the closePath function does not affect the result. The following screenshot compares the results on a half circle with one calling closePath and the other not calling closePath: Pop quiz Q1. Do we need to use the closePath function on the shape we are drawing if we just want to fill the color and not draw the outline stroke? Yes, we need to use the closePath function. No, it does not matter whether we use the closePath function. Wrapping the circle drawing in a function Drawing a circle is a common function that we will use a lot. It is better to create a function to draw a circle now instead of entering several code lines. Time for action – putting the circle drawing code into a function Let's make a function to draw the circle and then draw some circles in the Canvas. We are going to put code in different files to make the code simpler: Open the untangle.drawing.js file in our code editor and put in the following code: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.drawCircle = function(x, y, radius) { var ctx = untangleGame.ctx; ctx.fillStyle = "GOLD"; ctx.beginPath(); ctx.arc(x, y, radius, 0, Math.PI*2, true); ctx.closePath(); ctx.fill(); }; Open the untangle.data.js file and put the following code into it: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.createRandomCircles = function(width, height) { // randomly draw 5 circles var circlesCount = 5; var circleRadius = 10; for (var i=0;i<circlesCount;i++) {    var x = Math.random()*width;    var y = Math.random()*height;    untangleGame.drawCircle(x, y, circleRadius); } }; Then open the untangle.js file. 
Replace the original code in the JavaScript file with the following code: if (untangleGame === undefined) { var untangleGame = {}; }   // Entry point $(document).ready(function(){ var canvas = document.getElementById("game"); untangleGame.ctx = canvas.getContext("2d");   var width = canvas.width; var height = canvas.height;   untangleGame.createRandomCircles(width, height);   }); Open the HTML file in the web browser to see the result: What just happened? The code of drawing circles is executed after the page is loaded and ready. We used a loop to draw several circles in random places in the Canvas. Dividing code into files We are putting the code into different files. Currently, there are the untangle.js, untangle.drawing.js, and untangle.data.js files. The untangle.js is the entry point of the game. Then we put logic that is related to the context drawing into untangle.drawing.js and logic that's related to data manipulation into the untangle.data.js file. We use the untangleGame object as the global object that's being accessed across all the files. At the beginning of each JavaScript file, we have the following code to create this object if it does not exist: if (untangleGame === undefined) { var untangleGame = {}; } Generating random numbers in JavaScript In game development, we often use random functions. We may want to randomly summon a monster for the player to fight, we may want to randomly drop a reward when the player makes progress, and we may want a random number to be the result of rolling a dice. In this code, we place the circles randomly in the Canvas. To generate a random number in JavaScript, we use the Math.random() function. There is no argument in the random function. It always returns a floating number between 0 and 1. The number is equal or bigger than 0 and smaller than 1. There are two common ways to use the random function. One way is to generate random numbers within a given range. The other way is generating a true or false value. Usage Code Discussion Getting a random integer between A and B Math.floor(Math.random()*B)+A Math.floor() function cuts the decimal point of the given number. Take Math.floor(Math.random()*10)+5 as an example. Math.random() returns a decimal number between 0 to 0.9999…. Math.random()*10 is a decimal number between 0 to 9.9999…. Math.floor(Math.random()*10) is an integer between 0 to 9. Finally, Math.floor(Math.random()*10) + 5 is an integer between 5 to 14. Getting a random Boolean (Math.random() > 0.495) (Math.random() > 0.495) means 50 percent false and 50 percent true. We can further adjust the true/false ratio. (Math.random() > 0.7) means almost 70 percent false and 30 percent true. Saving the circle position When we are developing a DOM-based game, we often put the game objects into DIV elements and accessed them later in code logic. It is a different story in the Canvas-based game development. In order to access our game objects after they are drawn in the Canvas, we need to remember their states ourselves. Let's say now we want to know how many circles are drawn and where they are, and we will need an array to store their position. Time for action – saving the circle position Open the untangle.data.js file in the text editor. Add the following circle object definition code in the JavaScript file: untangleGame.Circle = function(x,y,radius){ this.x = x; this.y = y; this.radius = radius; } Now we need an array to store the circles' positions. 
Add a new array to the untangleGame object: untangleGame.circles = []; While drawing every circle in the Canvas, we save the position of the circle in the circles array. Add the following line before calling the drawCircle function, inside the createRandomCircles function: untangleGame.circles.push(new untangleGame.Circle(x,y,circleRadius)); After the steps, we should have the following code in the untangle.data.js file: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.circles = [];   untangleGame.Circle = function(x,y,radius){ this.x = x; this.y = y; this.radius = radius; };   untangleGame.createRandomCircles = function(width, height) { // randomly draw 5 circles var circlesCount = 5; var circleRadius = 10; for (var i=0;i<circlesCount;i++) {    var x = Math.random()*width;    var y = Math.random()*height;    untangleGame.circles.push(new      untangleGame.Circle(x,y,circleRadius));    untangleGame.drawCircle(x, y, circleRadius); } }; Now we can test the code in the web browser. There is no visual difference between this code and the last example when drawing random circles in the Canvas. This is because we are saving the circles but have not changed any code that affects the appearance. We just make sure it looks the same and there are no new errors. What just happened? We saved the position and radius of each circle. This is because Canvas drawing is an immediate mode. We cannot directly access the object drawn in the Canvas because there is no such information. All lines and shapes are drawn on the Canvas as pixels and we cannot access the lines or shapes as individual objects. Imagine that we are drawing on a real canvas. We cannot just move a house in an oil painting, and in the same way we cannot directly manipulate any drawn items in the canvas element. Defining a basic class definition in JavaScript We can use object-oriented programming in JavaScript. We can define some object structures for our use. The Circle object provides a data structure for us to easily store a collection of x and y positions and the radii. After defining the Circle object, we can create a new Circle instance with an x, y, and radius value using the following code: var circle1 = new Circle(100, 200, 10); For more detailed usage on object-oriented programming in JavaScript, please check out the Mozilla Developer Center at the following link: https://developer.mozilla.org/en/Introduction_to_Object-Oriented_JavaScript Have a go hero We have drawn several circles randomly on the Canvas. They are in the same style and of the same size. How about we randomly draw the size of the circles? And fill the circles with different colors? Try modifying the code and then play with the drawing API. Drawing lines in the Canvas Now we have several circles here, so how about connecting them with lines? Let's draw a straight line between each circle. Time for action – drawing straight lines between each circle Open the index.html file we just used in the circle-drawing example. Change the wording in h1 from drawing circles in Canvas to drawing lines in Canvas. Open the untangle.data.js JavaScript file. We define a Line class to store the information that we need for each line: untangleGame.Line = function(startPoint, endPoint, thickness) { this.startPoint = startPoint; this.endPoint = endPoint; this.thickness = thickness; } Save the file and switch to the untangle.drawing.js file. We need two more variables. 
Add the following lines into the JavaScript file: untangleGame.thinLineThickness = 1; untangleGame.lines = []; We add the following drawLine function into our code, after the existing drawCircle function in the untangle.drawing.js file. untangleGame.drawLine = function(ctx, x1, y1, x2, y2, thickness) { ctx.beginPath(); ctx.moveTo(x1,y1); ctx.lineTo(x2,y2); ctx.lineWidth = thickness; ctx.strokeStyle = "#cfc"; ctx.stroke(); } Then we define a new function that iterates the circle list and draws a line between each pair of circles. Append the following code in the JavaScript file: untangleGame.connectCircles = function() { // connect the circles to each other with lines untangleGame.lines.length = 0; for (var i=0;i< untangleGame.circles.length;i++) {    var startPoint = untangleGame.circles[i];    for(var j=0;j<i;j++) {      var endPoint = untangleGame.circles[j];      untangleGame.drawLine(startPoint.x, startPoint.y,        endPoint.x,      endPoint.y, 1);      untangleGame.lines.push(new untangleGame.Line(startPoint,        endPoint,      untangleGame.thinLineThickness));    } } }; Finally, we open the untangle.js file, and add the following code before the end of the jQuery document ready function, after we have called the untangleGame.createRandomCircles function: untangleGame.connectCircles(); Test the code in the web browser. We should see there are lines connected to each randomly placed circle: What just happened? We have enhanced our code with lines connecting each generated circle. You may find a working example at the following URL: http://makzan.net/html5-games/untangle-wip-connect-lines/ Similar to the way we saved the circle position, we have an array to save every line segment we draw. We declare a line class definition to store some essential information of a line segment. That is, we save the start and end point and the thickness of the line. Introducing the line drawing API There are some drawing APIs for us to draw and style the line stroke: Line drawing functions Discussion moveTo The moveTo function is like holding a pen in our hand and moving it on top of the paper without touching it with the pen. lineTo This function is like putting the pen down on the paper and drawing a straight line to the destination point. lineWidth The lineWidth function sets the thickness of the strokes we draw afterwards. stroke The stroke function is used to execute the drawing. We set up a collection of moveTo, lineTo, or styling functions and finally call the stroke function to execute it on the Canvas. We usually draw lines by using the moveTo and lineTo pairs. Just like in the real world, we move our pen on top of the paper to the starting point of a line and put down the pen to draw a line. Then, keep on drawing another line or move to the other position before drawing. This is exactly the flow in which we draw lines on the Canvas. We just demonstrated how to draw a simple line. We can set different line styles to lines in the Canvas. For more details on line styling, please read the styling guide in W3C at http://www.w3.org/TR/2dcontext/#line-styles and the Mozilla Developer Center at https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors. Using mouse events to interact with objects drawn in the Canvas So far, we have shown that we can draw shapes in the Canvas dynamically based on our logic. There is one part missing in the game development, that is, the input. 
Now, imagine that we can drag the circles around on the Canvas, and the connected lines will follow the circles. In this section, we will add mouse events to the canvas to make our circles draggable. Time for action – dragging the circles in the Canvas Let's continue with our previous code. Open the html5games.untangle.js file. We need a function to clear all the drawings in the Canvas. Add the following function to the end of the untangle.drawing.js file: untangleGame.clear = function() { var ctx = untangleGame.ctx; ctx.clearRect(0,0,ctx.canvas.width,ctx.canvas.height); }; We also need two more functions that draw all known circles and lines. Append the following code to the untangle.drawing.js file: untangleGame.drawAllLines = function(){ // draw all remembered lines for(var i=0;i<untangleGame.lines.length;i++) {    var line = untangleGame.lines[i];    var startPoint = line.startPoint;    var endPoint = line.endPoint;    var thickness = line.thickness;    untangleGame.drawLine(startPoint.x, startPoint.y,      endPoint.x,    endPoint.y, thickness); } };   untangleGame.drawAllCircles = function() { // draw all remembered circles for(var i=0;i<untangleGame.circles.length;i++) {    var circle = untangleGame.circles[i];    untangleGame.drawCircle(circle.x, circle.y, circle.radius); } }; We are done with the untangle.drawing.js file. Let's switch to the untangle.js file. Inside the jQuery document-ready function, before the ending of the function, we add the following code, which creates a game loop to keep drawing the circles and lines: // set up an interval to loop the game loop setInterval(gameloop, 30);   function gameloop() { // clear the Canvas before re-drawing. untangleGame.clear(); untangleGame.drawAllLines(); untangleGame.drawAllCircles(); } Before moving on to the input handling code implementation, let's add the following code to the jQuery document ready function in the untangle.js file, which calls the handleInput function that we will define: untangleGame.handleInput(); It's time to implement our input handling logic. Switch to the untangle.input.js file and add the following code to the file: if (untangleGame === undefined) { var untangleGame = {}; }   untangleGame.handleInput = function(){ // Add Mouse Event Listener to canvas // we find if the mouse down position is on any circle // and set that circle as target dragging circle. 
$("#game").bind("mousedown", function(e) {    var canvasPosition = $(this).offset();    var mouseX = e.pageX - canvasPosition.left;    var mouseY = e.pageY - canvasPosition.top;      for(var i=0;i<untangleGame.circles.length;i++) {      var circleX = untangleGame.circles[i].x;      var circleY = untangleGame.circles[i].y;      var radius = untangleGame.circles[i].radius;      if (Math.pow(mouseX-circleX,2) + Math.pow(        mouseY-circleY,2) < Math.pow(radius,2)) {        untangleGame.targetCircleIndex = i;        break;      }    } });   // we move the target dragging circle // when the mouse is moving $("#game").bind("mousemove", function(e) {    if (untangleGame.targetCircleIndex !== undefined) {      var canvasPosition = $(this).offset();      var mouseX = e.pageX - canvasPosition.left;      var mouseY = e.pageY - canvasPosition.top;      var circle = untangleGame.circles[        untangleGame.targetCircleIndex];      circle.x = mouseX;      circle.y = mouseY;    }    untangleGame.connectCircles(); });   // We clear the dragging circle data when mouse is up $("#game").bind("mouseup", function(e) {    untangleGame.targetCircleIndex = undefined; }); }; Open index.html in a web browser. There should be five circles with lines connecting them. Try dragging the circles. The dragged circle will follow the mouse cursor and the connected lines will follow too. What just happened? We have set up three mouse event listeners. They are the mouse down, move, and up events. We also created the game loop, which updates the Canvas drawing based on the new position of the circles. You can view the example's current progress at: http://makzan.net/html5-games/untangle-wip-dragging-basic/. Detecting mouse events in circles in the Canvas After discussing the difference between DOM-based development and Canvas-based development, we cannot directly listen to the mouse events of any shapes drawn in the Canvas. There is no such thing. We cannot monitor the event in any shapes drawn in the Canvas. We can only get the mouse event of the canvas element and calculate the relative position of the Canvas. Then we change the states of the game objects according to the mouse's position and finally redraw it on the Canvas. How do we know we are clicking on a circle? We can use the point-in-circle formula. This is to check the distance between the center point of the circle and the mouse position. The mouse clicks on the circle when the distance is less than the circle's radius. We use this formula to get the distance between two points: Distance = (x2-x1)2 + (y2-y1)2. The following graph shows that when the distance between the center point and the mouse cursor is smaller than the radius, the cursor is in the circle: The following code we used explains how we can apply distance checking to know whether the mouse cursor is inside the circle in the mouse down event handler: if (Math.pow(mouseX-circleX,2) + Math.pow(mouseY-circleY,2) < Math.pow(radius,2)) { untangleGame.targetCircleIndex = i; break; } Please note that Math.pow is an expensive function that may hurt performance in some scenarios. If performance is a concern, we may use the bounding box collision checking. When we know that the mouse cursor is pressing the circle in the Canvas, we mark it as the targeted circle to be dragged on the mouse move event. During the mouse move event handler, we update the target dragged circle's position to the latest cursor position. When the mouse is up, we clear the target circle's reference. Pop quiz Q1. 
Can we directly access an already drawn shape in the Canvas? Yes No Q2. Which method can we use to check whether a point is inside a circle? The coordinate of the point is smaller than the coordinate of the center of the circle. The distance between the point and the center of the circle is smaller than the circle's radius. The x coordinate of the point is smaller than the circle's radius. The distance between the point and the center of the circle is bigger than the circle's radius. Game loop The game loop is used to redraw the Canvas to present the later game states. If we do not redraw the Canvas after changing the states, say the position of the circles, we will not see it. Clearing the Canvas When we drag the circle, we redraw the Canvas. The problem is the already drawn shapes on the Canvas won't disappear automatically. We will keep adding new paths to the Canvas and finally mess up everything in the Canvas. The following screenshot is what will happen if we keep dragging the circles without clearing the Canvas on every redraw: Since we have saved all game statuses in JavaScript, we can safely clear the entire Canvas and draw the updated lines and circles with the latest game status. To clear the Canvas, we use the clearRect function provided by Canvas drawing API. The clearRect function clears a rectangle area by providing a rectangle clipping region. It accepts the following arguments as the clipping region: context.clearRect(x, y, width, height) Argument Definition x The top left point of the rectangular clipping region, on the x axis. y The top left point of the rectangular clipping region, on the y axis. width The width of the rectangular region. height The height of the rectangular region. The x and y values set the top left position of the region to be cleared. The width and height values define how much area is to be cleared. To clear the entire Canvas, we can provide (0,0) as the top left position and the width and height of the Canvas to the clearRect function. The following code clears all things drawn on the entire Canvas: ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); Pop quiz Q1. Can we clear a portion of the Canvas by using the clearRect function? Yes No Q2. Does the following code clear things on the drawn Canvas? ctx.clearRect(0, 0, ctx.canvas.width, 0); Yes No Summary You learned a lot in this article about drawing shapes and creating interaction with the new HTML5 canvas element and the drawing API. Specifically, you learned to draw circles and lines in the Canvas. We added mouse events and touch dragging interaction with the paths drawn in the Canvas. Finally, we succeeded in developing the Untangle puzzle game. Resources for Article: Further resources on this subject: Improving the Snake Game [article] Playing with Particles [article] Making Money with Your Game [article]

Building Your First BPM Application

Packt
06 Jul 2015
16 min read
In this article by Simone Fiorini and Arun V Gopalakrishnan, authors of the book Mastering jBPM6 , we will build our first BPM application by using the jBPM tool stack. This article will guide you through the following topics: Installing the jBPM tool stack Hacking the default installation configurations Modeling and deploying a jBPM project Embedding jBPM inside a standalone Java project This article gives you the hands-on flexibility of the jBPM tool stack and provides information on hacking the configuration and playing around. (For more resources related to this topic, see here.) Installing the jBPM tool stack A jBPM release comes with an installation zip file, which contains the essentials for the jBPM environment and tools for building a demo runtime for easy hands-on management of the jBPM runtime environment. For downloading jBPM: Go to http://jboss.org/jbpm | Download | Download jBPM 6.2.0.Final | jbpm-6.2.0.Final-installer-full.zip.Use the latest stable version. The content of the book follows the 6.2.0 release. Unzip and extract the installer content and you will find an install.html file that contains the helper documentation for installing a demo jBPM runtime with inbuilt projects. jBPM installation needs JDK 1.6+ to be installed and set as JAVA_HOME and the tooling for installation is done using ANT scripts (ANT version 1.7+). The tooling for installation is basically an ANT script, which is a straightforward method for installation and can be customized easily. To operate the tooling, the ANT script consists of the ANT targets that act as the commands for the tooling. The following figure will make it easy for you to understand the relevant ANT targets available in the script. Each box represents an ANT target and helps you to manage the environment. The basic targets available are for installing, starting, stopping, and cleaning the environment. To run the ANT target, install ANT 1.7+, navigate to the installer folder (by using the shell or the command line tool available in your OS), and run the target by using the following command: ant <targetname> The jBPM installer comes with a default demo environment, which uses a basic H2 database as its persistence storage. The persistence of jBPM is done using Hibernate; this makes it possible for jBPM to support an array of popular databases including the databases in the following list: Hibernate or Hibernate ORM is an object relational mapping framework and is used by jBPM to persist data to relation databases. For more details, see http://hibernate.org/. Databases Supported Details DB2 http://www-01.ibm.com/software/in/data/db2/ Apache Derby https://db.apache.org/derby/ H2 http://www.h2database.com/html/main.html HSQL Database Engine http://hsqldb.org/ MySQL https://www.mysql.com/ Oracle https://www.oracle.com/database/ PostgreSQL http://www.postgresql.org/ Microsoft SQL Server Database http://www.microsoft.com/en-in/server-cloud/products/sql-server/ For installing the demo, use the following command: ant install.demo The install command would install the web tooling and the Eclipse tooling, required for modeling and operating jBPM. ant start.demo This command will start the application server (JBoss) with the web tooling (the Kie workbench and dashboard) deployed in it and the eclipse tooling with all the plugins installed. That's it for the installation! Now, the JBoss application server should be running with the Kie workbench and dashboard builder deployed. 
You can now access the Kie workbench demo environment by using the URL and log in by using the demo admin user called admin and the password admin: http://localhost:8080/jbpm-console. Customizing the installation The demo installation is a sandbox environment, which allows for an easy installation and reduces time between you getting the release and being able to play around by using the stack. Even though it is very necessary, when you get the initial stuff done and get serious about jBPM, you may want to install a jBPM environment, which will be closer to a production environment. We can customize the installer for this purpose. The following sections will guide you through the options available for customization. Changing the database vendor The jBPM demo sandbox environment uses an embedded H2 database as the persistence storage. jBPM provides out of the box support for more widely used databases such as MySQL, PostgreSQL, and so on. Follow these steps to achieve a jBPM installation with these databases: Update the build.properties file available in the root folder of the installation to choose the required database instead of H2. By default, configurations for MySQL and PostgreSQL are available. For the support of other databases, check the hibernate documentation before configuring. Update db/jbpm-persistence-JPA2.xml, and update the hibernate.dialect property with an appropriate Hibernate dialect for our database vendor. Install the corresponding JDBC driver in the application server where we intend to deploy jBPM web tooling. Manually installing the database schema By default, the database schema is created automatically by using the Hibernate autogeneration capabilities. However, if we want to manually install the database schemas, the corresponding DDL scripts are available in dbddl-scripts for all major database vendors. Creating your first jBPM project jBPM provides a very structured way of creating a project. The structure considers application creation and maintenance for large organizations with multiple departments. This structure is recommended for use as it is a clean and secure way of manning the business process artifacts. The following image details the organization of a project in jBPM web tooling (or the Kie workbench). The jBPM workbench comes with an assumption of one business process management suite for an organization. An organization can have multiple organization units, which will internally contain multiple projects and form the root of the project, and as the name implies, it represents a fraction of an organization. This categorization can be visualized in any business organization and is sometimes referred as departments. In an ideal categorization, these organization units will be functionally different and thus, will contain different business processes. Using the workbench, we can create multiple organization units. The next categorization is the repository. A repository is a storage of business model artifacts such as business processes, business rules, and data models. A repository can be mapped to a functional classification within an organization, and multiple repositories can be set up if these repositories run multiple projects; the handling of these project artifacts have to be kept secluded from each other (for example, for security). Within a repository, we can create a project, and within a project, we can define and model business process artifacts. 
This structure and abstraction will be very useful to manage and maintain BPM-based applications. Let us go through the steps in detail now. After installation, you need to log into the Kie workbench. Now, as explained previously, we can create a project. Therefore, the first step is to create an organizational unit: Click through the menu bars, and go to Authoring | Administration | Organizational Units | Manage Organizational Units.This takes you to the Organizational Unit Manager screen; here, we can see a list of organizational units and repositories already present and their associations. Click Add to create an organizational unit, and give the name of the organization unit and the user who is in charge of administering the projects in the organization unit. Now, we can add a repository, navigate through the menus, and go to Authoring | Administration | Repositories | New Repository. Now, provide a name for the repository, choose the organization unit, and create the repository. Creating the repository results in (internally) creating a Git repository. The default location of the Git repository in the workbench is $WORKING_DIRECTORY/.niogit and can be modified by using the following system property: -Dorg.uberfire.nio.git.dir. Now, we can create a project for the organization unit. Go to Authoring | Project Authoring | Project Explorer. Now, choose your organization unit (here, Mastering-jBPM) from the bread crumb of project categorization. Click New Item and choose Project. Now, we can create a project by entering a relevant project name. This gives details like project name and a brief summary of the project, and more importantly, gives the group ID, artifact ID, and version ID for the project. Further, Finish the creation of new project. Those of you who know Maven and its artifact structure, will now have got an insight on how a project is built. Yes! The project created is a Maven module and is deployed as one. Business Process Modeling Therefore, we are ready to create our first business process model by using jBPM. Go to New Item | Business Process: Provide the name for the business process; here, we are trying to create a very primitive process as an example. Now, the workbench will show you the process modeler for modeling the business process. Click the zoom button in the toolbar, if you think you need more real estate for modeling Basically, the workbench can be divided into five parts: Toolbar (on the top): It gives you a large set of tools for visual modeling and saving the model. Object library (on the left side of the canvas): It gives you all the standard BPMN construct stencils, which you can drag and drop to create a model. Workspace (on the center): You get a workspace or canvas on which you can draw the process models. The canvas is very intuitive; if you click on an object, it shows a tool set surrounding it to draw the next one or guide to the next object. Properties (on the right side of the canvas): It gives the property values for all the attributes associated with the business process and each of its constructs. Problems (on the bottom): It gives you the errors on the business process that you are currently modeling. The validations are done on save, and we have provisions to have autosave options. Therefore, we can start modeling out first process. I assume the role of a business analyst who wants to model a simple process of content writing. This is a very simple process with just two tasks, one human task for writing and the other for reviewing. 
We can attach the actor associated with the task by going to the Properties panel and setting the actor. In this example, I have set it as admin, the default user, for the sake of simplicity. Now, we can save the project by using the Save button; it asks for a check-in comment, which provides the comment for this version of the process that we have just saved. Process modeling is a continuous process, and if properly used, the check-in comment can helps us to keep track on the objectives of process updates. Building and deploying the project Even though the project created is minuscular with just a sample project, this is fully functional! Yes, we have completed a business process, which will be very limited in functionality, but with its limited set of functionalities (if any), it can be deployed and operated. Go to Tools | Project Editor, and click Build & Deploy, as shown in the following screenshot: To see the deployment listed, go to Deploy | Deployments to see Deployment Units This shows the effectiveness of jBPM as a rapid application builder using a business process. We can create, model, and deploy a project within a span of minutes. Running your first process Here, we start the operation management using jBPM. Now, we assume the role of an operational employee. We have deployed a process and have to create a process instance and run it. Go to Process Management | Process Definitions. Click New Instance and start the process. This will start a process instance. Go to Process Management | Process Instances to view the process instance details and perform life cycle actions on process instances.The example writing process consists of two human tasks. Upon the start of the process instance, the Write task is assigned to the admin. The assigned task can be managed by going to the task management functionality. Go to Tasks | Tasks List: In Tasks List, we can view the details of the human tasks and perform human task life cycle operations such as assigning, delegating, completing, and aborting a task. Embedding jBPM in a standalone Java application The core engine of jBPM is a set of lightweight libraries, which can be embedded in any Java standalone application. This gives the enterprise architects the flexibility to include jBPM inside their existing application and leverage the functionalities of BPM. Modeling the business process using Eclipse tooling Upon running the installation script, jBPM installs the web tooling as well as the Eclipse tooling. The Eclipse tooling basically consists of the following: jBPM project wizard: It helps you to create a jBPM project easily jBPM runtime: An easy way of choosing the jBPM runtime version; this associates a set of libraries for the particular version of jBPM to the project BPMN Modeler: It is used to model the BPMN process Drools plugin: It gives you the debugging and operation management capabilities within Eclipse Creating a jBPM project using Eclipse The Eclipse web tooling is available in the installer root folder. Start Eclipse and create a new jBPM Maven project: Go to File | New Project | jBPM Project (Maven). Provide the project name and location details; now, the jBPM project wizard will do the following:     Create a default jBPM project for you with the entire initial configuration setup     Attach all runtime libraries     Create a sample project     Set up a unit testing environment for the business process The Eclipse workbench is considerably similar to the web tooling workbench. 
Similar to web tooling, it contains the toolbox, workspace, palette showing the BPMN construct stencils, and the property explorer. We can create a new BPMN process by going to New Project Wizard and selecting jBPM | BPMN2 Process. Give the process file name and click Finish; this will create a default BPMN2 template file. The BPMN2 modeler helps to visually model the process by dragging and dropping BPMN constructs from the palette and connecting them using the tool set. Deploying the process programmatically For deploying and running the business process programmatically, you have to follow these steps: KIE is the abbreviation for Knowledge Is Everything. Creating the knowledge base: Create the Kie Services, which is a hub giving access to the services provided by Kie:Using the Kie Service, create Kie Container, which is the container for the knowledge base: KieContainer kContainer = ks.getKieClasspathContainer(); Create and return the knowledge base with the input name: KieBase kbase = kContainer.getKieBase("kbase"); Creating a runtime manager: The runtime manger manages the runtime build with knowledge sessions and Task Service to create an executable environment for processes and user tasks.Create the JPA entity manager factory used for creating the persistence service, for communicating with the storage layer: EntityManagerFactory emf = Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa"); Create the runtime builder, which is the dsl style helper to create the runtime environment: RuntimeEnvironmentBuilder builder = RuntimeEnvironmentBuilder.Factory.get() .newDefaultBuilder().entityManagerFactory(emf) .knowledgeBase(kbase); Using the runtime environment, create the runtime manager: RuntimeManager RuntimeManager = RuntimeManagerFactory.Factory.get() .newSingletonRuntimeManager(builder.get(), "com.packt:introductory-sample:1.0"); Creating the runtime engine: Using the runtime manager, creates the runtime engine that is fully initialized and ready for operation: RuntimeEngine engine = manager.getRuntimeEngine(null); Starting the process: Using the runtime manager, create a knowledge session and start the process: KieSession ksession = engine.getKieSession(); ksession.startProcess("com.sample.bpmn.hello"); This creates and starts a process instance. From the runtime manager, we can also access the human task service and interact with its API. Go to Window | Show View | Others | Drools | Process Instances to view the created process instances: Writing automated test cases jBPM runtime comes with a test utility, which serves as the unit testing framework for automated test cases. The unit testing framework uses and extends the capabilities of the JUnit testing framework and basically provides the JUnit life cycle methods and the jBPM runtime environment for testing and tearing down the runtime manager after test execution. Helper methods manage the knowledge base and the knowledge session, getting workitem handlers and assertions to assert process instances and various stages. For creating a JUnit test case, create a class extending org.jbpm.test.JbpmJUnitBaseTestCase We can initialize the jBPM runtime by using the previous steps and assert using the helper methods provided by org.jbpm.test.JbpmJUnitBaseTestCase. 
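A minimal test case might look like the following sketch. The helper methods (createRuntimeManager, getRuntimeEngine, and the assertion methods) are provided by JbpmJUnitBaseTestCase; the process file name sample.bpmn and the admin actor are assumptions that you should replace with the names used in your own project, while the process ID com.sample.bpmn.hello is the one used earlier in this article.

import java.util.List;
import org.jbpm.test.JbpmJUnitBaseTestCase;
import org.junit.Test;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.process.ProcessInstance;
import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;

public class SampleProcessTest extends JbpmJUnitBaseTestCase {

  public SampleProcessTest() {
    // set up an in-memory data source and enable session persistence
    super(true, true);
  }

  @Test
  public void testSampleProcess() {
    // load the process definition from the classpath (assumed file name)
    createRuntimeManager("sample.bpmn");
    RuntimeEngine engine = getRuntimeEngine();
    KieSession ksession = engine.getKieSession();

    // start the process and check that it is waiting on the first human task
    ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");
    assertProcessInstanceActive(processInstance.getId(), ksession);

    // work through the first task as the admin user
    TaskService taskService = engine.getTaskService();
    List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner("admin", "en-UK");
    long taskId = tasks.get(0).getId();
    taskService.start(taskId, "admin");
    taskService.complete(taskId, "admin", null);
  }
}

Because the test case sets up its own data source and runtime manager, it runs without a deployed server and can be executed as part of a normal Maven build.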
We can then assert the completion of the process as follows: assertProcessInstanceCompleted(processInstance.getId(), ksession); Change management – updating deployed process definitions We have modeled a business process and deployed it; the application end users will create process instances and fulfill their goals by using the business process. Now, as the organization evolves, we need to change the process; for example, the organization has decided to add one more department, so we have to update the associated business processes. Technically, jBPM cannot update a process definition that has already been deployed, so we need a workaround. jBPM suggests three strategies for process migration. Proceed: We introduce the new process definition and retire the old one. Retiring should be taken care of by the application, so that all process instance requests for the process are redirected to the new process definition. Abort: The existing process instance is aborted, and we can restart it with the updated process definition. We have to be very careful with this approach if the changes are not compatible with the state of the existing process instances; depending on how complex your process definition is, this can lead to unexpected behavior. Transfer: The process instance is migrated to the new process definition; that is, the state of the process instance and its activity instances must be mapped to the new definition. jBPM provides a generic process upgrade API out of the box, which can be used as an example to build on. Summary This article has given you "Hello world" hands-on experience with jBPM. With your jBPM installation ready, we can now dive deep into the details of the functional components of jBPM. Resources for Article: Further resources on this subject: BPMS Components [article] Installing Activiti [article] Participating in a business process (Intermediate) [article]
Getting the current weather forecast

Packt
06 Jul 2015
4 min read
In this article by Marco Schwartz, author of the book Intel Galileo Blueprints, we will configure Forecast.io. Forecast.io is a web API that returns weather forecasts of your exact location, and it updates them by the minute. The API has stunning maps, weather animations, temperature unit options, forecast lines, and more. (For more resources related to this topic, see here.) We will use this API to integrate global weather measurements with our local measurements. To do so, first go to the following link: http://forecast.io/ Then, look for the Forecast API link at the bottom of the page. It should be under the Developers section of the footer: You need to create your own Forecast.io account. You can do so by clicking on the Register button on the top right-hand side portion of the page. It will take you to the registration page where you need to provide an e-mail address and a strong password. Then, you will be required to get an API key, which will be displayed in the Forecast.io interface: Write this down somewhere, as you will need it soon. We will then use a Node.js module to get the forecast. The steps are described in more detail at the following link: https://github.com/mateodelnorte/forecast.io Next, you need to determine the latitude and longitude that you are currently in. Head on to the following link to generate it automatically: http://www.ip2location.com/ Then, we modify the main.js file: var Forecast = require('forecast.io'); var util = require('util'); This will set our API key with the one used/returned before: var options = { APIKey: 'your_api_key' }, forecast = new Forecast(options); Next, we'll define a new API route for the forecast. This is also where you need to put your longitude and latitude: app.get('/api/forecast', function(req, res) { forecast.get('latitude', 'longitude', function (err, result, data) {    if (err) throw err;    console.log('data: ' + util.inspect(data));    res.json(data); }); }); We will also modify the Interface.jade file with a new container: h3.row .col-md-4    div Forecast .col-md-4    div#summary Summary In the JavaScript file, we will refresh the field in the interface. We simply get the summary of the current weather conditions: $.get('/api/forecast', function(json_data) {      $('#summary').html('Summary: ' + json_data.currently.summary);    }); Again, the complete code can be found at the following link: https://github.com/marcoschwartz/galileo-blueprints-book After downloading, you can now build and upload the application to the board. Next comes the fun part, as we will test our creation. Go to the IP address of your board with port 3000: http://192.168.1.103:3000/ You will be able to see the interface as follows: Congratulations! You have been able to retrieve the data from Forecast.io and display it in your browser. If you're not getting the expected result, don't worry. You can go back and check everything. Ensure that you have downloaded and installed the correct software on your board. Also ensure that you correctly entered your API key in the application. Of course, you can modify your own interface as you wish. You can also add more fields from the answer response of Forecast.io. For instance, you can add a Fahrenheit measurement counterpart. Alternately, you can even add forecasts for the next hour and for the next 24 hours. Just ensure that you check the Forecast.io documentation for all the fields that you wish to use. 
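If you want to verify your API key and coordinates independently of the Node.js application, you can call the same endpoint from a small standalone program. The following sketch is plain Java and is only a convenience check; the URL format (https://api.forecast.io/forecast/API_KEY/LATITUDE,LONGITUDE) follows the Forecast.io API documentation at the time of writing, and the key and coordinates are placeholders that you must replace with your own values.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ForecastCheck {
  public static void main(String[] args) throws Exception {
    String apiKey = "your_api_key";   // replace with your Forecast.io API key
    String latitude = "48.8566";      // replace with your latitude
    String longitude = "2.3522";      // replace with your longitude
    URL url = new URL("https://api.forecast.io/forecast/" + apiKey + "/" + latitude + "," + longitude);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");
    // Print the raw JSON answer; the "currently" object in it holds the same
    // summary field that the web interface above displays
    StringBuilder body = new StringBuilder();
    BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
    String line;
    while ((line = reader.readLine()) != null) {
      body.append(line);
    }
    reader.close();
    System.out.println(body);
  }
}

If the request succeeds, the printed JSON will contain the currently.summary value used by the interface, which confirms that the key and coordinates are correct; if it fails, check the current Forecast.io documentation for any changes to the endpoint.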
Summary In this article, we configured Forecast.io, a web API that returns weather forecasts for your exact location and updates them by the minute, and displayed the current conditions in our browser interface. Resources for Article: Further resources on this subject: Controlling DC motors using a shield [article] Getting Started with Intel Galileo [article] Dealing with Interrupts [article]
AWS Global Infrastructure

Packt
06 Jul 2015
5 min read
In this article by Uchit Vyas, who is the author of the Mastering AWS Development book, we will see how to use AWS services in detail. It is important to have a choice of placing applications as close as possible to your users or customers, in order to ensure the lowest possible latency and best user experience while deploying them. AWS offers a choice of nine regions located all over the world (for example, East Coast of the United States, West Coast of the United States, Europe, Tokyo, Singapore, Sydney, and Brazil), 26 redundant Availability Zones, and 53 Amazon CloudFront points of presence. (For more resources related to this topic, see here.) It is very crucial and important to have the option to put applications as close as possible to your customers and end users by ensuring the best possible lowest latency and user-expected features and experience, when you are creating and deploying apps for performance. For this, AWS provides worldwide means to the regions located all over the world. To be specific via name and location, they are as follows: US East (Northern Virginia) region US West (Oregon) region US West (Northern California) region EU (Ireland) Region Asia Pacific (Singapore) region Asia Pacific (Sydney) region Asia Pacific (Tokyo) region South America (Sao Paulo) region US GovCloud In addition to regions, AWS has 25 redundant Availability Zones and 51 Amazon CloudFront points of presence. Apart from these infrastructure-level highlights, they have plenty of managed services that can be the cream of AWS candy bar! The managed services bucket has the following listed services: Security: For every organization, security in each and every aspect is the vital element. For that, AWS has several remarkable security features that distinguishes AWS from other Cloud providers as follows : Certifications and accreditations Identity and Access Management Right now, I am just underlining the very important security features. Global infrastructure: AWS provides a fully-functional, flexible technology infrastructure platform worldwide, with managed services over the globe with certain characteristics, for example: Multiple global locations for deployment Low-latency CDN service Reliable, low-latency DNS service Compute: AWS offers a huge range of various cloud-based core computing services (including variety of compute instances that can be auto scaled to justify the needs of your users and application), a managed elastic load balancing service, and more of fully managed desktop resources on the pathway of cloud. Some of the common characteristics of computer services include the following: Broad choice of resizable compute instances Flexible pricing opportunities Great discounts for always on compute resources Lower hourly rates for elastic workloads Wide-ranging networking configuration selections A widespread choice of operating systems Virtual desktops Save further as you grow with tiered pricing model Storage: AWS offers low cost with high durability and availability with their storage services. With pay-as-you-go pricing model with no commitment, provides more flexibility and agility in services and processes for storage with a highly secured environment. AWS provides storage solutions and services for backup, archive, disaster recovery, and so on. They also support block, file, and object kind of storages with a highly available and flexible infrastructure. 
A few major characteristics for storage are as follows: Cost-effective, high-scale storage varieties Data protection and data management Storage gateway Choice of instance storage options Content delivery and networking: AWS offers a wide set of networking services that enables you to create a logical isolated network that the architect defines and to create a private network connection to the AWS infrastructure, with fault tolerant, scalable, and highly available DNS service. It also provides delivery services for content to your end users, by very low latency and high data transfer speed with AWS CDN service. A few major characteristics for content delivery and networking are as follows: Application and media files delivery Software and large file distribution Private content Databases: AWS offers fully managed, distributed relational and NoSQL type of database services. Moreover, database services are capable of in-memory caching, sharding, and scaling with/without data warehouse solutions. A few major characteristics for databases are as follows: RDS DynamoDB Redshift ElastiCache Application services: AWS provides a variety of managed application services with lower cost such as application streaming and queuing, transcoding, push notification, searching, and so on. A few major characteristics for databases are as follows: AppStream CloudSearch Elastic transcoder SWF, SES, SNS, SQS Deployment and management: AWS offers management of credentials to explore AWS services such as monitor services, application services, and updating stacks of AWS resources. They also have deployment and security services alongside with AWS API activity. A few major characteristics for deployment and management services are as follows: IAM CloudWatch Elastic Beanstalk CloudFormation Data Pipeline OpsWorks CloudHSM Cloud Trail Summary There are a few more additional important services from AWS, such as support, integration with existing infrastructure, Big Data, and ecosystem, which puts it on the top of other infrastructure providers. As a cloud architect, it is necessary to learn about cloud service offerings and their all-important functionalities. Resources for Article: Further resources on this subject: Amazon DynamoDB - Modelling relationships, Error handling [article] Managing Microsoft Cloud [article] Amazon Web Services [article]
Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book, Moodle E-Learning Course Development - Third Edition shows you how to use groups to separate students in a course into teams. You will also learn how to use cohorts to mass enroll students into courses. Groups versus cohorts Groups and cohorts are both collections of students. There are several differences between them. We can sum up these differences in one sentence, that is; cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class. Think of a cohort as a group of students working together through the same academic curriculum. For example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course. Groups are used to manage various activities within a course. Cohort is a system-wide or course category-wide set of students. There is a small amount of overlap between what you can do with a cohort and a group. However, the differences are large enough that you would not want to substitute one for the other. Cohorts In this article, we'll look at how to create and use cohorts. You can perform many operations with cohorts in bulk, affecting many students at once. Creating a cohort To create a cohort, perform the following steps: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed. Enter a Name for the cohort. This is the name that you will see when you work with the cohort. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters; such as letters, numbers, some special characters, and no spaces in the Cohort ID option. For example, Spring_2012_Freshmen. Enter a Description that will help you and other administrators remember the purpose of the cohort. Click on Save changes. Now that the cohort is created, you can begin adding users to this cohort. Adding students to a cohort Students can be added to a cohort manually by searching and selecting them. They can also be added in bulk by uploading a file to Moodle. Manually adding and removing students to a cohort If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, for the cohort to which you want to add students, click on the people icon: The Cohort Assign page is displayed. The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort. Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields. Use the Add and Remove button to move users from one panel to another. Adding students to a cohort in bulk – upload When you upload students to Moodle, you can add them to a cohort. 
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort. This makes it easier to manipulate them later. Here is an example of a cohort. Note that there are 1,204 students enrolled in the cohort: These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users: The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks: username,email,firstname,lastname,cohort1 moodler_1,bill@williamrice.net,Bill,Binky,open-enrollmentmoodlers moodler_2,rose@williamrice.net,Rose,Krial,open-enrollmentmoodlers moodler_3,jeff@williamrice.net,Jeff,Marco,open-enrollmentmoodlers moodler_4,dave@williamrice.net,Dave,Gallo,open-enrollmentmoodlers In this example, we have the minimum required information to create new students. These are as follows: The username The e-mail address The first name The last name We also have the cohort ID (the short name of the cohort) in which we want to place a student. During the upload process, you can see a preview of the file that you will upload: Further down on the Upload users preview page, you can choose the Settings option to handle the upload: Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it, than to search your existing users. This is because when you create a text file, you can use powerful tools—such as spreadsheets and databases—to quickly create this file. If you want to perform this, you will find options to Update existing users under the Upload type field. In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file or omit them from the upload file and assign the city and country to the system while the file is uploaded. This is performed under Default values on the Upload users page: Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle: Prepare a plain file that has, at minimum, the username, email, firstname, lastname, and cohort1 information. If you were to create this in a spreadsheet, it may look similar to the following screenshot: Under Administration | Site Administration | Users | Upload users, select the text file that you will upload. On this page, choose Settings to describe the text file, such as delimiter (separator) and encoding. Click on the Upload users button. You will see the first few rows of the text file displayed. Also, additional settings become available on this page. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users. 
Click on the Upload users button to begin the upload. Cohort sync Using the cohort sync enrolment method, you can enroll and un-enroll large collections of students at once. Using cohort sync involves several steps: Creating a cohort. Enrolling students in the cohort. Enabling the cohort sync enrollment method. Adding the cohort sync enrollment method to a course. You saw the first two steps: how to create a cohort and how to enroll students in the cohort. We will cover the last two steps: enabling the cohort sync method and adding the cohort sync to a course. Enabling the cohort sync enrollment method To enable the cohort sync enrollment method, you will need to log in as an administrator. This cannot be done by someone who has only teacher rights: Select Site administration | Plugins | Enrolments | Manage enrol plugins. Click on the Enable icon located next to Cohort sync. Then, click on the Settings button located next to Cohort sync. On the Settings page, choose the default role for people when you enroll them in a course using Cohort sync. You can change this setting for each course. You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course. The user's grades are purged from Moodle. If you were to read this user to the cohort, all the user's activity in this course will be blank, as if the user was never in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to read this user to the cohort or to the course, this user's course records will be restored. After enabling the cohort sync method, it's time to actually add this method to a course. Adding the cohort sync enrollment method to a course To perform this, you will need to log in as an administrator or a teacher in the course: Log in and enter the course to which you want to add the enrolment method. Select Course administration | Users | Enrolment methods. From the Add method drop-down menu, select Cohort sync. In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs. For Active, select Yes. This will enroll the users. Select the Cohort option. Select the role that the members of the cohort will be given. Click on the Save changes button. All the users in the cohort will be given a selected role in the course. Un-enroll a cohort from a course There are two ways to un-enroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method. Just click on the X button located next to the cohort sync field that you added to the course. However, this will not just remove users from the course, but also delete all their course records. The second method preserves the student records. Once again, go to the course's enrollment methods page located next to the Cohort sync method that you added and click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given. However, the members of the cohort will still be listed as course participants. So, as the members of the cohort do not have a role in the course, they can no longer access this course. However, their grades and activity reports are preserved. 
Differences between cohort sync and enrolling a cohort Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations. If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person. Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, if you want to un-enroll some users from some courses, but not from all courses, you remove them from the cohort. So, these users are removed from all the courses. This is how cohort sync works. Cohort sync is everyone or no one When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, An alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once. You will need to un-enroll them one at a time. If you enroll a cohort all at once, after enrollment, users are independent entities. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish. To enroll a cohort in a course, perform the following steps: Enter the course as an administrator or teacher. Select Administration | Course administration | Users | Enrolled users. Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site. Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message. Now, click on the OK button. You will be taken back to the Enrolled users page. Note that although you can enroll all users in a cohort (all at once), there is no button to un-enroll them all at once. You will need to remove them one at a time from your course. Managing students with groups A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course. For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together. After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group. You may want an activity or resource to be open to just one group of people. You don't want others in the class to be able to use that activity or resource. Course versus activity You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups. 
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course. Also, it will segregate just this activity, or resource between groups. The three group modes For a course or activity, there are several ways to apply groups. Here are the three group modes: No groups: There are no groups for a course or activity. If students have been placed in groups, ignore it. Also, give everyone the same access to the course or activity. Separate groups: If students have been placed in groups, allow them to see other students and only the work of other students from their own group. Students and work from other groups are invisible. Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only. You can use the No groups setting on an activity in your course. Here, you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news. Also, you can use the Separate groups setting in a course. Here, you will run different groups at different times. For each group that runs through the course, it will be like a brand new course. You can use the Visible groups setting in a course. Here, students are part of a large and in-person class; you want them to collaborate in small groups online. Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting, students will never see each other's assignment submissions. Creating a group There are three ways to create groups in a course. You can: Manually create and populate each group Automatically create and populate groups based on the characteristics of students Import groups using a text file We'll cover these methods in the following subsections. Manually creating and populating a group Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps: Select Course administration | Users | Groups. This takes you to the Groups page. Click on the Create group button. The Create group page is displayed. You must enter a Name for the group. This will be the name that teachers and administrators see when they manage a group. The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional. The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to a group. The Enrolment key is a code that you can give to students who self enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group. If you add a picture to this group, then when members are listed (as in a forum), the member will have the group picture shown next to them. Here is an example of a contributor to a forum on http://www.moodle.org with her group memberships: Click on the Save changes button to save the group. 
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group: Note the Search fields. These enable you to search for students that meet a specific criteria. You can search the first name, last name, and e-mail address. The other part of the user's profile information is not available in this search box. Automatically creating and populating a group When you automatically create groups, Moodle creates a number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create a group, use the following steps: Click on the Auto-create groups button. The Auto-create groups page is displayed. In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on. In the Auto-create based on field, you will tell the system to choose either of the following options:     Create a specific number of groups and then fill each group with as many students as needed (Number of groups)     Create as many groups as needed so that each group has a specific number of students (Members per group). In the Group/member count field, you will tell the system to choose either of the following options:     How many groups to create (if you choose the preceding Number of groups option)     How many members to put in each group (if you choose the preceding Members per group option) Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort. The setting for Prevent last small group is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five. Then, it would create another group for the last two members. However, with Prevent last small group selected, it will distribute the remaining two members between the first two groups. Click on the Preview button to preview the results. The preview will not show you the names of the members in groups, but it will show you how many groups and members will be in each group. Importing groups The term importing groups may give you the impression that you will import students into a group. The import groups button does not import students into groups. It imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do this. This needs to be done by a site administrator. If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to the cohort, you will add them to a course and group. 
You perform this by specifying the course and group fields in the upload file, as shown in the following code: username,email,firstname,lastname,course1,group1,course2 moodler_1,bill@williamrice.net,Bill,Binky,history101,odds,science101 moodler_2,rose@williamrice.net,Rose,Krial,history101,even,science101 moodler_3,jeff@williamrice.net,Jeff,Marco,history101,odds,science101 moodler_4,dave@williamrice.net,Dave,Gallo,history101,even,science101 In this example, we have the minimum needed information to create new students. These are as follows: The username The e-mail address The first name The last name We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky, and Jeff Marco are placed in a group called odds. Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group. Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users. Summary Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for. Useful Links: What's New in Moodle 2.0 Moodle for Online Communities Understanding Web-based Applications and Other Multimedia Forms
Style Management in QGIS

Packt
06 Jul 2015
11 min read
In this article by Alexander Bruy and Daria Svidzinska, authors of the book QGIS By Example, you will learn how to work with styles, including saving and loading them, using different styles, and working with the Style Manager. (For more resources related to this topic, see here.) Working with styles In QGIS, a style is a way of cartographic visualization that takes into account a layer’s individual and thematic features. It encompasses basic characteristics of symbology, such as the color and presence of fill, outline parameters, the use of markers, scale-dependent rendering, layer transparency, and interactions with other layers. Style incorporates not only rendering appearance, but also other things, such as labeling settings. A well-chosen style greatly simplifies data perception and readability, so it is important to learn how to work with styles to be able to represent your data the best way. Styling is an important part of data preparation, and QGIS provides many handy features that make this process much more productive and easier. Let’s look at some of them! Saving and loading styles Creating good-looking styles can be a time-consuming task, but the good thing is that once developed styles don’t go to nowhere. You can save them for further use in other projects. When you have finished polishing your style, it is wise to save it. Usually, this is done from the Layer Properties dialog, which can be opened from the layer's context menu. There is a Style button at the bottom of this dialog. It provides access to almost all actions that can be performed with the layer's style, including saving, loading, making the style default, and so on. The style can be saved to the file on the disk (this works for any layer type), or stored in the database (possible only for database-based layers). To save style in the file, perform these steps: Open the Layer Properties dialog. Click on the Style button at the bottom of the Properties dialog and go to the Save Style submenu: Choose one of the available formats. A standard file dialog will open. Navigate to the desired location in your filesystem and save the style. Currently QGIS provides support for the following formats of saving styles: QGIS style file: The style is saved as a .qml file, which is a native QGIS format used to store symbology definition and other layer properties. Style Layer Descriptor (SLD) file: The style is exported to a .sld file. SLD format is widely used in web cartography, for example, by applications such as GeoServer. It is necessary to mention that currently, SLD support in QGIS is a bit limited. Also, you should remember that while you can save any style (or renderer type) in SLD, during import, you will get either a rule-based or a single-symbol renderer. If you work with the spatial database, you may want to save layer styles in the same database, together with layers. Such a feature is very useful in corporate environments, as it allows you to assign multiple styles for a single layer and easily keep the styles in sync. Saving styles in the database currently works only for PostgreSQL and SpatiaLite. To save style in the database, follow these steps: Open the Layer Properties dialog. Click on the Style button at the bottom of the Properties dialog and go to the Save Style submenu. Select the Save in database (format) item, where format can be spatialite or postgres, depending on the database type: The Save style in database dialog opens. 
Enter the style name and (optional) description in the corresponding fields, and click on the OK button to save the style: The saved style can be loaded and applied to the layer. To load a style from the file, use these steps: Open the Layer Properties dialog from the context menu. Click on the Style button at the bottom of the Properties dialog and select the Load Style item. Choose the style file to load. Loading a style from the database is a bit different: Open the Layer Properties dialog from the context menu. Click on the Style button at the bottom of the Properties dialog and go to Load Style | From database. The Load style from database dialog opens. Select the style you want to load and click on the Load Style button. With all of these options, we can easily save styles in the format that meets our requirements and tasks. Copy and paste styles Very often, you need to apply mostly the same style with really minor differences to multiple layers. There are several ways of doing this. First, you can save the style (as described in the previous section) in one of the supported formats, and then apply this saved style to another layer and edit it. But there is simpler way. Starting from QGIS 1.8, you can easily copy and paste styles between layers. To copy a style from one layer to another, perform these steps: In the QGIS layer tree, select the source layer from which you want to copy the style. Right-click to open the context menu. Go to Styles | Copy Style to copy the style of the source layer to the clipboard. Now, in the QGIS layer tree, select the target layer. Right-click to open its context menu. Go to Styles | Paster Style to paste the previously copied style from the clipboard and apply it to the target layer. It is important to note that QGIS allows you to copy, for example, a polygonal style and apply it to a point or line layer. This may lead to incorrect layer rendering, or the layer can even disappear from the map even though it still present in the layer tree. Instead of using the layer context menu to copy and paste styles, you can use the QGIS main menu. Both of these actions (Copy Style and Paste Style) can be found in the Layer menu. The copied style can be pasted in a text editor. Just copy the style using the context menu or QGIS main menu, open the text editor, and press Ctrl + V (or another shortcut used in your system to paste data from the clipboard) to paste the style. Now you can study it. Also, with this feature, you can apply the same style to multiple layers at once. Copy the style as previously described. Then select in the QGIS layer tree all the layers that you want to style (use the Ctrl key and/or the Shift key to select multiple layers). When all the desired layers are selected, go to Layer | Paste Style. Voilà! Now the style is applied to all selected layers. Using multiple styles per layer Sometimes, you may need to show the same data with different styles on the map. The most common and well-known solution to do this is to duplicate the layer in the QGIS layer tree and change the symbology. QGIS 2.8 allows us to achieve the same result in a simpler and more elegant way. Now we can define multiple styles for a single layer and easily switch between them when necessary. This functionality is available from the layer context menu and the layer Properties dialog. By default, all layers have only one style, called default. To create an additional style, use these steps: Select the layer in the layer tree. Right-click to open the context menu. 
Go to Styles | Add. The New style dialog opens. Enter the name of the new style and click on OK: A new style will be added and become active. It is worth mentioning that after adding a new style, the layer's appearance will remain the same, as the new style inherits all the properties of the active style. Adjust the symbology and other settings according to your needs. These changes will affect only the current style; previously created styles will remain unchanged. You can add as many styles as you want. All available styles are listed in the layer context menu, at the bottom of the Styles submenu. The current (or active) style is marked with a checkbox. To switch to another style, just select its name in the menu. If necessary, you can rename the active style (go to Styles | Rename Current) or remove it (go to Styles | Remove Current). Also, the current style can be saved and copied as previously described. Moreover, it is worth mentioning that multiple styles are supported by the QGIS server. The available layer styles are displayed via the GetCapabilities response, and the user can request them in, for example, the GetMap request. This handy feature also works in the Print Composer. Using Style manager Style manager provides extended capabilities for symbology management, allowing the user to save developed symbols; tag and merge them into thematic groups; and edit, delete, import, or export ready-to-use predefined symbology sets. If you created a symbol and want it to be available for further use and management, you should first save it to the Symbol library by following these steps: In the Style section of the layer Properties window, click on the Save button underneath the symbol preview window. In the Symbol name window, type a name for the new symbol and click on OK: After that, the symbol will appear and become available from the symbol presets in the right part of the window. It will also become available for the Style Manager. The Style Manger window can be opened by: Clicking on the Open library button after going to Properties | Style Going to Settings | Style Manager The window consists of three sections: In the left section, you can see a tree view of the available thematic symbology groups (which, by default, don't contain any user-specified groups) In the right part, there are symbols grouped on these tabs: Marker (for point symbols), Line (for line symbols), Fill (for polygon symbols), and Color ramp (for gradient symbols). If you double-click on any symbol on these tabs, the Symbol selector window will be opened, where you can change any available symbol properties (Symbol layer type, Size, Fill and Outline colors, and so on). Similarly, you can use the Edit button to change the appearance of the symbol. The bottom section of the window contains symbol management buttons—Add, Remove, Edit, and Share—for groups and their items. Let's create a thematic group called Hydrology. It will include symbology for hydrological objects, whether they are linear (river, canal, and so on) or polygonal (lake, water area, and so on). For this, perform the following steps: Highlight the groups item in the left section of the Style manager window and click on the very first + button. When the New group appears, type the Hydrology name. Now you need to add some symbols to the newly emerged group. There are two approaches to doing this: Right-click on any symbol (or several by holding down the Ctrl key) you want to add, and select from its contextual shortcut Apply Group | Hydrology. 
Alternatively, highlight the Hydrology group in the groups tree, and from the button below, select Group Symbols, as shown in this screenshot: As a result, checkboxes will appear beside the symbols, and you can toggle them to add the necessary symbol (or symbols) to the group. After you have clicked on the Close button, the symbols will be added to the group. Once the group is created, you can use it for quick access to the necessary symbology by going to Properties | Style | Symbols in group, as shown in the following screenshot: Note that you can combine the symbology for different symbology types within a single group (Marker, Line, Fill, and Color ramp), but when you upload symbols in this group for a specific layer, the symbols will be filtered according to the layer geometry type (for example, Fill for the polygon layer type). Another available option is to create a so-called Smart Group, where you can flexibly combine various conditions to merge symbols into meaningful groups. As an example, the following screenshot shows how we can create a wider Water group that includes symbols that are not only present in Hydrology already, but are also tagged as blue: Use the Share button to Export or Import selected symbols from external sources. Summary This article introduced the different aspects of style management in QGIS: saving and loading styles, copying and pasting styles, and using multiple styles per layer. Resources for Article: Further resources on this subject: How Vector Features are Displayed [article] Geocoding Address-based Data [article] Editing attributes [article]

Packt
06 Jul 2015
8 min read
Save for later

CoreOS – Overview and Installation

Packt
06 Jul 2015
8 min read
In this article by Rimantas Mocevicius, author of the book CoreOS Essentials, we will see that CoreOS is often described as Linux for massive server deployments, but it can also run easily as a single host on bare-metal and cloud servers, and as a virtual machine on your computer. It is designed to run application containers such as docker and rkt, and you will learn about its main features later in this article.

This article is a practical, example-driven guide to help you learn about the essentials of the CoreOS Linux operating system. We assume that you have experience with VirtualBox, Vagrant, Git, Bash shell scripting, and the command line (the terminal on UNIX-like computers), and that you have already installed VirtualBox, Vagrant, and git on your Mac OS X or Linux computer. As for a cloud installation, we will use Google Cloud's Compute Engine instances.

By the end of this article, you will hopefully be familiar with setting up CoreOS on your laptop or desktop, and on the cloud. You will learn how to set up a local computer development machine and a cluster on a local computer and in the cloud. Also, we will cover etcd, systemd, fleet, cluster management, deployment setup, and production clusters.

In this article, you will learn how CoreOS works and how to carry out a basic CoreOS installation on your laptop or desktop with the help of VirtualBox and Vagrant. We will basically cover two topics in this article:

- An overview of CoreOS
- Installing the CoreOS virtual machine

(For more resources related to this topic, see here.)

An overview of CoreOS

CoreOS is a minimal Linux operating system built to run docker and rkt containers (application containers). By default, it is designed to build powerful and easily manageable server clusters. It provides automatic, very reliable, and stable updates to all machines, which takes away a big maintenance headache from sysadmins. And, by running everything in application containers, such a setup allows you to very easily scale servers and applications, replace faulty servers in a fraction of a second, and so on.

How CoreOS works

CoreOS has no package manager, so everything needs to be installed and used via docker containers. Moreover, it is 40 percent more efficient in RAM usage than an average Linux installation, as shown in this diagram:

CoreOS utilizes an active/passive dual-partition scheme to update itself as a single unit, instead of using a package-by-package method. Its root partition is read-only and changes only when an update is applied. If the update is unsuccessful at reboot time, it rolls back to the previous boot partition. The following image shows an OS update being applied to partition B (the passive partition); after a reboot, it becomes the active partition to boot from.

The docker and rkt containers run as applications on CoreOS. Containers provide very good flexibility for application packaging and can start very quickly, in a matter of milliseconds. The following image shows the simplicity of CoreOS: the bottom layer is the Linux OS, the second layer is etcd/fleet together with the docker daemon, and the top layer is the containers running on the server.

By default, CoreOS is designed to work in a clustered form, but it also works very well as a single host. It is very easy to control and run application containers across cluster machines with fleet, and to use the etcd service discovery to connect them, as shown in the following image.
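To make that idea a little more concrete, here is a minimal, hedged sketch of the kind of commands you could run on a CoreOS host once etcd and fleet are up (we will boot exactly such a host later in this article). The key /message and the unit file hello.service are purely illustrative names and are not created by the setup described here:

$ etcdctl set /message "Hello CoreOS"     # store a key/value pair in etcd
$ etcdctl get /message                    # read it back from any machine in the cluster
$ fleetctl list-machines                  # list the machines that fleet knows about
$ fleetctl start hello.service            # schedule an illustrative systemd unit onto the cluster
$ fleetctl list-units                     # see which machine each unit is running on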
CoreOS can be deployed easily on all major cloud providers, for example, Google Cloud, Amazon Web Services, Digital Ocean, and so on. It runs very well on bare-metal servers as well. Moreover, it can be easily installed on a laptop or desktop with Linux, Mac OS X, or Windows via Vagrant, with VirtualBox or VMware virtual machine support.

This short overview should throw some light on what CoreOS is about and what it can do. Let's now move on to the real stuff and install CoreOS onto our laptop or desktop machine.

Installing the CoreOS virtual machine

To use the CoreOS virtual machine, you need to have VirtualBox, Vagrant, and git installed on your computer. In the following examples, we will install CoreOS on our local computer, where it will run as a virtual machine on VirtualBox. Okay, let's get started!

Cloning the coreos-vagrant GitHub project

Let's clone this project and get it running. In your terminal (from now on, we will simply say terminal and use $ to label the terminal prompt), type the following command:

$ git clone https://github.com/coreos/coreos-vagrant/

This will clone the GitHub repository to the coreos-vagrant folder on your computer.

Working with cloud-config

To start even a single host, we need to provide some config parameters in the cloud-config format via the user-data file. In your terminal, type this:

$ cd coreos-vagrant
$ mv user-data.sample user-data

The user-data file should have content like this (the coreos-vagrant GitHub repository is constantly changing, so you might see slightly different content when you clone the repository):

#cloud-config

coreos:
  etcd2:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    # discovery: https://discovery.etcd.io/<token>
    # multi-region and multi-cloud deployments need to use $public_ipv4
    advertise-client-urls: http://$public_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    # listen on both the official ports and the legacy ports
    # legacy ports can be omitted if your application doesn't depend on them
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
    listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
  fleet:
    public-ip: $public_ipv4
  flannel:
    interface: $public_ipv4
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: docker-tcp.socket
      command: start
      enable: true
      content: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        Service=docker.service
        BindIPv6Only=both

        [Install]
        WantedBy=sockets.target

Replace the text between the etcd2: and fleet: lines so that it looks like this:

  etcd2:
    name: core-01
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001
    initial-cluster-token: core-01_etcd
    initial-cluster: core-01=http://$private_ipv4:2380
    initial-cluster-state: new
    advertise-client-urls: http://$public_ipv4:2379,http://$public_ipv4:4001
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
  fleet:

You can also download the latest user-data file from https://github.com/rimusz/coreos-essentials-book/blob/master/Chapter1/user-data.

This should be enough to bootstrap a single-host CoreOS VM with etcd, fleet, and docker running there.
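As a side note, the static initial-cluster bootstrapping shown above is the simplest option for a single host. If you later grow this into a real multi-host cluster, you would typically switch to the discovery-token mechanism hinted at in the commented-out lines of the sample file. Here is a hedged sketch of how such a token is usually obtained; the size parameter states the expected cluster size, and the returned URL is unique to your cluster:

$ curl -w "\n" "https://discovery.etcd.io/new?size=3"
# paste the returned URL into user-data, for example:
#   etcd2:
#     discovery: https://discovery.etcd.io/<your-token>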
Startup and SSH

It's now time to boot our CoreOS VM and log in to its console using ssh. Let's boot our first CoreOS VM host. To do so, using the terminal, type the following command:

$ vagrant up

This will trigger vagrant to download the latest CoreOS alpha channel image (alpha is the default channel set in the config.rb file, and it can easily be changed to beta or stable) and launch the VM instance. You should see something like this as the output in your terminal:

The CoreOS VM has booted up, so let's open the ssh connection to our new VM using the following command:

$ vagrant ssh

It should show something like this:

CoreOS alpha (some version)
core@core-01 ~ $

Perfect! Let's verify that etcd, fleet, and docker are running there. Here are the commands required and the corresponding screenshots of the output:

$ systemctl status etcd2

To check the status of fleet, type this:

$ systemctl status fleet

To check the status of docker, type the following command:

$ docker version

Lovely! Everything looks fine. Thus, we've got our first CoreOS VM up and running in VirtualBox.

Summary

In this article, we saw what CoreOS is and how it is installed. We covered a simple CoreOS installation on a local computer with the help of Vagrant and VirtualBox, and checked whether etcd, fleet, and docker are running there.

Resources for Article:

Further resources on this subject:
- Core Data iOS: Designing a Data Model and Building Data Objects [article]
- Clustering [article]
- Deploying a Play application on CoreOS and Docker [article]