Network and Data Management for Containers

Packt
10 Jun 2015
14 min read
In this article by Neependra Khare, author of the book Docker Cookbook, we look at network and data management for containers.

When the Docker daemon starts, it creates a virtual Ethernet bridge with the name docker0. For example, we will see the following with the ip addr command on the system that runs the Docker daemon. As we can see, docker0 has the IP address 172.17.42.1/16. Docker randomly chooses an address and subnet from a private range defined in RFC 1918 (https://tools.ietf.org/html/rfc1918). Using this bridged interface, containers can communicate with each other and with the host system.

By default, every time Docker starts a container, it creates a pair of virtual interfaces, one end of which is attached to the host system and the other end to the created container. Let's start a container and see what happens: the end that is attached to the eth0 interface of the container gets the 172.17.0.1/16 IP address. We also see a corresponding entry for the other end of the interface on the host system. Now, let's create a few more containers and look at the docker0 bridge with the brctl command, which manages Ethernet bridges: every veth* interface binds to the docker0 bridge, which creates a virtual subnet shared between the host and every Docker container.

Apart from setting up the docker0 bridge, Docker creates iptables NAT rules, such that all containers can talk to the external world by default but not the other way around. Let's look at the NAT rules on the Docker host. If we try to connect to the external world from a container, we will have to go through the Docker bridge that was created by default.

When starting a container, we have a few modes to select its networking:

--net=bridge: This is the default mode that we just saw. So, the preceding command that we used to start the container can be written as follows: $ docker run -i -t --net=bridge centos /bin/bash

--net=host: With this option, Docker does not create a network namespace for the container; instead, the container shares the network stack with the host. So, we can start the container with this option as follows: $ docker run -i -t --net=host centos bash We can then run the ip addr command within the container and see all the network devices attached to the host. An example of using such a configuration is to run the nginx reverse proxy within a container to serve the web applications running on the host.

--net=container:NAME_or_ID: With this option, Docker does not create a new network namespace while starting the container but shares it from another container. Let's start the first container and look for its IP address: $ docker run -i -t --name=centos centos bash Now start another as follows: $ docker run -i -t --net=container:centos ubuntu bash As we can see, both containers share the same IP address. Containers in a Kubernetes (http://kubernetes.io/) pod use this trick to connect with each other.

--net=none: With this option, Docker creates the network namespace inside the container but does not configure networking.

For more information about the different networking options, visit https://docs.docker.com/articles/networking/#how-docker-networks-a-container. From Docker 1.2 onwards, it is also possible to change /etc/hosts, /etc/hostname, and /etc/resolv.conf on a running container. Note, however, that these changes apply only to the running container; if it restarts, we will have to make the changes again.
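As a minimal sketch of how the bridge, the veth pair, and the NAT rules mentioned above can be inspected on the Docker host (interface names, MAC addresses, and the exact subnet will differ on your system):

$ ip addr show docker0              # the bridge, for example inet 172.17.42.1/16
$ brctl show docker0                # lists the veth* interfaces attached to the bridge
$ sudo iptables -t nat -L -n        # shows the MASQUERADE rule Docker adds for outbound traffic

If brctl is not installed, the bridge membership can also be read from /sys/class/net/docker0/brif/.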
So far, we have looked at networking on a single host, but in the real world, we would like to connect multiple hosts and have a container on one host talk to a container on another host. Flannel (https://github.com/coreos/flannel), Weave (https://github.com/weaveworks/weave), Calico (http://www.projectcalico.org/getting-started/docker/), and Socketplane (http://socketplane.io/) are some solutions that offer this functionality. Socketplane joined Docker Inc in March '15. The community and Docker are building a Container Network Model (CNM) with libnetwork (https://github.com/docker/libnetwork), which provides a native Go implementation to connect containers. More information on this development can be found at http://blog.docker.com/2015/04/docker-networking-takes-a-step-in-the-right-direction-2/.

Accessing containers from outside

Once the container is up, we would like to access it from outside. If you have started the container with the --net=host option, then it can be accessed through the Docker host IP. With --net=none, you can attach the network interface from the public end or through other complex settings. Let's see what happens in the default case, where packets are forwarded from the host network interface to the container.

Getting ready

Make sure the Docker daemon is running on the host and you can connect through the Docker client.

How to do it…

Let's start a container with the -P option: $ docker run --expose 80 -i -d -P --name f20 fedora /bin/bash This automatically maps any network port of the container to a random high port of the Docker host between 49000 and 49900. In the PORTS section, we see 0.0.0.0:49159->80/tcp, which is of the following form: <Host Interface>:<Host Port> -> <Container Port>/<protocol> So, in case any request comes on port 49159 from any interface on the Docker host, the request will be forwarded to port 80 of the f20 container.

We can also map a specific port of the container to a specific port of the host using the -p option: $ docker run -i -d -p 5000:22 --name centos2 centos /bin/bash In this case, all requests coming on port 5000 from any interface on the Docker host will be forwarded to port 22 of the centos2 container.

How it works…

With the default configuration, Docker sets up the firewall rule to forward the connection from the host to the container and enables IP forwarding on the Docker host. As we can see from the preceding example, a DNAT rule has been set up to forward all traffic on port 5000 of the host to port 22 of the container.

There's more…

By default, with the -p option, Docker will forward requests coming to any interface on the host. To bind to a specific interface, we can specify something like the following: $ docker run -i -d -p 192.168.1.10:5000:22 --name f20 fedora /bin/bash In this case, only requests coming to port 5000 on the interface that has the IP 192.168.1.10 on the Docker host will be forwarded to port 22 of the f20 container.
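As a quick way to confirm the mappings described above, the port assignments can be read back from Docker and from the NAT table; the port numbers shown in the comments are only examples and will differ on your host:

$ docker port f20 80                 # prints the host side of the mapping, e.g. 0.0.0.0:49159
$ docker port centos2 22             # prints 0.0.0.0:5000
$ sudo iptables -t nat -L DOCKER -n  # shows the DNAT rule forwarding host port 5000 to port 22 of the container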
To map port 22 of the container to a dynamic port of the host, we can run the following command: $ docker run -i -d -p 192.168.1.10::22 --name f20 fedora /bin/bash We can bind multiple ports on containers to ports on hosts as follows: $ docker run -d -i -p 5000:22 -p 8080:80 --name f20 fedora /bin/bash We can look up the public-facing port that is mapped to the container's port as follows: $ docker port f20 80 0.0.0.0:8080 To look at all the network settings of a container, we can run the following command: $ docker inspect -f "{{ .NetworkSettings }}" f20

See also

Networking documentation on the Docker website at https://docs.docker.com/articles/networking/.

Managing data in containers

Any uncommitted data or changes in containers get lost as soon as the containers are deleted. For example, if you have configured the Docker registry in a container and pushed some images, as soon as the registry container is deleted, all of those images will get lost if you have not committed them. Even if you commit, it is not the best practice; we should try to keep containers as light as possible. The following are the two primary ways to manage data with Docker:

Data volumes: From the Docker documentation (https://docs.docker.com/userguide/dockervolumes/), a data volume is a specially-designated directory within one or more containers that bypasses the Union filesystem to provide several useful features for persistent or shared data: Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that data is copied into the new volume. Data volumes can be shared and reused between containers. Changes to a data volume are made directly. Changes to a data volume will not be included when you update an image. Volumes persist until no containers use them.

Data volume containers: As a volume persists until no container uses it, we can use the volume to share persistent data between containers. So, we can create a named volume container and mount the data to another container.

Getting ready

Make sure that the Docker daemon is running on the host and you can connect through the Docker client.

How to do it...

Add a data volume. With the -v option of the docker run command, we add a data volume to the container: $ docker run -t -d -P -v /data --name f20 fedora /bin/bash We can have multiple data volumes within a container, which can be created by adding -v multiple times: $ docker run -t -d -P -v /data -v /logs --name f20 fedora /bin/bash The VOLUME instruction can be used in a Dockerfile to add a data volume as well, by adding something similar to VOLUME ["/data"]. We can use the inspect command to look at the data volume details of a container: $ docker inspect -f "{{ .Config.Volumes }}" f20 $ docker inspect -f "{{ .Volumes }}" f20 If the target directory is not there within the container, it will be created.

Next, we mount a host directory as a data volume. We can also map a host directory to a data volume with the -v option: $ docker run -i -t -v /source_on_host:/destination_on_container fedora /bin/bash Consider the following example: $ docker run -i -t -v /srv:/mnt/code fedora /bin/bash This can be very useful in cases such as testing code in different environments, collecting logs in central locations, and so on.
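Before looking at the read-only variant below, here is a small sketch of how such a host-directory mount can be verified end to end; the paths are only examples, and the bash-4.3# prompt stands in for the shell inside the container:

$ mkdir -p /srv && echo "hello from host" > /srv/hello.txt
$ docker run -i -t -v /srv:/mnt/code fedora /bin/bash
bash-4.3# cat /mnt/code/hello.txt
hello from host
bash-4.3# echo "written in container" > /mnt/code/reply.txt
bash-4.3# exit
$ cat /srv/reply.txt
written in container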
We can also map the host directory in read-only mode as follows: $ docker run -i -t -v /srv:/mnt/code:ro fedora /bin/bash We can also mount the entire root filesystem of the host within the container with the following command: $ docker run -i -t -v /:/host:ro fedora /bin/bash If the directory on the host (/srv) does not exist, then it will be created, given that you have permission to create one. Also, on a Docker host where SELinux is enabled, and if the Docker daemon is configured to use SELinux (docker -d --selinux-enabled), you will see a permission denied error if you try to access files on mounted volumes until you relabel them. To relabel them, use either of the following commands: $ docker run -i -t -v /srv:/mnt/code:z fedora /bin/bash $ docker run -i -t -v /srv:/mnt/code:Z fedora /bin/bash

Now, create a data volume container. While sharing a host directory with a container through a volume, we are binding the container to a given host, which is not good. Also, the storage in this case is not controlled by Docker. So, in cases where we want data to be persisted even if we update the containers, we can get help from data volume containers. Data volume containers are used to create a volume and nothing else; they do not even run. As the created volume is attached to a container (even though it is not running), it cannot be deleted. For example, here's a named data container: $ docker run -d -v /data --name data fedora echo "data volume container" This will just create a volume that will be mapped to a directory managed by Docker. Now, other containers can mount the volume from the data container using the --volumes-from option as follows: $ docker run -d -i -t --volumes-from data --name client1 fedora /bin/bash We can mount a volume from the data volume container to multiple containers: $ docker run -d -i -t --volumes-from data --name client2 fedora /bin/bash We can also use --volumes-from multiple times to get the data volumes from multiple containers. We can also create a chain by mounting volumes from a container that itself mounts volumes from some other container.

How it works…

In the case of a data volume, when the host directory is not shared, Docker creates a directory within /var/lib/docker/ and then shares it with other containers.

There's more…

Volumes are deleted with the -v flag to docker rm, but only if no other container is using them. If some other container is using the volume, then the container will be removed (with docker rm) but the volume will not be removed.

The Docker registry, by default, starts with the dev flavor. In this registry, uploaded images are saved in the /tmp/registry folder within the container we started. We can mount a directory from the host at /tmp/registry within the registry container, so whenever we upload an image, it will be saved on the host that is running the Docker registry. So, to start the container, we run the following command: $ docker run -v /srv:/tmp/registry -p 5000:5000 registry To push an image, we run the following command: $ docker push registry-host:5000/nkhare/f20 After the image is successfully pushed, we can look at the content of the directory that we mounted within the Docker registry.
In our case, we should see a directory structure as follows:

/srv/
├── images
│   ├── 3f2fed40e4b0941403cd928b6b94e0fd236dfc54656c00e456747093d10157ac
│   │   ├── ancestry
│   │   ├── _checksum
│   │   ├── json
│   │   └── layer
│   ├── 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
│   │   ├── ancestry
│   │   ├── _checksum
│   │   ├── json
│   │   └── layer
│   ├── 53263a18c28e1e54a8d7666cb835e9fa6a4b7b17385d46a7afe55bc5a7c1994c
│   │   ├── ancestry
│   │   ├── _checksum
│   │   ├── json
│   │   └── layer
│   └── fd241224e9cf32f33a7332346a4f2ea39c4d5087b76392c1ac5490bf2ec55b68
│       ├── ancestry
│       ├── _checksum
│       ├── json
│       └── layer
├── repositories
│   └── nkhare
│       └── f20
│           ├── _index_images
│           ├── json
│           ├── tag_latest
│           └── taglatest_json

See also

The documentation on the Docker website at https://docs.docker.com/userguide/dockervolumes/, http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/, and http://container42.com/2014/11/03/docker-indepth-volumes/

Linking two or more containers

With containerization, we would like to create our stack by running services in different containers and then linking them together. Container linking creates a parent-child relationship between containers, in which the parent can see selected information about its children. Linking relies on the naming of containers.

Getting ready

Make sure the Docker daemon is running on the host and you can connect through the Docker client.

How to do it…

Create a named container called centos_server: $ docker run -d -i -t --name centos_server centos /bin/bash Now, let's start another container with the name client and link it with the centos_server container using the --link option, which takes the name:alias argument. Then look at the /etc/hosts file: $ docker run -i -t --link centos_server:server --name client fedora /bin/bash

How it works…

In the preceding example, we linked the centos_server container to the client container with the alias server. By linking the two containers, an entry for the first container, which is centos_server in this case, is added to the /etc/hosts file in the client container. Also, an environment variable called SERVER_NAME is set within the client to refer to the server.

There's more…

Now, let's create a mysql container: $ docker run --name mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql Then, let's link it from a client and check the environment variables: $ docker run -i -t --link mysql:mysql-server --name client fedora /bin/bash Also, let's look at the docker ps output. If you look closely, we did not specify the -P or -p options to map ports between the two containers while starting the client container. Depending on the ports exposed by a container, Docker creates an internal secure tunnel in the containers that link to it. To do that, Docker sets environment variables within the linker container. In the preceding case, mysql is the linked container and client is the linker container. As the mysql container exposes port 3306, we see the corresponding environment variables (MYSQL_SERVER_*) within the client container. As linking depends on the name of the container, if you want to reuse a name, you must delete the old container.

See also

Documentation on the Docker website at https://docs.docker.com/userguide/dockerlinks/

Summary

In this article, we learned how to connect a container with another container and with the external world.
We also learned how we can share external storage from other containers and from the host system.
Megaman Clone with Unity Part 2

Travis and Denny
10 Jun 2015
6 min read
Creating a Weapon

You will remember that back in Part 1 we made our simple MegaMan clone. Let's take this project further. So, first off let's create a weapon. Now, we're not going into minute details like creating an actual weapon for our hero, but let's create a bullet, or else it's going to be hard to shoot enemies! Create a sphere called "Bullet," and change all of its scale values to 0.2, attaching a new material to it that will be yellow. Make sure afterwards to make the Bullet a prefab by dragging it into the Project Assets folder. Lastly, remove the sphere collider, and add a circle collider 2D to it.

Now that we have our bullet, let's create a new script called "Weapon", and attach it to our Player object. We'll also create another new script called "Bullet" and attach it to our Bullet prefab. In fact, we didn't do it in the last post, but let's actually make the Player object a prefab too. Now first, open the PlayerMovement script and make a quick edit.

So, we have created a new enum called Direction, and an associated property called playerDirection, which is going to keep track of which way our player is currently facing. We made it a property because nothing but our PlayerMovement script should change our player's direction. This also stops it from appearing in the Inspector, which could otherwise start cluttering things up, since our designers are not really supposed to be touching it. Lastly, in our MovePlayer method, called every update, we add a simple if statement to keep track of which way our player moved last. Note that the direction is not updated when the movement is 0; this is because we want to know the last direction moved, so that if our player is at a standstill, we still shoot in the direction last pressed.

Alright, let's open our Bullet.cs script and quickly make some edits to it! We now have a bullet that will move in a direction based on its own direction state. All we need is a part to manage these interactions. This will be our Weapon script, so let's open that now! Here we have what is essentially a manager of the two together: it waits for the user's input, creates the bullet, and then sets its direction depending on the player's current direction. We use the Fire1 button so that this can be changed later in the Input manager and work on other controllers easily.

Now, we do want to point out something about the connection between the PlayerMovement class and the Bullet class. We have very tight coupling between these classes, which isn't great, but for the continuation of this post, we're going to skip it. If you wish to know more about this, we suggest researching delegates and events, as well as decoupling in Unity. For now though, this will do.

Creating an Enemy

Next let's create an enemy for this bullet to interact with. So let's create a cube, make him red with a material, and then give him the tag "Enemy" as well as the name "Enemy". Take off the box collider, and attach a box collider 2D, as well as a rigidbody 2D. Lastly, make this enemy a prefab. It should look like the following in the Inspector. Now, to make sure our player and bullet don't bump into each other anymore, let's quickly take care of that in the physics settings. First, create three layers, "Bullet", "Player", and "Enemy". Each of these three game objects should be put on its respective layer.
Now, in the Physics Manager under Edit -> Project Settings -> Physics 2D, make sure that the Player and Bullet layers are NOT checked against each other, so they no longer respond to each other. Okay, now let's create an "Enemy" script and attach it to the Enemy game object. In here, we have a very simple script that just contains a health int, and a method to adjust the health of our enemy. Realistically, our player class should have a very similar setup, but for the sake of scope, we can just do this for our enemy. Also, when our enemy takes enough damage, we destroy that game object.

Now we're going to have to change our Bullet script as well so it knows what to do with this class. We've added a couple of things. First, we now have a damage int at the top of our class that is used to measure the damage this bullet will do to our enemy. We could, for example, hold down the shoot button to increase the damage of our bullet; for now, we'll just keep it at a base amount. Next, we add the OnCollisionEnter2D method, which is going to handle what to do if our bullet interacts with an enemy. If the collided-with object is an enemy, our bullet will call the Damage method in the enemy class, and then destroy itself afterwards. In honesty, we could actually put that destroy outside the if statement so that the bullet would destroy itself no matter what it hit.

So now, if we try our game, we have an enemy in the game world who will actually die after two hits. Yes, I know he's not really in any danger right now, but this is a great start for finding hittable targets! If this project continued, the next things added should be a simple enemy movement script, perhaps some weapons for our enemies, and then some simple level design! For more Unity game development tutorials, visit our dedicated Unity page here.

About the Authors

Denny is a Mobile Application Developer at Canadian Tire Development Operations. While working, Denny regularly uses Unity to create in-store experiences, but he also works with other technologies like Famous, Phaser.IO, LibGDX, and CreateJS when creating game-like apps. He also enjoys making non-game mobile apps, but who cares about that, am I right? Travis is a Software Engineer living in the bitter region of Winnipeg, Canada. His work and hobbies include game development with Unity or Phaser.IO, as well as mobile app development. He can enjoy a good video game or two, but only if he knows he'll win!
Integrating Quick and the Benefits of Behavior Driven Development (Part 2)

Benjamin Reed
08 Jun 2015
9 min read
To continue the discussion about sufficient testing on the iOS platform, I think it would be best to break apart a simple application and test it from the ground up. Due to copyright laws, I put together a simple calculator for time. It's called TimeMath, and it is by no means finished. I've included all the visual assets and source code for the project. The goal is that readers can follow along with this tutorial.

The Disclaimer

Before we begin, I must note that this application was made simply for the purpose of demonstrating proper testing. While it has the majority of its main functionality implemented, it jokingly asks you for "all your money" to enable features that do not exist. There aren't any NSLayoutConstraints, so there are no guarantees as to how it looks on any simulator device besides the iPhone 6 Plus. By doing this, there are no warnings at or before compile time. Also, if a reader would like to make some changes in the simulator, they won't have to worry about resetting constraints. That is something that would definitely need to be covered in a separate post.

There are many types of tests in the software world. Unfortunately, there is not enough time to sufficiently cover all of the material. Unit testing is the fundamental building block that was covered in the first part of this series. Automated UI tests are very powerful, because they allow a developer to test direct interactions with the user interface. If every potential interaction is recorded and performed whenever tests are run, UI issues are likely to be caught early and often. However, there are some unfortunate complications. The most popular frameworks in which such tests are composed in Objective-C and Swift use undocumented Apple APIs. If these tests are not removed from the bundle before the app is submitted, Apple will reject it. As for Apple's own solution, it doesn't use their language (it's JavaScript), and it revolves around the Instruments application. For these reasons, I have chosen to focus solely on unit tests.

In the previous post, a comparison was drawn between testing for web development and iOS development. In many cases, web developers utilize automated UI tests. For instance, Capybara is a popular automated UI testing option in the Ruby world. This is definitely an area where the iOS community could improve. However, the provided information should be reusable and adaptable when it comes to any modern iOS project.

The Map of the App

This app is for those moments when a user cannot remember time arithmetic. It is designed to look and behave similarly to the factory-installed calculator app. It allows for simple calculations between hours and minutes. As you can imagine, it is remarkably simple. There are two integer arrays, heap1 and heap2. They deal with four integers each, which make up the combinations of minutes and hours. When an operator is selected and the equivalence button is tapped, these integers are converted into an integer representation of the time in minutes. The operation is performed, and the hour portion of the solution is found by dividing the result by 60. The remainder of this division serves as the minute portion. In order to keep it simple, seconds, milliseconds, and beyond are not supported!

There was a challenge when it came to entering the time. Whenever an operator or equivalence button is tapped, all of the remaining (unassigned) elements in the array need to be set to zero.
In the code, this is done twice: during changes to the labels and during the final computation. This could be a problematic area. If the zeros aren't added appropriately, the entire solution is wrong. This will be extensively tested below.

The Class and Ignore "Rules"

One of the best practices is to have a test class (also referred to as a 'spec') for each class in your code. It is generally good to have at least one test for each method; however, we'll discuss when this can become redundant. There shouldn't be any exceptions to the "rule." Even a thorough implementation of a stack, queue, list, or tree could be tested. After all, these data structures must follow strict definitions in order for ideas to accurately flow from the library's architect to the developer. When it comes to iOS, there can be classes for models, views, and controllers. Generally, all of these should be tested as well.

In TimeMath (excluding the TimeMathTests group), there are three major classes: AppDelegate, ViewController, and PrettyButton. To begin, we are not going to test the AppDelegate. I can honestly say that I have never tested it in my life. There are some apps designed to run in the background, and they need to persist data between states. However, the background behaviors and data persistence tasks often belong in their own classes. Next, we need to test the ViewController class. There is definitely a lot covered in this class, so ViewControllerSpec will become our primary focus. Finally, we will avoid testing the PrettyButton class. The class's only potential for unit tests lies in making sure the appropriate backgroundColor is set based on the style property. However, this would just be an equivalence expectation for the color.

When it comes to testing, I believe the "ignore rule" is an equally important practice. Everything has the potential to be tested. However, good software engineers know how to find adequate ways to cover their classes without testing every possible, redundant combination. In this example, say I wanted to test that every time that could be entered is displayed appropriately. Based on the 10 possible digits and the 4 allocated spaces, I would need to write 10,000 tests! All engineers can reach a consensus that this is not a good practice. Similar to the concept of proof in mathematics, one does not attempt to show every possible combination to prove a conjecture. The same should apply to unit testing. Likewise, one does not "re-invent the wheel" by re-proving every theorem that led to their conjecture. In software engineering terms, you should only test your code. Don't bother testing Apple's APIs or frameworks that you have absolutely no control over. That simply adds work with no noticeable benefit.

Testing the ViewController

While it may be common sense in this scenario, an engineer would have to use this same logic to deduce which tests should be included in the ViewControllerSpec. For instance, each numeric button tapped does not need a separate test (despite each being an individual method). These are simply event handlers, and each one calls the exact same method: addNumericToHeaps(...). Since this is the case, it makes sense to only test that method. The addNumericToHeaps(...) method is responsible for adding the number to either heap1 or heap2, and then it relies on the setLabels(...) method to set the display.
Our tests may look something like this:

it("should add a number to heap1") {
  // 01:00
  vc.tapEvent_1()
  vc.tapEvent_0()
  vc.tapEvent_0()
  expect(vc.lab_focused.text).to(equal("01:00"))
}

it("should add and display a number for heap2 when operator tapped") {
  // 00:01
  vc.tapEvent_1()
  vc.tapEvent_ADD()
  // 02:00
  vc.tapEvent_2()
  vc.tapEvent_0()
  vc.tapEvent_0()
  expect(vc.lab_focused.text).to(equal("02:00"))
}

it("should display heap1's number in tiny label when heap2 active") {
  // 00:01
  vc.tapEvent_1()
  vc.tapEvent_ADD()
  // 02:00
  vc.tapEvent_2()
  vc.tapEvent_0()
  vc.tapEvent_0()
  expect(vc.lab_unfocused.text).to(equal("00:01"))
}

Now, we must test the composition(...) method! This method assumes unclaimed places in the array are zeros, and it converts the time to an integer representation (in minutes). We'll write tests for each case, like so:

it("should properly find composition of heaps by adding a single zero") {
  // numbers entered as 1-2-4
  vc.heap1 = [4,2,1]
  vc.composition(&vc.heap1)
  expect(vc.heap1).to(contain(4))
  expect(vc.heap1).to(contain(2))
  expect(vc.heap1).to(contain(1))
  expect(vc.heap1).to(contain(0))
}

it("should properly find composition of heaps by adding multiple zeros") {
  // numbers entered as 1
  vc.heap1 = [1]
  vc.composition(&vc.heap1)
  expect(vc.heap1[0]).to(equal(1))
  expect(vc.heap1[1]).to(equal(0))
  expect(vc.heap1[2]).to(equal(0))
  expect(vc.heap1[3]).to(equal(0))
}

it("should properly find composition of heaps by converting to minutes") {
  // numbers entered as 1-0-0
  vc.heap1 = [0,0,1]
  let minutes = vc.composition(&vc.heap1)
  expect(minutes).to(equal(60))
}

Conclusion

All in all, I sincerely hope that the iOS community hears the pleas from our web development friends and accepts the vitality of testing. Furthermore, I truly want all readers to see unit testing in a new light. This two-part series is intended to open the doors to the new world of BDD. This world thrives outside of XCTest, and it is one that stresses readability and maintainability. I have become intrigued by the Quick project, and, personally, I have found myself more in line with testing. When it comes to these posts, I've added my own spin (and opinions) in hopes that it will lead you to draft your own. Give Quick a try and see if you feel more comfortable writing your tests. As for the app, it is absolutely free for any hacking, and it would bring me tremendous pleasure to see it finished and released on the App Store. Thanks for reading!

About the author

Benjamin Reed began Computer Science classes at a nearby university in Nashville during his sophomore year in high school. Since then, he has become an advocate for open source. He is now pursuing degrees in Computer Science and Mathematics full time. The Ruby community has intrigued him, and he openly expresses support for the Rails framework. When asked, he believes that studying Rails has led him to some of the best practices and, ultimately, has made him a better programmer. iOS development is one of his hobbies, and he enjoys scouting out new projects on GitHub. On GitHub, he's appropriately named @codeblooded. On Twitter, he's @benreedDev.
What is BI and What are BI Tools for Microsoft Dynamics GP?

Packt
05 Jun 2015
13 min read
In this article by Belinda Allen and Mark Polino, authors of the book Real-world Business Intelligence with Microsoft Dynamics GP, we will define BI and discuss the BI tools for Microsoft Dynamics GP.

What is BI and how do I get it?

So let's define BI with no assumptions. To us, BI is the ability to make decisions based on accurate and timely information. It's neither a report nor a dashboard, nor is it just data. It is the insight obtained from the content and its presentation that gives us the information essential to make sound decisions for our business. It is your insight and experience combined with your data.

Imagine going to a dinner party and seeing a bowl of green beans with almonds on the table. You love green beans; they are your favorite vegetable. However, you have a nut allergy, and you visually see almonds with the green beans, so you know not to eat the beans. If we asked you, "Why aren't you eating the green beans, aren't they your favorite?" you'll respond, "I see almonds and I'm allergic to almonds." It's your knowledge combined with the visual of the dish that provides you with the personal intelligence to stay away from the beans.

When you are trying to determine what BI your business or organization needs, ask yourself what information would make it easier for your firm to attain its goals. Ask what problems you have and what information would help solve them or prevent them from happening again. Focusing on a report or dashboard first will limit your options unnecessarily. As fast as the economy and technology change, one bad or misinformed decision can ruin your company and/or your career.

Out-of-the-box BI tools for Microsoft Dynamics GP

The following are all the tools that work with GP and are considered native or out-of-the-box, as they come with GP or are a part of the Microsoft stack of technology. Some of these tools are included in the price of GP and others must be purchased separately. We won't use all of these tools; no one has that much time! We do want to make sure that you are aware of their existence and understand what each tool does. The tools are in no particular order; this isn't a beauty pageant or a top ten list.

Business Analyzer

This is a metric or Key Performance Indicator (KPI) tool that comes with Microsoft Dynamics GP. This tool is role based and includes over 150 reports out-of-the-box. These reports or metrics can be run from within GP, outside of GP, on a Microsoft Surface via an app from the Microsoft App Store, and even on an iPad with the Business Analyzer app. Business Analyzer uses reports that are built in and can be edited with Microsoft SQL Server Reporting Services. Business Analyzer with SQL Security is secure and easy to use. Reports can be displayed in dashboard, chart, or tabular form, with drill back right into GP data. Management Reporter reports and Excel reports can even be added to the Windows app and iPad app versions. This tool is best used for dashboards where the data can be represented in small charts or graphs along with the Management Reporter reports representing what you want to see.

SQL Server Reporting Services

SQL Server Reporting Services (SSRS) is a report-writing tool based directly on the data coming from Microsoft SQL Server. Reports can be created in tabular, graphical, or free-form format. Reports can be launched in Business Analyzer, on the GP home page, within many GP cards and transaction windows, or in Microsoft SharePoint.
The following screenshot shows six SSRS reports (out-of-the-box with GP) being used to make the home page dashboard (for this user only). This makes the home page in GP a custom experience for each and every user, providing users with the information that is important to them. Like Business Analyzer, SSRS is a great tool for repetitive analysis. It's not as useful for ad hoc analysis.

Microsoft Excel

Although Microsoft Excel is not included with Microsoft Dynamics GP, it is likely to be a tool you already own and like using. Microsoft Dynamics GP includes Excel-based reports that are connected so as to be completely refreshable with new data with just a click. This means no more exporting to Excel and then formatting, only to repeat the task the next time you need the report. Now, you can pull the data into Excel and then format and save it. The next time you need the report, open the Excel file, select Data and Refresh (or even have it auto refresh), with formatting intact and with no extra effort. This allows Excel to be your report writer with data integrated automatically, so there is no need to balance Excel against GP. Quit thinking of Excel as a big calculator, and focus on its analytical power. Excel is incredibly powerful for both repetitive and ad hoc analyses. Excel is really less of a tool and more like a hardware store. We are by no means suggesting that a large number of Excel reports become your BI. Instead, we are suggesting that you use Excel to extract data from the source, using it as a formatting and data delivery tool. The following screenshot is an example of using Excel to format refreshable data into a dashboard, using Excel as a report delivery tool. This report is actually the first report we will build.

Microsoft Excel PowerPivot

PowerPivot is a tool in Excel 2013—Office Professional Plus that enables you to perform data mashups (combining data from two or more sources, such as GP and Microsoft CRM) and data exploration, using billions of rows of data at a super fast speed. We refer to this as pivot tables on steroids! This is accomplished through the use of the data model. The data model is an in-memory data store with row-based compression. The data is stored as a part of the file but is not visible in the Excel spreadsheet, unless you choose to display it (or a part of it). This is how a single Excel file can handle billions of rows, bypassing the normal row and column limitations of the Excel spreadsheet. The data model can also receive data from multiple sources, allowing you to make custom links, and even custom fields, by using Data Analysis Expressions (DAX). It is through PowerPivot's data model that Excel can create a single pivot table/chart on the data from multiple sources. This is a great tool when you want to share data offline with others.

Microsoft Excel Power Query

Power Query is a great new tool that allows you to conform, combine, split, merge, and mash up your data from GP and other sources, including public websites (such as Wikipedia and some government sites) and even some private websites. These queries can then be shared with other users via Microsoft Power BI for Office 365. Think of it as SmartList objects outside of Dynamics GP. Power Query uses an Excel spreadsheet and/or the data model from PowerPivot to hold the data it captures and cleanses. What makes this an exciting tool is its ability to gather all kinds of data from all kinds of sources, combine it, and use it in Excel.
PowerPivot can import data and contain it, while Power Query can import or link to data and use PowerPivot to contain it. Why is this small difference important? Power Query is more flexible in the types of connections it can make. Also, Power Query is the data editing tool of the new Power BI dashboarding tool.

Microsoft Excel Power Map

Power Map is a great way to visually see, and even fly across, your data as a 3D geographical representation. Why is this considered a BI tool? Imagine seeing your sales represented on a map, showing total sales or gross margin. Does one product or product line sell better in the North than in the South? Does it sell better in the fall in the East and in summer in the West? Where should you put your new warehouse in order for it to be close to your customer base? Power Maps are not always the best fit for your BI, but when they do fit, you can sure learn a lot about your data. The following screenshot shows sales leads and their estimated value by salesperson, from Microsoft CRM data.

Microsoft Power BI

Microsoft Power BI is a stand-alone website/dashboard tool that allows you to create your own dashboard, with refreshable links to a large variety of data sources. Included with this tool is a free app that displays the data from the website. One of the most amazing features of Microsoft Power BI is the Q&A feature. If you upload an Excel table into the dashboard, you can ask questions about the data, in natural language, just like you do in Microsoft Bing. The result of your question will be a visual representation of the answer. It could be a graph, chart, table, map, and so on. If this is something you ask a lot, you can simply pin it to the dashboard as a new chart. This tool is amazing for managers, executives, owners, and board members alike. It gives quick insight into timely data, right at their fingertips.

Microsoft Excel Power View

Power View is a tool in Excel 2013—Office Professional Plus that enables you to represent your data in a more graphical way than a traditional pivot table or chart. For example, you can graph your sales for each state on an actual map of the U.S., visually highlighting where your biggest sales come from without reading any numbers. This is a simple dashboard tool that allows for easy filtering. It works very well for those individuals who want to see data in a dashboard format, with the ability to filter either a single part of the dashboard or the entire dashboard. Power View can use data from an Excel spreadsheet, or data in a PowerPivot data model. Again, this allows multiple data sources and large amounts of data to be used on a single dashboard.

GP Analysis Cubes library

This module in GP allows you to organize your data into analysis cubes that allow users to evaluate data or create reports from different angles or in different formats using pivot tables. The same chunk or cube of data can be used to evaluate inventory sold, sales revenue, sales commission, returns of items, profitability of sales, and so on. These cubes are designed specifically to analyze the GP database, using a SQL Server Analysis Services (SSAS) or Online Analytical Processing (OLAP) database. Analysis Cubes create a warehouse of data from GP for the purpose of reporting. Reporting from the cubes rather than from the production data frees the server's resources for GP activity.
Modifying cubes or connecting them to additional data sources will often require expert help.

SmartList and SmartList Designer

SmartList is an ad hoc query tool that comes with Microsoft Dynamics GP. It presents data in a tabular format that can be exported to Excel or Word. Custom SmartList objects can be created using the GP tool SmartList Designer. Although SmartList is an invaluable tool for GP use, for BI purposes we prefer to go directly to Excel. SmartList exports of large datasets are painfully slow; a root canal level of pain. Excel reports are fast and easily reusable. If you create a SmartList and export it to Excel for each use, you will need to reformat the Excel document each and every time. There are ways to avoid reformatting, but even those take a lot of effort.

SmartList Designer allows users to create and build their own SmartList objects. Although there are many great SmartList objects already built in, they do not always fit your needs exactly. A good example of this would be Payables Transactions. All documents display as a positive amount, since it is a list of documents. Many users want to see the document and its effect on the AP account itself (for example, returns as negatives and invoices as positives). If this is how you want your list to be displayed, you can do this through SmartList Designer.

Management Reporter

We often become so focused on using Management Reporter (or FRx) for balance sheets, profit and loss statements, and cash flow statements that we forget the value already built into our financial statement tool. Imagine taking your profit and loss statement (or statement of activities for not-for-profits) and removing the budget column, or splitting MTD into weeks and comparing each week of the month, or even week 1 of this month to week 1 of last month. All this would take is a new column format and "poof"—access to new and amazing trend reporting! The following illustration is a Weekly Material Usage Report from Management Reporter. From this report, managers can see a giant spike in the last week of January that would not be visible in a report that only displayed month-to-date information.

Microsoft SharePoint

Microsoft SharePoint is server software (it does not come with GP), or an online tool in Office 365, that creates a central point for work to be shared and collaboration to occur. This product is what it is named: SharePoint, a point for sharing. Anyway, this is a good spot for BI content to live, for version control and sharing. The Microsoft social networking tool, Yammer, extends SharePoint into an even better collaboration tool. There is a large variety of additional BI tools available through the SharePoint arena, and they are awesome. However, we wanted to stick with tools that you'll likely already own, or can obtain easily and take off running with on your own. So, we'll leave SharePoint off the table.

Microsoft Dynamics GP Workspace for Office 365

In Microsoft SharePoint for Office 365, you can create a custom workspace using Dynamics GP 2013 R2 or higher. Here, you can store your reports, creating a truly collaborative environment. We'll not be getting into this much, but we did want to give it a shout out. It's a great storage place for your reports and an excellent starting spot.

Summary

We reviewed what BI is and why it's important. We've also identified many of the tools that you probably already own and may even have installed.
Deploy a Game to Heroku

Daan van Berkel
05 Jun 2015
13 min read
In this blog post we will deploy a game to Heroku so that everybody can enjoy it. We will deploy the game Tag that we created in the blog post Real-time Communication with SocketIO. Heroku is a platform as a service (PaaS) provider. A PaaS is a "category of cloud computing services that provides a platform allowing customers to develop, run and manage Web applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app."

Pricing

Nothing comes for free. Luckily, Heroku has a pay-as-you-grow pricing philosophy. This means that if you start out using Heroku's servers in moderation, you are free to use them. Only when your app starts to use more resources do you need to pay, or let your application be unavailable for a while.

Follow Along

If you want to follow along deploying the Tag server to Heroku, download follow-along. Unzip it in a suitable location and enter it. Heroku depends on Git for the deployment process. Make sure you download and install Git for your platform, if you have not already done so. With Git installed, enter the Tag-follow-along-deployment directory, initialize a Git repository, add all the files, and make a commit with the following commands:

cd Tag-follow-along-deployment
git init
git add .
git commit -m "following along"

If you want to know what the end result looks like, take a peek.

Signing Up

You need to register with Heroku in order to start using their services. You can sign up with a form where you provide Heroku with your full name, your email address, and optionally a company name. If you have not already signed up, do so now. Make sure to read Heroku's terms of service and their privacy statement.

Heroku Toolbelt

Once you have signed up, you can download the Heroku toolbelt. The toolbelt is Heroku's workhorse. It is a set of command line tools that are responsible for running your application locally, deploying the application to Heroku, starting, stopping and scaling the application, and monitoring the application state. Make sure to download the appropriate toolbelt for your operating system.

Logging In

Having installed the Heroku toolbelt, it is now time to log in with the same credentials we signed up with. Issue the command:

heroku login

and provide it with the correct email and password. The command should respond with Authentication successful.

Create an App

With Heroku successfully authenticating us, we can start creating an app. This is done with the heroku create command. When issued, the Heroku toolbelt will start working to create an app on the Heroku servers, give it a unique, albeit random, name, and add a remote to your Git repository.

heroku create

In my case it responded with:

Creating peaceful-caverns-9339... done, stack is cedar-14
https://peaceful-caverns-9339.herokuapp.com/ | https://git.heroku.com/peaceful-caverns-9339.git
Git remote heroku added

If you run the command, the name and URLs could be different, but the overall response should be similar.

Remote

A remote is a tracked repository, i.e. a repository that is related to the repository you're working on. You can inspect the tracked repositories with the git remote command. It will tell you that it tracks the repository known by the name heroku. If you want to learn more about Git remotes, see the documentation.

Add a Procfile

A Procfile is used by Heroku to configure what processes should run. We are going to create one now.
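Before creating it, it can be worth confirming that the heroku remote from the previous step is in place. A quick check might look like this (a sketch; your repository URLs will differ):

$ git remote -v
heroku  https://git.heroku.com/peaceful-caverns-9339.git (fetch)
heroku  https://git.heroku.com/peaceful-caverns-9339.git (push)

If the heroku remote is missing, it can be added by hand with git remote add heroku followed by your app's Git URL.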
Open you favorite editor and create a file Procfile in the root of the Tag-follow-along-deployment. Write the following content into it: web: node server.js This tells Heroku to start a web process and let it run node server.js. Save it and then add it to the repository with the following commands: git add Procfile git commit -m "Configured a Procfile" Deploy your code The next step is to deploy your code to Heroku. The following command will do this for you. git push heroku master Notice that this is a Git command. What happens is that the code is pushed to Heroku. This triggers Heroku to start taking the necessary steps to start your server. Heroku informs you what it is doing. The run should look similar to the output below: counting objects: 29, done. Delta compression using up to 8 threads. Compressing objects: 100% (26/26), done. Writing objects: 100% (29/29), 285.15 KiB | 0 bytes/s, done. Total 29 (delta 1), reused 0 (delta 0) remote: Compressing source files... done. remote: Building source: remote: remote: -----> Node.js app detected remote: remote: -----> Reading application state remote: package.json... remote: build directory... remote: cache directory... remote: environment variables... remote: remote: Node engine: unspecified remote: Npm engine: unspecified remote: Start mechanism: Procfile remote: node_modules source: package.json remote: node_modules cached: false remote: remote: NPM_CONFIG_PRODUCTION=true remote: NODE_MODULES_CACHE=true remote: remote: -----> Installing binaries remote: Resolving node version (latest stable) via semver.io... remote: Downloading and installing node 0.12.2... remote: Using default npm version: 2.7.4 remote: remote: -----> Building dependencies remote: Installing node modules remote: remote: > ws@0.5.0 install /tmp/build_bce51a5d2c066ee14a706cebbc28bd3e/node_modules/socket.io/node_modules/engine.io/node_modules/ws remote: > (node-gyp rebuild 2> builderror.log) || (exit 0) remote: remote: make: Entering directory `/tmp/build_bce51a5d2c066ee14a706cebbc28bd3e/node_modules/socket.io/node_modules/engine.io/node_modules/ws/build' remote: CXX(target) Release/obj.target/bufferutil/src/bufferutil.o remote: SOLINK_MODULE(target) Release/obj.target/bufferutil.node remote: SOLINK_MODULE(target) Release/obj.target/bufferutil.node: Finished remote: COPY Release/bufferutil.node remote: CXX(target) Release/obj.target/validation/src/validation.o remote: SOLINK_MODULE(target) Release/obj.target/validation.node remote: SOLINK_MODULE(target) Release/obj.target/validation.node: Finished remote: COPY Release/validation.node remote: make: Leaving directory `/tmp/build_bce51a5d2c066ee14a706cebbc28bd3e/node_modules/socket.io/node_modules/engine.io/node_modules/ws/build' remote: remote: > ws@0.4.31 install /tmp/build_bce51a5d2c066ee14a706cebbc28bd3e/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/ws remote: > (node-gyp rebuild 2> builderror.log) || (exit 0) remote: remote: make: Entering directory `/tmp/build_bce51a5d2c066ee14a706cebbc28bd3e/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/ws/build' remote: CXX(target) Release/obj.target/bufferutil/src/bufferutil.o remote: make: Leaving directory `/tmp/build_bce51a5d2c066ee14a706cebbc28bd3e/node_modules/socket.io/node_modules/socket.io-client/node_modules/engine.io-client/node_modules/ws/build' remote: express@4.12.3 node_modules/express remote: ├── merge-descriptors@1.0.0 remote: ├── utils-merge@1.0.0 remote: ├── 
cookie-signature@1.0.6 remote: ├── methods@1.1.1 remote: ├── cookie@0.1.2 remote: ├── fresh@0.2.4 remote: ├── escape-html@1.0.1 remote: ├── range-parser@1.0.2 remote: ├── content-type@1.0.1 remote: ├── finalhandler@0.3.4 remote: ├── vary@1.0.0 remote: ├── parseurl@1.3.0 remote: ├── serve-static@1.9.2 remote: ├── content-disposition@0.5.0 remote: ├── path-to-regexp@0.1.3 remote: ├── depd@1.0.1 remote: ├── on-finished@2.2.1 (ee-first@1.1.0) remote: ├── qs@2.4.1 remote: ├── debug@2.1.3 (ms@0.7.0) remote: ├── etag@1.5.1 (crc@3.2.1) remote: ├── send@0.12.2 (destroy@1.0.3, ms@0.7.0, mime@1.3.4) remote: ├── proxy-addr@1.0.8 (forwarded@0.1.0, ipaddr.js@1.0.1) remote: ├── accepts@1.2.7 (negotiator@0.5.3, mime-types@2.0.11) remote: └── type-is@1.6.2 (media-typer@0.3.0, mime-types@2.0.11) remote: remote: nodemon@1.3.7 node_modules/nodemon remote: ├── minimatch@0.3.0 (sigmund@1.0.0, lru-cache@2.6.2) remote: ├── touch@0.0.3 (nopt@1.0.10) remote: ├── ps-tree@0.0.3 (event-stream@0.5.3) remote: └── update-notifier@0.3.2 (is-npm@1.0.0, string-length@1.0.0, chalk@1.0.0, semver-diff@2.0.0, latest-version@1.0.0, configstore@0.3.2) remote: remote: socket.io@1.3.5 node_modules/socket.io remote: ├── debug@2.1.0 (ms@0.6.2) remote: ├── has-binary-data@0.1.3 (isarray@0.0.1) remote: ├── socket.io-adapter@0.3.1 (object-keys@1.0.1, debug@1.0.2, socket.io-parser@2.2.2) remote: ├── socket.io-parser@2.2.4 (isarray@0.0.1, debug@0.7.4, component-emitter@1.1.2, benchmark@1.0.0, json3@3.2.6) remote: ├── engine.io@1.5.1 (base64id@0.1.0, debug@1.0.3, engine.io-parser@1.2.1, ws@0.5.0) remote: └── socket.io-client@1.3.5 (to-array@0.1.3, indexof@0.0.1, debug@0.7.4, component-bind@1.0.0, backo2@1.0.2, object-component@0.0.3, component-emitter@1.1.2, has-binary@0.1.6, parseuri@0.0.2, engine.io-client@1.5.1) remote: remote: -----> Checking startup method remote: Found Procfile remote: remote: -----> Finalizing build remote: Creating runtime environment remote: Exporting binary paths remote: Cleaning npm artifacts remote: Cleaning previous cache remote: Caching results for future builds remote: remote: -----> Build succeeded! remote: remote: Tag@1.0.0 /tmp/build_bce51a5d2c066ee14a706cebbc28bd3e remote: ├── express@4.12.3 remote: ├── nodemon@1.3.7 remote: └── socket.io@1.3.5 remote: remote: -----> Discovering process types remote: Procfile declares types -> web remote: remote: -----> Compressing... done, 12.3MB remote: -----> Launching... done, v3 remote: https://peaceful-caverns-9339.herokuapp.com/ deployed to Heroku remote: remote: Verifying deploy... done. To https://git.heroku.com/peaceful-caverns-9339.git * [new branch] master -> master Scale the App The application is deployed, but now we need to make sure that Heroku assign resources to it. heroku ps:scale web=1 The above command instructs Heroku to scale your app so that one instance of it is running. You should now be able to open a browser and go to the URL Heroku mentioned at the end of the deployment step. In my case that would be https://peaceful-caverns-9339.herokuapp.com/. There is a convenience method that helps you in that regard. The heroku open command will open the registered URL in your default browser. Inspect the Logs If you followed along and open the application you would know that at this point you would have been greeted by an application error: So what did go wrong? Let's find out by inspecting the logs. Issue the following command: heroku logs To see the available logs. 
Below is an excerpt:

2015-05-11T14:29:37.193792+00:00 heroku[api]: Enable Logplex by daan.v.berkel.1980+trash@gmail.com
2015-05-11T14:29:37.193792+00:00 heroku[api]: Release v2 created by daan.v.berkel.1980+trash@gmail.com
2015-05-12T08:47:13.899422+00:00 heroku[api]: Deploy ee12c7d by daan.v.berkel.1980+trash@gmail.com
2015-05-12T08:47:13.848408+00:00 heroku[api]: Scale to web=1 by daan.v.berkel.1980+trash@gmail.com
2015-05-12T08:47:13.899422+00:00 heroku[api]: Release v3 created by daan.v.berkel.1980+trash@gmail.com
2015-05-12T08:47:16.548876+00:00 heroku[web.1]: Starting process with command `node server.js`
2015-05-12T08:47:18.142479+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-05-12T08:47:18.142456+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-05-12T08:47:18.676440+00:00 app[web.1]: Listening on http://:::3000
2015-05-12T08:48:17.132841+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2015-05-12T08:48:17.132841+00:00 heroku[web.1]: Stopping process with SIGKILL
2015-05-12T08:48:18.006812+00:00 heroku[web.1]: Process exited with status 137
2015-05-12T08:48:18.014854+00:00 heroku[web.1]: State changed from starting to crashed
2015-05-12T08:48:18.015764+00:00 heroku[web.1]: State changed from crashed to starting
2015-05-12T08:48:19.731467+00:00 heroku[web.1]: Starting process with command `node server.js`
2015-05-12T08:48:21.328988+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-05-12T08:48:21.329000+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-05-12T08:48:21.790446+00:00 app[web.1]: Listening on http://:::3000
2015-05-12T08:49:20.337591+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2015-05-12T08:49:20.337739+00:00 heroku[web.1]: Stopping process with SIGKILL
2015-05-12T08:49:21.301823+00:00 heroku[web.1]: State changed from starting to crashed
2015-05-12T08:49:21.290974+00:00 heroku[web.1]: Process exited with status 137
2015-05-12T08:57:58.529222+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=peaceful-caverns-9339.herokuapp.com request_id=50cfbc6c-0561-4862-9254-d085043cb610 fwd="87.213.160.18" dyno= connect= service= status=503 bytes=
2015-05-12T08:57:59.066974+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=peaceful-caverns-9339.herokuapp.com request_id=608a9f0f-c2a7-45f7-8f94-2ce2f5cd1ff7 fwd="87.213.160.18" dyno= connect= service= status=503 bytes=
2015-05-12T11:10:09.538209+00:00 heroku[web.1]: State changed from crashed to starting
2015-05-12T11:10:11.968702+00:00 heroku[web.1]: Starting process with command `node server.js`
2015-05-12T11:10:13.905318+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-05-12T11:10:13.905338+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-05-12T11:10:14.509612+00:00 app[web.1]: Listening on http://:::3000
2015-05-12T11:11:12.622517+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2015-05-12T11:11:12.622876+00:00 heroku[web.1]: Stopping process with SIGKILL
2015-05-12T11:11:13.668749+00:00 heroku[web.1]: Process exited with status 137
2015-05-12T11:11:13.677915+00:00 heroku[web.1]: State changed from starting to crashed

Analyzing the Problem
While looking at the log, we see that the application got deployed and scaled properly:

2015-05-12T08:47:13.899422+00:00 heroku[api]: Deploy ee12c7d by daan.v.berkel.1980+trash@gmail.com
2015-05-12T08:47:13.848408+00:00 heroku[api]: Scale to web=1 by daan.v.berkel.1980+trash@gmail

It then tries to run node server.js:

2015-05-12T08:48:19.731467+00:00 heroku[web.1]: Starting process with command `node server.js`

This succeeds, because we see the expected Listening on message:

2015-05-12T08:48:21.790446+00:00 app[web.1]: Listening on http://:::3000

Unfortunately, it all breaks down after that:

2015-05-12T08:49:20.337591+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch

It retries starting the application, but eventually it gives up. The problem is that we hard-coded our application server to listen on port `3000`, but Heroku expects another port. Heroku communicates the port to use with the `PORT` environment variable.

Using Environment Variables
In order to start our application correctly, we need to use the environment variable PORT that Heroku provides. We can do that by opening server.js and going to line 15:

server.listen(3000, function(){
  var host = server.address().address;
  var port = server.address().port;

  console.log('Listening on http://%s:%s', host, port);
});

This snippet starts the server and makes it listen on port 3000. We need to change that value so that it uses the environment variable PORT. This is done with the following code:

server.listen(process.env.PORT || 3000, function(){
  var host = server.address().address;
  var port = server.address().port;

  console.log('Listening on http://%s:%s', host, port);
});

process.env.PORT || 3000 will use the PORT environment variable if it is set, and will default to port 3000 otherwise, e.g. for testing purposes.

Re-deploy Application
We need to deploy our code changes to Heroku. This is done with the following set of commands:

git add server.js
git commit -m "use PORT environment variable"
git push heroku master

The first two commands add and commit the changes to server.js in the repository. The third updates the tracked Heroku repository with these changes. This triggers Heroku to try and start the application anew. If you now inspect the log with heroku logs, you will see that the application is successfully started:

2015-05-12T12:22:15.829584+00:00 heroku[api]: Deploy 9a2cac8 by daan.v.berkel.1980+trash@gmail.com
2015-05-12T12:22:15.829584+00:00 heroku[api]: Release v4 created by daan.v.berkel.1980+trash@gmail.com
2015-05-12T12:22:17.325749+00:00 heroku[web.1]: State changed from crashed to starting
2015-05-12T12:22:19.613648+00:00 heroku[web.1]: Starting process with command `node server.js`
2015-05-12T12:22:21.503756+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-05-12T12:22:21.503733+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-05-12T12:22:22.118797+00:00 app[web.1]: Listening on http://:::10926
2015-05-12T12:22:23.355206+00:00 heroku[web.1]: State changed from starting to up

Tag Time
If you now open the application in your default browser with heroku open, you should be greeted by the game of Tag. If you move your mouse around in the Tag square, you will see your circle trying to chase it. You can now invite other people to play on the same address, and soon you will have a real game of Tag on your hands.

Conclusion
We have seen that Heroku provides an easy-to-use Platform as a Service that can be used to deploy your game server with the help of the Heroku Toolbelt.
About the author Daan van Berkel is an enthusiastic software craftsman with a knack for presenting technical details in a clear and concise manner. Driven by the desire for understanding complex matters, Daan is always on the lookout for innovative uses of software.


Creating your infrastructure using Chef Provisioning

Packt
05 Jun 2015
5 min read
In this article by Matthias Marschall, author of the book Chef Infrastructure Automation Cookbook - Second Edition, we look at Chef Provisioning. You know how to use Chef to manage the software on individual machines, and you know how to use knife to bootstrap individual nodes. Chef Provisioning helps you to use the power of Chef to create your whole infrastructure for you. No matter whether you want to create a cluster of Vagrant boxes, Docker instances, or Cloud servers, Chef Provisioning lets you define your infrastructure in a simple recipe and run it idempotently. Let's see how to create a Vagrant machine using a Chef recipe. (For more resources related to this topic, see here.)

Getting ready
Make sure that you have your Berksfile, my_cookbook, and web_server roles ready to create an nginx site.

How to do it...
Let's see how to create a Vagrant machine and install nginx on it:

Describe your Vagrant machine in a recipe called mycluster.rb:

mma@laptop:~/chef-repo $ subl mycluster.rb

require 'chef/provisioning'

with_driver 'vagrant'
with_machine_options :vagrant_options => {
  'vm.box' => 'opscode-ubuntu-14.04'
}

machine 'web01' do
  role 'web_server'
end

Install all required cookbooks in your local chef-repo:

mma@laptop:~/chef-repo $ berks install
mma@laptop:~/chef-repo $ berks vendor cookbooks
Resolving cookbook dependencies...
Using apt (2.6.1)
...TRUNCATED OUTPUT...
Vendoring yum-epel (0.6.0) to cookbooks/yum-epel

Run the Chef client in local mode to bring up the Vagrant machine and execute a Chef run on it:

mma@laptop:~/chef-repo $ chef-client -z mycluster.rb
[2015-03-08T21:09:39+01:00] INFO: Starting chef-zero on host localhost, port 8889 with repository at /Users/mma/work/chef-repo
...TRUNCATED OUTPUT...
Recipe: @recipe_files::/Users/mma/work/chef-repo/mycluster.rb
  * machine[webserver] action converge
[2015-03-08T21:09:43+01:00] INFO: Processing machine[web01] action converge (@recipe_files::/Users/mma/work/chef-repo/mycluster.rb line 6)
...TRUNCATED OUTPUT...
[2015-03-08T21:09:47+01:00] INFO: Executing sudo chef-client -l info on vagrant@127.0.0.1
    [web01] [2015-03-08T20:09:21+00:00] INFO: Forking chef instance to converge...
            Starting Chef Client, version 12.1.0
            ...TRUNCATED OUTPUT...
            Chef Client finished, 18/25 resources updated in 73.839065458 seconds
...TRUNCATED OUTPUT...
[2015-03-08T21:11:05+01:00] INFO: Completed chef-client -l info on vagrant@127.0.0.1: exit status 0
  - run 'chef-client -l info' on web01
[2015-03-08T21:11:05+01:00] INFO: Chef Run complete in 82.948293 seconds
...TRUNCATED OUTPUT...
Chef Client finished, 1/1 resources updated in 85.914979 seconds

Change into the directory where Chef put the Vagrant configuration:

mma@laptop:~/chef-repo $ cd ~/.chef/vms

Validate that there is a Vagrant machine named web01 running:

mma@laptop:~/.chef/vms $ vagrant status
Current machine states:
web01                 running (virtualbox)

Validate that nginx is installed and running on the Vagrant machine:

mma@laptop:~/.chef/vms $ vagrant ssh
vagrant@web01:~$ wget localhost:80
...TRUNCATED OUTPUT...
2015-03-08 22:14:45 (2.80 MB/s) - 'index.html' saved [21/21]

How it works...
Chef Provisioning comes with a selection of drivers for all kinds of infrastructures, including Fog (supporting Amazon EC2, OpenStack, and others), VMware vSphere, Vagrant (supporting VirtualBox and VMware Fusion), various containers such as LXC and Docker, and Secure Shell (SSH).
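Switching drivers usually only means changing the driver and machine options at the top of the recipe. The following snippet is a rough sketch of what an EC2-backed variant of mycluster.rb could look like using the Fog driver; the driver string follows the chef-provisioning-fog convention, while the AMI ID, flavor, and key pair name are placeholders that you would replace with values from your own AWS account:

require 'chef/provisioning'

# Assumes the chef-provisioning-fog driver gem is available and that AWS
# credentials are configured in the usual places (for example ~/.aws/credentials).
with_driver 'fog:AWS'

with_machine_options :bootstrap_options => {
  :image_id  => 'ami-xxxxxxxx',  # placeholder AMI ID
  :flavor_id => 't2.micro',
  :key_name  => 'my-ec2-key'     # hypothetical key pair name
}

machine 'web01' do
  role 'web_server'
end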
In this recipe, we make sure that we can use the directives provided by Chef Provisioning by requiring the chef/provisioning library. Then, we configure the driver that we want to use. We use Vagrant and tell Chef to use the opscode-ubuntu-14.04 Vagrant box to spin up our machine. Using the machine resource, we ask Chef to spin up a Vagrant machine and configure it using Chef by applying the role web_server. The web_server role uses the cookbook my_cookbook to configure the newly created Vagrant machine. To make sure that all the required cookbooks are available to Chef, we use berks install and berks vendor cookbooks. The berks vendor cookbooks command installs all the required cookbooks in the local cookbooks directory. The Chef client can access the cookbooks here, without the need for a Chef server. Finally, we use the Chef client to execute our Chef Provisioning recipe. It will spin up the defined Vagrant machine and execute a Chef client run on it. Chef Provisioning will put the Vagrant Virtual Machine (VM) definition into the directory ~/.chef/vms. To manage the Vagrant VM, you need to change to this directory.

There's more...
Instead of using the with_driver directive, you can use the CHEF_DRIVER environment variable:

mma@laptop:~/chef-repo $ CHEF_DRIVER=vagrant chef-client -z mycluster.rb

You can create multiple instances of a machine by using the machine_image directive in your recipe:

machine_image 'web_server' do
  role 'web_server'
end

1.upto(2) do |i|
  machine "web0#{i}" do
    from_image 'web_server'
  end
end

See also
Find the source code of the Chef Provisioning library on GitHub: https://github.com/chef/chef-provisioning
Find the Chef Provisioning documentation at https://docs.chef.io/provisioning.html
Learn how to set up a Chef server using Chef Provisioning: https://www.chef.io/blog/2014/12/15/sysadvent-day-14-using-chef-provisioning-to-build-chef-server/

Summary
This article deals with networking and applications spanning multiple servers. You learned how to create your whole infrastructure using Chef Provisioning.

Resources for Article: Further resources on this subject: Chef Infrastructure [article] Going Beyond the Basics [article] Getting started with using Chef [article]

edX E-Learning Course Marketing

Packt
05 Jun 2015
9 min read
In this article by Matthew A. Gilbert, the author of edX E-Learning Course Development, we are going to learn various ways of marketing. (For more resources related to this topic, see here.) edX's marketing options If you don't market your course, you might not get any new students to teach. Fortunately, edX provides you with an array of tools for this purpose, as follows: Creative Submission Tool: Submit the assets required for creating a page in your edX course using the Creative Submission Tool. You can also use those very materials in promoting the course. Access the Creative Submission Tool at https://edx.projectrequest.net/index.php/request. Logo and the Media Kit: Although these are intended for members of the media, you can also use the edX Media Kit for your promotional purposes: you can download high-resolution photos, edX logo visual guidelines (in Adobe Illustrator and EPS versions), key facts about edX, and answers to frequently asked questions. You can also contact the press office for additional information. You can find the edX Media Kit online at https://www.edx.org/media-kit. edX Learner Stories: Using stories of students who have succeeded with other edX courses is a compelling way to market the potential of your course. Using Tumblr, edX Learner Stories offers more than a dozen student profiles. You might want to use their stories directly or use them as a template for marketing materials of your own. Read edX Learner Stories at http://edxstories.tumblr.com. Social media marketing Traditional marketing tools and the options available in the edX Marketing Portal are a fitting first step in promoting your course. However, social media gives you a tremendously enhanced toolkit you can use to attract, convert, and transform spectators into students. When marketing your course with social media, you will also simultaneously create a digital footprint for yourself. This in turn helps establish your subject matter expertise far beyond one edX course. What's more, you won't be alone; there exists a large community of edX instructors and students, including those from other MOOC platforms already online. Take, for example, the following screenshot from edX's Twitter account (@edxonline). edX has embraced social media as a means of marketing and to create a practicing virtual community for those creating and taking their courses. Likewise, edX also actively maintains a page on Facebook, as follows: You can also see how active edX's YouTube channel is in the following screenshot. Note that there are both educational and promotional videos. To get you started in social media—if you're not already there—take a look at the list of 12 social media tools, as follows. Not all of these tools might be relevant to your needs, but consider the suggestions to decide how you might best use them, and give them a try: Facebook (https://www.facebook.com): Create a fan page for your edX course; you can re-use content from your course's About page such as your course intro video, course description, course image, and any other relevant materials. Be sure to include a link from the Facebook page for your course to its About page. Look for ways to share other content from your course (or related to your course) in a way that engages members of your fan page. Use your Facebook page to generate interest and answer questions from potential students. You might also consider creating a Facebook group. 
This can be more useful for current students to share knowledge during the class and to network once it's complete. Visit edX on Facebook at https://www.facebook.com/edX. Google+ (https://plus.google.com): Take the same approach as you did with your Facebook fan page. While this is not as engaging as Facebook, you might find that posting content on Google+ increases traffic to your course's About page due to the increased referrals you are likely to experience via Google search results. Add edX to your circles on Google+ at https://plus.google.com/+edXOnline/posts. Instagram (https://instagram.com): Share behind-the-scenes pictures of you and your staff for your course. Show your students what a day in your life is like, making sure to use a unique hashtag for your course. Picture the possibilities with edX on Instagram at https://instagram.com/edxonline/. LinkedIn (https://www.linkedin.com): Share information about your course in relevant LinkedIn groups, and post public updates about it in your personal account. Again, make sure you include a unique hashtag for your course and a link to the About page. Connect with edX on LinkedIn at https://www.linkedin.com/company/edx. Pinterest (https://www.pinterest.com): Share photos as with Instagram, but also consider sharing infographics about your course's subject matter or share infographics or imagers you use in your actual course as well. You might consider creating pin boards for each course, or one per pin board per module in a course. Pin edX onto your Pinterest pin board at https://www.pinterest.com/edxonline/. Slideshare (http://www.slideshare.net): If you want to share your subject matter expertise and thought leadership with a wider audience, Slideshare is a great platform to use. You can easily post your PowerPoint presentations, class documents or scholarly papers, infographics, and videos from your course or another topic. All of these can then be shared across other social media platforms. Review presentations from or about edX courses on Slideshare at http://www.slideshare.net/search/slideshow?searchfrom=header&q=edx. SoundCloud (https://soundcloud.com): With SoundCloud, you can share MP3 files of your course lectures or create podcasts related to your areas of expertise. Your work can be shared on Twitter, Tumblr, Facebook, and Foursquare, expanding your influence and audience exponentially. Listen to some audio content from Harvard University at https://soundcloud.com/harvard. Tumblr (https://www.tumblr.com): Resembling what the child of WordPress and Twitter might be like, Tumblr provides a platform to share behind-the-scenes text, photos, quotes, links, chat, audios, and videos of your edX course and the people who make it possible. Share a "day in the life" or document in real time, an interactive history of each edX course you teach. Read edX's learner stories at http://edxstories.tumblr.com. Twitter (https://twitter.com): Although messages on Twitter are limited to 140 characters, one tweet can have a big impact. For a faculty wanting to promote its edX course, it is an efficient and cost-effective option. Tweet course videos, samples of content, links to other curriculum, or promotional material. Engage with other educators who teach courses and retweet posts from academic institutions. Follow edX on Twitter at https://twitter.com/edxonline. 
You might also consider subscribing to edX's Twitter list of edX instructors at https://twitter.com/edXOnline/lists/edx-professors-teachers, and explore the Twitter accounts of edX courses by subscribing to that list at https://twitter.com/edXOnline/lists/edx-course-handles. Vine (https://vine.co): A short-format video service owned by Twitter, Vine provides you with 6 seconds to share your creativity, either in a continuous stream or smaller segments linked together like stop motion. You might create a vine showing the inner working of the course faculty and staff, or maybe even ask short questions related to the course content and invite people to reply with answers. Watch vines about MOOCs at https://vine.co. WordPress: WordPress gives you two options to manage and share content with students. With WordPress.com (https://wordpress.com), you're given a selection of standardized templates to use on a hosted platform. You have limited control but reasonable flexibility and limited, if any, expenses. With Wordpress.org (https://wordpress.org), you have more control but you need to host it on your own web server, which requires some technical know-how. The choice is yours. Read posts on edX on the MIT Open Matters blog on Wordpress.com at https://mitopencourseware.wordpress.com/category/edx/. YouTube (https://www.youtube.com): YouTube is the heart of your edX course. It's the core of your curriculum and the anchor of engagement for your students. When promoting your course, use existing videos from your curriculum in your social media campaigns, but identify opportunities to record short videos specifically for promoting your course. Watch course videos and promotional content on the edX YouTube channel at https://www.youtube.com/user/EdXOnline. Personal branding basics Additionally, whether the impact of your effort is immediately evident or not, your social media presence powers your personal brand as a professor. Why is that important? Read on to know. With the possible exception of marketing professors, most educators likely tend to think more about creating and teaching their course than promoting it—or themselves. Traditionally, that made sense, but it isn't practical in today's digitally connected world. Social media opens an area of influence where all educators—especially those teaching an edX course—should be participating. Unfortunately, many professors don't know where or how to start with social media. If you're teaching a course on edX, or even edX Edge, you will likely have some kind of marketing support from your university or edX. But if you are just in an organization using edX Code, or simply want to promote yourself and your edX course, you might be on your own. One option to get you started with social media is the Babb Group, a provider of resources and consulting for online professors, business owners, and real-estate investors. Its founder and CEO, Dani Babb (PhD), says this: "Social media helps you show that you are an expert in a given field. It is an important tool today to help you get hired, earn promotions, and increase your visibility." The Babb Group offers five packages focused on different social media platforms: Twitter, LinkedIn, Facebook, Twitter and Facebook, or Twitter with Facebook and LinkedIn. You can view the Babb Group's social media marketing packages at http://www.thebabbgroup.com/social-media-profiles-for-professors.html. 
Connect with Dani Babb on LinkedIn at https://www.linkedin.com/in/drdanibabb or on Twitter at https://twitter.com/danibabb Summary In this article, we tackled traditional marketing tools, identified options available from edX, discussed social media marketing, and explored personal branding basics. Resources for Article: Further resources on this subject: Constructing Common UI Widgets [article] Getting Started with Odoo Development [article] MODx Web Development: Creating Lists [article]


Getting Started with Hyper-V Architecture and Components

Packt
04 Jun 2015
19 min read
In this article by Vinícius R. Apolinário, author of the book Learning Hyper-V, we will cover the following topics: Hypervisor architecture Type 1 and 2 Hypervisors Microkernel and Monolithic Type 1 Hypervisors Hyper-V requirements and processor features Memory configuration Non-Uniform Memory Access (NUMA) architecture (For more resources related to this topic, see here.) Hypervisor architecture If you've used Microsoft Virtual Server or Virtual PC, and then moved to Hyper-V, I'm almost sure that your first impression was: "Wow, this is much faster than Virtual Server". You are right. And there is a reason why Hyper-V performance is much better than Virtual Server or Virtual PC. It's all about the architecture. There are two types of Hypervisor architectures. Hypervisor Type 1, like Hyper-V and ESXi from VMware, and Hypervisor Type 2, like Virtual Server, Virtual PC, VMware Workstation, and others. The objective of the Hypervisor is to execute, manage and control the operation of the VM on a given hardware. For that reason, the Hypervisor is also called Virtual Machine Monitor (VMM). The main difference between these Hypervisor types is the way they operate on the host machine and its operating systems. As Hyper-V is a Type 1 Hypervisor, we will cover Type 2 first, so we can detail Type 1 and its benefits later. Type 1 and Type 2 Hypervisors Hypervisor Type 2, also known as hosted, is an implementation of the Hypervisor over and above the OS installed on the host machine. With that, the OS will impose some limitations to the Hypervisor to operate, and these limitations are going to reflect on the performance of the VM. To understand that, let me explain how a process is placed on the processor: the processor has what we call Rings on which the processes are placed, based on prioritization. The main Rings are 0 and 3. Kernel processes are placed on Ring 0 as they are vital to the OS. Application processes are placed on Ring 3, and, as a result, they will have less priority when compared to Ring 0. The issue on Hypervisors Type 2 is that it will be considered an application, and will run on Ring 3. Let's have a look at it: As you can see from the preceding diagram, the hypervisor has an additional layer to access the hardware. Now, let's compare it with Hypervisor Type 1: The impact is immediate. As you can see, Hypervisor Type 1 has total control of the underlying hardware. In fact, when you enable Virtualization Assistance (hardware-assisted virtualization) at the server BIOS, you are enabling what we call Ring -1, or Ring decompression, on the processor and the Hypervisor will run on this Ring. The question you might have is "And what about the host OS?" If you install the Hyper-V role on a Windows Server for the first time, you may note that after installation, the server will restart. But, if you're really paying attention, you will note that the server will actually reboot twice. This behavior is expected, and the reason it will happen is because the OS is not only installing and enabling Hyper-V bits, but also changing its architecture to the Type 1 Hypervisor. In this mode, the host OS will operate in the same way a VM does, on top of the Hypervisor, but on what we call parent partition. The parent partition will play a key role as the boot partition and in supporting the child partitions, or guest OS, where the VMs are running. The main reason for this partition model is the key attribute of a Hypervisor: isolation. 
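As a reference point, the architectural switch described above is triggered simply by adding the Hyper-V role. On recent Windows Server versions this can also be done from PowerShell; the following one-liner is a minimal sketch, assuming the role is available on your edition and that an immediate restart is acceptable:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart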
For Microsoft Hyper-V Server you don't have to install the Hyper-V role, as it will be installed when you install the OS, so you won't be able to see the server booting twice. With isolation, you can ensure that a given VM will never have access to another VM. That means that if you have a compromised VM, with isolation, the VM will never infect another VM or the host OS. The only way a VM can access another VM is through the network, like all other devices in your network. Actually, the same is true for the host OS. This is one of the reasons why you need an antivirus for the host and the VMs, but this will be discussed later. The major difference between Type 1 and Type 2 now is that kernel processes from both host OS and VM OS will run on Ring 0. Application processes from both host OS and VM OS will run on Ring 3. However, there is one piece left. The question now is "What about device drivers?" Microkernel and Monolithic Type 1 Hypervisors Have you tried to install Hyper-V on a laptop? What about an all-in-one device? A PC? A server? An x64 based tablet? They all worked, right? And they're supposed to work. As Hyper-V is a Microkernel Type 1 Hypervisor, all the device drivers are hosted on the parent partition. A Monolithic Type 1 Hypervisor hosts its drivers on the Hypervisor itself. VMware ESXi works this way. That's why you should never use a standard ESXi media to install an ESXi host. The hardware manufacturer will provide you with an appropriate media with the correct drivers for the specific hardware. The main advantage of the Monolithic Type 1 Hypervisor is that, as it always has the correct driver installed, you will never have a performance issue due to an incorrect driver. On the other hand, you won't be able to install this on any device. The Microkernel Type 1 Hypervisor, on the other hand, hosts its drivers on the parent partition. That means that if you installed the host OS on a device, and the drivers are working, the Hypervisor, and in this case Hyper-V, will work just fine. There are other hardware requirements. These will be discussed later in this article. The other side of this is that if you use a generic driver, or a wrong version of it, you may have performance issues, or even driver malfunction. What you have to keep in mind here is that Microsoft does not certify drivers for Hyper-V. Device drivers are always certified for Windows Server. If the driver is certified for Windows Server, it is also certified for Hyper-V. But you always have to ensure the use of correct driver for a given hardware. Let's take a better look at how Hyper-V works as a Microkernel Type 1 Hypervisor: As you can see from the preceding diagram, there are multiple components to ensure that the VM will run perfectly. However, the major component is the Integration Components (IC), also called Integration Services. The IC is a set of tools that you should install or upgrade on the VM, so that the VM OS will be able to detect the virtualization stack and run as a regular OS on a given hardware. To understand this more clearly, let's see how an application accesses the hardware and understand all the processes behind it. When the application tries to send a request to the hardware, the kernel is responsible for interpreting this call. As this OS is running on an Enlightened Child Partition (Means that IC is installed), the Kernel will send this call to the Virtual Service Client (VSC) that operates as a synthetic device driver. 
The VSC is responsible for communicating with the Virtual Service Provider (VSP) on the parent partition, through VMBus, so the VSC can use the hardware resource. The VMBus will then be able to communicate with the hardware for the VM. The VMBus, a channel-based communication, is actually responsible for communicating with the parent partition and hardware. For the VMBus to access the hardware, it will communicate directly with a component on the Hypervisor called hypercalls. These hypercalls are then redirected to the hardware. However, only the parent partition can actually access the physical processor and memory. The child partitions access a virtual view of these components that are translated on the guest and the host partitions. New processors have a feature called Second Level Address Translation (SLAT) or Nested Paging. This feature is extremely important on high performance VMs and hosts, as it helps reduce the overhead of the virtual to physical memory and processor translation. On Windows 8, SLAT is a requirement for Hyper-V. It is important to note that Enlightened Child Partitions, or partitions with IC, can be Windows or Linux OS. If the child partitions have a Linux OS, the name of the component is Linux Integration Services (LIS), but the operation is actually the same. Another important fact regarding ICs is that they are already present on Windows Server 2008 or later. But, if you are running a newer version of Hyper-V, you have to upgrade the IC version on the VM OS. For example, if you are running Hyper-V 2012 R2 on the host OS and the guest OS is running Windows Server 2012 R2, you probably don't have to worry about it. But if you are running Hyper-V 2012 R2 on the host OS and the guest OS is running Windows Server 2012, then you have to upgrade the IC on the VM to match the parent partition version. Running guest OS Windows Server 2012 R2 on a VM on top of Hyper-V 2012 is not recommended. For Linux guest OS, the process is the same. Linux kernel version 3 or later already have LIS installed. If you are running an old version of Linux, you should verify the correct LIS version of your OS. To confirm the Linux and LIS versions, you can refer to an article at http://technet.microsoft.com/library/dn531030.aspx. Another situation is when the guest OS does not support IC or LIS, or an Unenlightened Child Partition. In this case, the guest OS and its kernel will not be able to run as an Enlightened Child Partition. As the VMBus is not present in this case, the utilization of hardware will be made by emulation and performance will be degraded. This only happens with old versions of Windows and Linux, like Windows 2000 Server, Windows NT, and CentOS 5.8 or earlier, or in case that the guest OS does not have or support IC. Now that you understand how the Hyper-V architecture works, you may be thinking "Okay, so for all of this to work, what are the requirements?" Hyper-V requirements and processor features At this point, you can see that there is a lot of effort for putting all of this to work. In fact, this architecture is only possible because hardware and software companies worked together in the past. The main goal of both type of companies was to enable virtualization of operating systems without changing them. Intel and AMD created, each one with its own implementation, a processor feature called virtualization assistance so that the Hypervisor could run on Ring 0, as explained before. But this is just the first requirement. 
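A quick way to see whether a given Windows machine already exposes these capabilities is the built-in systeminfo utility; on Windows 8/Server 2012 and later, the end of its output contains a Hyper-V Requirements section reporting VM Monitor Mode Extensions, Virtualization Enabled In Firmware, Second Level Address Translation, and Data Execution Prevention Available. This is only a convenience check, and the exact wording can vary between OS versions:

C:\> systeminfo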
There are other requirements as well, which are as follows:

Virtualization assistance (also known as Hardware-assisted virtualization): This feature was created to remove the necessity of changing the OS for virtualizing it. On Intel processors, it is known as Intel VT-x. All recent processor families support this feature, including Core i3, Core i5, and Core i7. The complete list of processors and features can be found at http://ark.intel.com/Products/VirtualizationTechnology. You can also use this tool to check whether your processor meets this requirement, which can be downloaded at https://downloadcenter.intel.com/Detail_Desc.aspx?ProductID=1881&DwnldID=7838. On AMD processors, this technology is known as AMD-V. Like Intel, all recent processor families support this feature. AMD provides a tool to check processor compatibility that can be downloaded at http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization.

Data Execution Prevention (DEP): This is a security feature that marks memory pages as either executable or nonexecutable. For Hyper-V to run, this option must be enabled in the System BIOS. On Intel-based processors, this feature is called the Execute Disable bit (Intel XD bit); on AMD processors, it is called the No Execute bit (AMD NX bit). This configuration will vary from one System BIOS to another. Check with your hardware vendor how to enable it in the System BIOS.

x64 (64-bit) based processor: This processor feature uses 64-bit memory addresses. Although you may find that all new processors are x64, you might want to check if this is true before starting your implementation. The compatibility checkers above, from Intel and AMD, will show you if your processor is x64.

Second Level Address Translation (SLAT): As discussed before, SLAT is not a requirement for Hyper-V to work. This feature provides much more performance on the VMs as it removes the need for translating between physical and virtual pages of memory. It is highly recommended to have the SLAT feature on the processor, as it provides more performance on high performance systems. As also discussed before, SLAT is a requirement if you want to use Hyper-V on Windows 8 or 8.1. To check if your processor has the SLAT feature, use the Sysinternals tool Coreinfo, which can be downloaded at http://technet.microsoft.com/en-us/sysinternals/cc835722.aspx.

There are some specific processor features that are not used exclusively for virtualization. But when the VM is initiated, it will use these specific features from the processor. If the VM is initiated and these features are allocated on the guest OS, you can't simply remove them. This is a problem if you are going to Live Migrate this VM from one host to another host; if these specific features are not available, you won't be able to perform the operation. At this moment, you have to understand that Live Migration moves a powered-on VM from one host to another. If you try to Live Migrate a VM between hosts with different processor types, you may be presented with an error. Live Migration is only permitted between the same processor vendor: Intel-Intel or AMD-AMD. Intel-AMD Live Migration is not allowed under any circumstance. If the processor is the same on both hosts, Live Migration and Share Nothing Live Migration will work without problems. But even within the same vendor, there can be different processor families. In this case, you can remove these specific features from the Virtual Processor presented to the VM. To do that, open Hyper-V Manager | Settings...
| Processor | Processor Compatibility. Mark the Migrate to a physical computer with a different processor version option. This option is only available if the VM is powered off. Keep in mind that enabling this option will remove processor-specific features for the VM. If you are going to run an application that requires these features, they will not be available and the application may not run. Now that you have checked all the requirements, you can start planning your server for virtualization with Hyper-V. This is true from the perspective that you understand how Hyper-V works and what are the requirements for it to work. But there is another important subject that you should pay attention to when planning your server: memory. Memory configuration I believe you have heard this one before "The application server is running under performance". In the virtualization world, there is an obvious answer to it: give more virtual hardware to the VM. Although it seems to be the logical solution, the real effect can be totally opposite. During the early days, when servers had just a few sockets, processors, and cores, a single channel made the communication between logical processors and memory. But server hardware has evolved, and today, we have servers with 256 logical processors and 4 TB of RAM. To provide better communication between these components, a new concept emerged. Modern servers with multiple logical processors and high amount of memory use a new design called Non-Uniform Memory Access (NUMA) architecture. Non-Uniform Memory Access (NUMA) architecture NUMA is a memory design that consists of allocating memory to a given node, or a cluster of memory and logical processors. Accessing memory from a processor inside the node is notably faster than accessing memory from another node. If a processor has to access memory from another node, the performance of the process performing the operation will be affected. Basically, to solve this equation you have to ensure that the process inside the guest VM is aware of the NUMA node and is able to use the best available option: When you create a virtual machine, you decide how many virtual processors and how much virtual RAM this VM will have. Usually, you assign the amount of RAM that the application will need to run and meet the expected performance. For example, you may ask a software vendor on the application requirements and this software vendor will say that the application would be using at least 8 GB of RAM. Suppose you have a server with 16 GB of RAM. What you don't know is that this server has four NUMA nodes. To be able to know how much memory each NUMA node has, you must divide the total amount of RAM installed on the server by the number of NUMA nodes on the system. The result will be the amount of RAM of each NUMA node. In this case, each NUMA node has a total of 4 GB of RAM. Following the instructions of the software vendor, you create a VM with 8 GB of RAM. The Hyper-V standard configuration is to allow NUMA spanning, so you will be able to create the VM and start it. Hyper-V will accommodate 4 GB of RAM on two NUMA nodes. This NUMA spanning configuration means that a processor can access the memory on another NUMA node. As mentioned earlier, this will have an impact on the performance if the application is not aware of it. On Hyper-V, prior to the 2012 version, the guest OS was not informed about the NUMA configuration. 
Basically, in this case, the guest OS would see one NUMA node with 8 GB of RAM, and the allocation of memory would be made without NUMA restrictions, impacting the final performance of the application. Hyper-V 2012 and 2012 R2 have the same feature—the guest OS will see the virtual NUMA (vNUMA) presented to the child partition. With this feature, the guest OS and/or the application can make a better choice on where to allocate memory for each process running on this VM. NUMA is not a virtualization technology. In fact, it has been used for a long time, and even applications like SQL Server 2005 already used NUMA to better allocate the memory that its processes are using. Prior to Hyper-V 2012, if you wanted to avoid this behavior, you had two choices: Create the VM and allocate the maximum vRAM of a single NUMA node for it, as Hyper-V will always try to allocate the memory inside of a single NUMA node. In the above case, the VM should not have more than 4 GB of vRAM. But for this configuration to really work, you should also follow the next choice. Disable NUMA Spanning on Hyper-V. With this configuration disabled, you will not be able to run a VM if the memory configuration exceeds a single NUMA node. To do this, you should clear the Allow virtual machines to span physical NUMA nodes checkbox on Hyper-V Manager | Hyper-V Settings... | NUMA Spanning. Keep in mind that disabling this option will prevent you from running a VM if no nodes are available. You should also remember that even with Hyper-V 2012, if you create a VM with 8 GB of RAM using two NUMA nodes, the application on top of the guest OS (and the guest OS) must understand the NUMA topology. If the application and/or guest OS are not NUMA aware, vNUMA will not have effect and the application can still have performance issues. At this point you are probably asking yourself "How do I know how many NUMA nodes I have on my server?" This was harder to find in the previous versions of Windows Server and Hyper-V Server. In versions prior to 2012, you should open the Performance Monitor and check the available counters in Hyper-V VM Vid NUMA Node. The number of instances represents the number of NUMA Nodes. In Hyper-V 2012, you can check the settings for any VM. Under the Processor tab, there is a new feature available for NUMA. Let's have a look at this screen to understand what it represents: In Configuration, you can easily confirm how many NUMA nodes the host running this VM has. In the case above, the server has only 1 NUMA node. This means that all memory will be allocated close to the processor. Multiple NUMA nodes are usually present on servers with high amount of logical processors and memory. In the NUMA topology section, you can ensure that this VM will always run with the informed configuration. This is presented to you because of a new Hyper-V 2012 feature called Share Nothing Live Migration, which will be explained in detail later. This feature allows you to move a VM from one host to another without turning the VM off, with no cluster and no shared storage. As you can move the VM turned on, you might want to force the processor and memory configuration, based on the hardware of your worst server, ensuring that your VM will always meet your performance expectations. The Use Hardware Topology button will apply the hardware topology in case you moved the VM to another host or in case you changed the configuration and you want to apply the default configuration again. 
To summarize, if you want to make sure that your VM will not have performance problems, you should check how many NUMA nodes your server has and divide the total amount of memory by it; the result is the total memory on each node. Creating a VM with more memory than a single node will make Hyper-V present a vNUMA to the guest OS. Ensuring that the guest OS and applications are NUMA aware is also important, so that the guest OS and application can use this information to allocate memory for a process on the correct node. NUMA is important to ensure that you will not have problems because of host configuration and misconfiguration on the VM. But, in some cases, even when planning the VM size, you will come to a moment when the VM memory is stressed. In these cases, Hyper-V can help with another feature called Dynamic Memory.

Summary
In this article, we learned about the Hypervisor architecture and the different Hypervisor types, and briefly explored Microkernel and Monolithic Type 1 Hypervisors. In addition, this article also explained the Hyper-V requirements and processor features, memory configuration, and the NUMA architecture.

Resources for Article: Further resources on this subject: Planning a Compliance Program in Microsoft System Center 2012 [Article] So, what is Microsoft © Hyper-V server 2008 R2? [Article] Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]


Installing OpenStack Swift

Packt
04 Jun 2015
10 min read
In this article by Amar Kapadia, Sreedhar Varma, and Kris Rajana, authors of the book OpenStack Object Storage (Swift) Essentials, we will see how IT administrators can install OpenStack Swift. The version discussed here is the Juno release of OpenStack. Installation of Swift has several steps and requires careful planning before beginning the process. A simple installation consists of installing all Swift components on a single node, and a complex installation consists of installing Swift on several proxy server nodes and storage server nodes. The number of storage nodes can be in the order of thousands across multiple zones and regions. Depending on your installation, you need to decide on the number of proxy server nodes and storage server nodes that you will configure. This article demonstrates a manual installation process; advanced users may want to use utilities such as Puppet or Chef to simplify the process. This article walks you through an OpenStack Swift cluster installation that contains one proxy server and five storage servers. (For more resources related to this topic, see here.) Hardware planning This section describes the various hardware components involved in the setup. Since Swift deals with object storage, disks are going to be a major part of hardware planning. The size and number of disks required should be calculated based on your requirements. Networking is also an important component, where factors such as a public or private network and a separate network for communication between storage servers need to be planned. Network throughput of at least 1 GB per second is suggested, while 10 GB per second is recommended. The servers we set up as proxy and storage servers are dual quad-core servers with 12 GB of RAM. In our setup, we have a total of 15 x 2 TB disks for Swift storage; this gives us a total size of 30 TB. However, with in-built replication (with a default replica count of 3), Swift maintains three copies of the same data. Therefore, the effective capacity for storing files and objects is approximately 10 TB, taking filesystem overhead into consideration. This is further reduced due to less than 100 percent utilization. The following figure depicts the nodes of our Swift cluster configuration: The storage servers have container, object, and account services running in them. Server setup and network configuration All the servers are installed with the Ubuntu server operating system (64-bit LTS version 14.04). You'll need to configure three networks, which are as follows: Public network: The proxy server connects to this network. This network provides public access to the API endpoints within the proxy server. Storage network: This is a private network and it is not accessible to the outside world. All the storage servers and the proxy server will connect to this network. Communication between the proxy server and the storage servers and communication between the storage servers take place within this network. In our configuration, the IP addresses assigned in this network are 172.168.10.0 and 172.168.10.99. Replication network: This is also a private network that is not accessible to the outside world. It is dedicated to replication traffic, and only storage servers connect to it. All replication-related communication between storage servers takes place within this network. In our configuration, the IP addresses assigned in this network are 172.168.9.0 and 172.168.9.99. 
This network is optional, and if it is set up, the traffic on it needs to be monitored closely.

Pre-installation steps
In order for the various servers to communicate easily, edit the /etc/hosts file and add the host names of each server in it. This has to be done on all the nodes. The following screenshot shows an example of the contents of the /etc/hosts file of the proxy server node:

Install the Network Time Protocol (NTP) service on the proxy server node and storage server nodes. This helps all the nodes to synchronize their services effectively without any clock delays. The pre-installation steps to be performed are as follows:

Run the following command to install the NTP service:

# apt-get install ntp

Configure the proxy server node to be the reference server from which the storage server nodes set their time. Make sure that the following line is present in /etc/ntp.conf on the proxy server node:

server ntp.ubuntu.com

For NTP configuration on the storage server nodes, add the following line to /etc/ntp.conf. Comment out the remaining lines with server addresses such as 0.ubuntu.pool.ntp.org, 1.ubuntu.pool.ntp.org, 2.ubuntu.pool.ntp.org, and 3.ubuntu.pool.ntp.org:

# server 0.ubuntu.pool.ntp.org
# server 1.ubuntu.pool.ntp.org
# server 2.ubuntu.pool.ntp.org
# server 3.ubuntu.pool.ntp.org
server s-swift-proxy

Restart the NTP service on each server with the following command:

# service ntp restart

Downloading and installing Swift
The Ubuntu Cloud Archive is a special repository that provides users with the ability to install new releases of OpenStack. The steps required to download and install Swift are as follows:

Enable the capability to install new releases of OpenStack, and install the latest version of Swift on each node using the following commands. The second command shown here creates a file named cloudarchive-juno.list in /etc/apt/sources.list.d, whose content is "deb http://ubuntu-cloud.archive.canonical.com/ubuntu".

Now, update the OS using the following command:

# apt-get update && apt-get dist-upgrade

On all the Swift nodes, we will install the prerequisite software and services using this command:

# apt-get install swift rsync memcached python-netifaces python-xattr python-memcache

Next, we create a Swift folder under /etc and give users the permission to access this folder, using the following commands:

# mkdir -p /etc/swift/
# chown -R swift:swift /etc/swift

Download the /etc/swift/swift.conf file from GitHub using this command:

# curl -o /etc/swift/swift.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample

Modify the /etc/swift/swift.conf file and add a variable called swift_hash_path_suffix in the swift-hash section. We then create a unique hash string using # python -c "from uuid import uuid4; print uuid4()" or # openssl rand -hex 10, and assign it to this variable. We then add another variable called swift_hash_path_prefix to the swift-hash section, and assign to it another hash string created using the method described in the preceding step. These strings will be used in the hashing process to determine the mappings in the ring. The swift.conf file should be identical on all the nodes in the cluster.

Setting up storage server nodes
This section explains additional steps to set up the storage server nodes, which will contain the object, container, and account services.
Installing services
The first step required to set up the storage server node is installing services. Let's look at the steps involved:

On each storage server node, install the packages for the swift-account, swift-container, and swift-object services, and xfsprogs (XFS filesystem) using this command:

# apt-get install swift-account swift-container swift-object xfsprogs

Download the account-server.conf, container-server.conf, and object-server.conf samples from GitHub, using the following commands:

# curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample

Edit the /etc/swift/account-server.conf file with the following section:

Edit the /etc/swift/container-server.conf file with this section:

Edit the /etc/swift/object-server.conf file with the following section:

Formatting and mounting hard disks
On each storage server node, we need to identify the hard disks that will be used to store the data. We will then format the hard disks and mount them on a directory, which Swift will then use to store data. We will not create any RAID levels or subpartitions on these hard disks because they are not necessary for Swift. They will be used as entire disks. The operating system will be installed on separate disks, which will be RAID configured.

First, identify the hard disks that are going to be used for storage and format them. In our storage server, we have identified sdb, sdc, and sdd to be used for storage. We will perform the following operations on sdb. These four steps should be repeated for sdc and sdd as well:

Carry out the partitioning for sdb and create the filesystem using these commands:

# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1

Then let's create a directory in /srv/node/sdb1 that will be used to mount the filesystem. Give the swift user permission to access this directory. These operations can be performed using the following commands:

# mkdir -p /srv/node/sdb1
# chown -R swift:swift /srv/node/sdb1

We set up an entry in fstab for the sdb1 partition in the sdb hard disk, as follows. This will automatically mount sdb1 on /srv/node/sdb1 upon every boot. Add the following line to the /etc/fstab file:

/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

Mount sdb1 on /srv/node/sdb1 using the following command:

# mount /srv/node/sdb1

RSYNC and RSYNCD
In order for Swift to perform the replication of data, we need to configure rsync by configuring rsyncd.conf. This is done by performing the following steps:

Create the rsyncd.conf file in the /etc folder with the following content:

# vi /etc/rsyncd.conf

We are setting up synchronization within the network by including the following lines in the configuration file:
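The file defines one rsync module each for the account, container, and object data, bound to the replication network address of the storage server. The block below is only a representative sketch of such an rsyncd.conf; the module layout follows common Swift practice, and values such as the paths, lock files, and connection limits are typical choices rather than values prescribed by this particular setup:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 172.168.9.52

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock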
We then have to edit the /etc/default/rsync file and set RSYNC_ENABLE to true using the following configuration option:
RSYNC_ENABLE=true

Next, we restart the rsync service using this command:
# service rsync restart

Then we create the swift recon and cache directories using the following commands:
# mkdir -p /var/cache/swift
# mkdir -p /var/swift/recon

Setting permissions is done using these commands:
# chown -R swift:swift /var/cache/swift
# chown -R swift:swift /var/swift/recon

Repeat these steps on every storage server.

Setting up the proxy server node

This section explains the steps required to set up the proxy server node, which are as follows:

Install the following services only on the proxy server node:
# apt-get install python-swiftclient python-keystoneclient python-keystonemiddleware swift-proxy

Swift doesn't support HTTPS. OpenSSL has already been installed as part of the operating system installation to support HTTPS. We are going to use the OpenStack Keystone service for authentication.

In order to set up the proxy-server.conf file for this, we download the configuration file from the following link and edit it:
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
# vi /etc/swift/proxy-server.conf

The proxy-server.conf file should be edited to set the correct auth_host, admin_token, admin_tenant_name, admin_user, and admin_password values:
admin_token = 01d8b673-9ebb-41d2-968a-d2a85daa1324
admin_tenant_name = admin
admin_user = admin
admin_password = changeme

Next, we create a keystone-signing directory and give permissions to the swift user using the following commands:
# mkdir -p /home/swift/keystone-signing
# chown -R swift:swift /home/swift/keystone-signing

Summary

In this article, you learned how to install and set up the OpenStack Swift service to provide object storage, and install and set up the Keystone service to provide authentication for users to access the Swift object storage.

Resources for Article:
Further resources on this subject:
Troubleshooting in OpenStack Cloud Computing [Article]
Using OpenStack Swift [Article]
Playing with Swift [Article]

Data Analysis Using R

Packt
04 Jun 2015
17 min read
In this article by Viswa Viswanathan and Shanthi Viswanathan, the authors of the book R Data Analysis Cookbook, we discover how R can be used in various ways such as comparison, classification, applying different functions, and so on. We will cover the following recipes: Creating charts that facilitate comparisons Building, plotting, and evaluating – classification trees Using time series objects Applying functions to subsets of a vector (For more resources related to this topic, see here.) Creating charts that facilitate comparisons In large datasets, we often gain good insights by examining how different segments behave. The similarities and differences can reveal interesting patterns. This recipe shows how to create graphs that enable such comparisons. Getting ready If you have not already done so, download the code files and save the daily-bike-rentals.csv file in your R working directory. Read the data into R using the following command: > bike <- read.csv("daily-bike-rentals.csv") > bike$season <- factor(bike$season, levels = c(1,2,3,4),   labels = c("Spring", "Summer", "Fall", "Winter")) > attach(bike) How to do it... We base this recipe on the task of generating histograms to facilitate the comparison of bike rentals by season. Using base plotting system We first look at how to generate histograms of the count of daily bike rentals by season using R's base plotting system: Set up a 2 X 2 grid for plotting histograms for the four seasons: > par(mfrow = c(2,2)) Extract data for the seasons: > spring <- subset(bike, season == "Spring")$cnt > summer <- subset(bike, season == "Summer")$cnt > fall <- subset(bike, season == "Fall")$cnt > winter <- subset(bike, season == "Winter")$cnt Plot the histogram and density for each season: > hist(spring, prob=TRUE,   xlab = "Spring daily rentals", main = "") > lines(density(spring)) >  > hist(summer, prob=TRUE,   xlab = "Summer daily rentals", main = "") > lines(density(summer)) >  > hist(fall, prob=TRUE,   xlab = "Fall daily rentals", main = "") > lines(density(fall)) >  > hist(winter, prob=TRUE,   xlab = "Winter daily rentals", main = "") > lines(density(winter)) You get the following output that facilitates comparisons across the seasons: Using ggplot2 We can achieve much of the preceding results in a single command: > qplot(cnt, data = bike) + facet_wrap(~ season, nrow=2) +   geom_histogram(fill = "blue") You can also combine all four into a single histogram and show the seasonal differences through coloring: > qplot(cnt, data = bike, fill = season) How it works... When you plot a single variable with qplot, you get a histogram by default. Adding facet enables you to generate one histogram per level of the chosen facet. By default, the four histograms will be arranged in a single row. Use facet_wrap to change this. There's more... You can use ggplot2 to generate comparative boxplots as well. Creating boxplots with ggplot2 Instead of the default histogram, you can get a boxplot with either of the following two approaches: > qplot(season, cnt, data = bike, geom = c("boxplot"), fill = season) >  > ggplot(bike, aes(x = season, y = cnt)) + geom_boxplot() The preceding code produces the following output: The second line of the preceding code produces the following plot: Building, plotting, and evaluating – classification trees You can use a couple of R packages to build classification trees. Under the hood, they all do the same thing. Getting ready If you do not already have the rpart, rpart.plot, and caret packages, install them now. 
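If any of them are missing, they can be installed from CRAN in the usual way, for example:

> install.packages(c("rpart", "rpart.plot", "caret"))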
Download the data files and place the banknote-authentication.csv file in your R working directory. How to do it... This recipe shows you how you can use the rpart package to build classification trees and the rpart.plot package to generate nice-looking tree diagrams: Load the rpart, rpart.plot, and caret packages: > library(rpart) > library(rpart.plot) > library(caret) Read the data: > bn <- read.csv("banknote-authentication.csv") Create data partitions. We need two partitions—training and validation. Rather than copying the data into the partitions, we will just keep the indices of the cases that represent the training cases and subset as and when needed: > set.seed(1000) > train.idx <- createDataPartition(bn$class, p = 0.7, list = FALSE) Build the tree: > mod <- rpart(class ~ ., data = bn[train.idx, ], method = "class", control = rpart.control(minsplit = 20, cp = 0.01)) View the text output (your result could differ if you did not set the random seed as in step 3): > mod n= 961   node), split, n, loss, yval, (yprob)      * denotes terminal node   1) root 961 423 0 (0.55983351 0.44016649)    2) variance>=0.321235 511 52 0 (0.89823875 0.10176125)      4) curtosis>=-4.3856 482 29 0 (0.93983402 0.06016598)        8) variance>=0.92009 413 10 0 (0.97578692 0.02421308) *        9) variance< 0.92009 69 19 0 (0.72463768 0.27536232)        18) entropy< -0.167685 52   6 0 (0.88461538 0.11538462) *        19) entropy>=-0.167685 17   4 1 (0.23529412 0.76470588) *      5) curtosis< -4.3856 29   6 1 (0.20689655 0.79310345)      10) variance>=2.3098 7   1 0 (0.85714286 0.14285714) *      11) variance< 2.3098 22   0 1 (0.00000000 1.00000000) *    3) variance< 0.321235 450 79 1 (0.17555556 0.82444444)      6) skew>=6.83375 76 18 0 (0.76315789 0.23684211)      12) variance>=-3.4449 57   0 0 (1.00000000 0.00000000) *      13) variance< -3.4449 19   1 1 (0.05263158 0.94736842) *      7) skew< 6.83375 374 21 1 (0.05614973 0.94385027)      14) curtosis>=6.21865 106 16 1 (0.15094340 0.84905660)        28) skew>=-3.16705 16   0 0 (1.00000000 0.00000000) *       29) skew< -3.16705 90   0 1 (0.00000000 1.00000000) *      15) curtosis< 6.21865 268   5 1 (0.01865672 0.98134328) * Generate a diagram of the tree (your tree might differ if you did not set the random seed as in step 3): > prp(mod, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE, faclen = 4, varlen = 8, shadow.col = "gray") The following output is obtained as a result of the preceding command: Prune the tree: > # First see the cptable > # !!Note!!: Your table can be different because of the > # random aspect in cross-validation > mod$cptable            CP nsplit rel error   xerror       xstd 1 0.69030733     0 1.00000000 1.0000000 0.03637971 2 0.09456265     1 0.30969267 0.3262411 0.02570025 3 0.04018913     2 0.21513002 0.2387707 0.02247542 4 0.01891253     4 0.13475177 0.1607565 0.01879222 5 0.01182033     6 0.09692671 0.1347518 0.01731090 6 0.01063830     7 0.08510638 0.1323877 0.01716786 7 0.01000000     9 0.06382979 0.1276596 0.01687712   > # Choose CP value as the highest value whose > # xerror is not greater than minimum xerror + xstd > # With the above data that happens to be > # the fifth one, 0.01182033 > # Your values could be different because of random > # sampling > mod.pruned = prune(mod, mod$cptable[5, "CP"]) View the pruned tree (your tree will look different): > prp(mod.pruned, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE, faclen = 4, varlen = 8, shadow.col = "gray") Use the pruned model to predict for a validation 
partition (note the minus sign before train.idx to consider the cases in the validation partition):
> pred.pruned <- predict(mod.pruned, bn[-train.idx,], type = "class")

Generate the error/classification-confusion matrix:
> table(bn[-train.idx,]$class, pred.pruned, dnn = c("Actual", "Predicted"))
      Predicted
Actual   0   1
     0 213  11
     1  11 176

How it works...

Steps 1 to 3 load the packages, read the data, and identify the cases in the training partition, respectively. In step 3, we set the random seed so that your results should match those that we display.

Step 4 builds the classification tree model:
> mod <- rpart(class ~ ., data = bn[train.idx, ], method = "class", control = rpart.control(minsplit = 20, cp = 0.01))

The rpart() function builds the tree model based on the following:
- Formula specifying the dependent and independent variables
- Dataset to use
- A specification through method="class" that we want to build a classification tree (as opposed to a regression tree)
- Control parameters specified through the control = rpart.control() setting; here we have indicated that the tree should only consider nodes with at least 20 cases for splitting and use the complexity parameter value of 0.01—these two values represent the defaults and we have included these just for illustration

Step 5 produces a textual display of the results.

Step 6 uses the prp() function of the rpart.plot package to produce a nice-looking plot of the tree:
> prp(mod, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE, faclen = 4, varlen = 8, shadow.col = "gray")

- use type=2 to get a plot with every node labeled and with the split label below the node
- use extra=4 to display the probability of each class in the node (conditioned on the node and hence summing to 1); add 100 (hence extra=104) to display the number of cases in the node as a percentage of the total number of cases
- use nn = TRUE to display the node numbers; the root node is node number 1 and node n has child nodes numbered 2n and 2n+1
- use fallen.leaves=TRUE to display all leaf nodes at the bottom of the graph
- use faclen to abbreviate class names in the nodes to a specific maximum length
- use varlen to abbreviate variable names
- use shadow.col to specify the color of the shadow that each node casts

Step 7 prunes the tree to reduce the chance that the model too closely models the training data—that is, to reduce overfitting. Within this step, we first look at the complexity table generated through cross-validation. We then use the table to determine the cutoff complexity level as the largest xerror (cross-validation error) value that is not greater than one standard deviation above the minimum cross-validation error.

Steps 8 through 10 display the pruned tree, use the pruned tree to predict the class for the validation partition, and then generate the error matrix for the validation partition.

There's more...

We discuss in the following an important variation on predictions using classification trees.

Computing raw probabilities

We can generate probabilities in place of classifications by specifying type="prob":
> pred.pruned <- predict(mod.pruned, bn[-train.idx,], type = "prob")

Create the ROC Chart

Using the preceding raw probabilities and the class labels, we can generate a ROC chart with the prediction() and performance() functions from the ROCR package:
> library(ROCR)
> pred <- prediction(pred.pruned[,2], bn[-train.idx,"class"])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)

Using time series objects

In this recipe, we look at various features to create and plot time-series objects.
We will consider data with both a single and multiple time series. Getting ready If you have not already downloaded the data files, do it now and ensure that the files are in your R working directory. How to do it... Read the data. The file has 100 rows and a single column named sales: > s <- read.csv("ts-example.csv") Convert the data to a simplistic time series object without any explicit notion of time: > s.ts <- ts(s) > class(s.ts) [1] "ts" Plot the time series: > plot(s.ts) Create a proper time series object with proper time points: > s.ts.a <- ts(s, start = 2002) > s.ts.a Time Series: Start = 2002 End = 2101 Frequency = 1        sales [1,]   51 [2,]   56 [3,]   37 [4,]   101 [5,]   66 (output truncated) > plot(s.ts.a) > # results show that R treated this as an annual > # time series with 2002 as the starting year The result of the preceding commands is seen in the following graph: To create a monthly time series run the following command: > # Create a monthly time series > s.ts.m <- ts(s, start = c(2002,1), frequency = 12) > s.ts.m        Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2002 51 56 37 101 66 63 45 68 70 107 86 102 2003 90 102 79 95 95 101 128 109 139 119 124 116 2004 106 100 114 133 119 114 125 167 149 165 135 152 2005 155 167 169 192 170 180 175 207 164 204 180 203 2006 215 222 205 202 203 209 200 199 218 221 225 212 2007 250 219 242 241 267 249 253 242 251 279 298 260 2008 269 257 279 273 275 314 288 286 290 288 304 291 2009 314 290 312 319 334 307 315 321 339 348 323 342 2010 340 348 354 291 > plot(s.ts.m) # note x axis on plot The following plot can be seen as a result of the preceding commands: > # Specify frequency = 4 for quarterly data > s.ts.q <- ts(s, start = 2002, frequency = 4) > s.ts.q        Qtr1 Qtr2 Qtr3 Qtr4 2002   51   56   37 101 2003   66   63   45   68 2004   70 107   86 102 2005   90 102   79   95 2006   95 101 128 109 (output truncated) > plot(s.ts.q) Query time series objects (we use the s.ts.m object we created in the previous step): > # When does the series start? > start(s.ts.m) [1] 2002   1 > # When does it end? > end(s.ts.m) [1] 2010   4 > # What is the frequency? > frequency(s.ts.m) [1] 12 Create a time series object with multiple time series. This data file contains US monthly consumer prices for white flour and unleaded gas for the years 1980 through 2014 (downloaded from the website of the US Bureau of Labor Statistics): > prices <- read.csv("prices.csv") > prices.ts <- ts(prices, start=c(1980,1), frequency = 12) Plot a time series object with multiple time series: > plot(prices.ts) The plot in two separate panels appears as follows: > # Plot both series in one panel with suitable legend > plot(prices.ts, plot.type = "single", col = 1:2) > legend("topleft", colnames(prices.ts), col = 1:2, lty = 1) Two series plotted in one panel appear as follow: How it works... Step 1 reads the data. Step 2 uses the ts function to generate a time series object based on the raw data. Step 3 uses the plot function to generate a line plot of the time series. We see that the time axis does not provide much information. Time series objects can represent time in more friendly terms. Step 4 shows how to create time series objects with a better notion of time. It shows how we can treat a data series as an annual, monthly, or quarterly time series. The start and frequency parameters help us to control these data series. 
Although the time series we provide is just a list of sequential values, in reality our data can have an implicit notion of time attached to it. For example, the data can be annual numbers, monthly numbers, or quarterly ones (or something else, such as 10-second observations of something). Given just the raw numbers (as in our data file, ts-example.csv), the ts function cannot figure out the time aspect and by default assumes no secondary time interval at all. We can use the frequency parameter to tell ts how to interpret the time aspect of the data. The frequency parameter controls how many secondary time intervals there are in one major time interval. If we do not explicitly specify it, by default frequency takes on a value of 1. Thus, the following code treats the data as an annual sequence, starting in 2002: > s.ts.a <- ts(s, start = 2002) The following code, on the other hand, treats the data as a monthly time series, starting in January 2002. If we specify the start parameter as a number, then R treats it as starting at the first subperiod, if any, of the specified start period. When we specify frequency as different from 1, then the start parameter can be a vector such as c(2002,1) to specify the series, the major period, and the subperiod where the series starts. c(2002,1) represent January 2002: > s.ts.m <- ts(s, start = c(2002,1), frequency = 12) Similarly, the following code treats the data as a quarterly sequence, starting in the first quarter of 2002: > s.ts.q <- ts(s, start = 2002, frequency = 4) The frequency values of 12 and 4 have a special meaning—they represent monthly and quarterly time sequences. We can supply start and end, just one of them, or none. If we do not specify either, then R treats the start as 1 and figures out end based on the number of data points. If we supply one, then R figures out the other based on the number of data points. While start and end do not play a role in computations, frequency plays a big role in determining seasonality, which captures periodic fluctuations. If we have some other specialized time series, we can specify the frequency parameter appropriately. Here are two examples:   With measurements taken every 10 minutes and seasonality pegged to the hour, we should specify frequency as 6   With measurements taken every 10 minutes and seasonality pegged to the day, use frequency = 24*6 (6 measurements per hour times 24 hours per day) Step 5 shows the use of the functions start, end, and frequency to query time series objects. Steps 6 and 7 show that R can handle data files that contain multiple time series. Applying functions to subsets of a vector The tapply function applies a function to each partition of the dataset. Hence, when we need to evaluate a function over subsets of a vector defined by a factor, tapply comes in handy. Getting ready Download the files and store the auto-mpg.csv file in your R working directory. Read the data and create factors for the cylinders variable: > auto <- read.csv("auto-mpg.csv", stringsAsFactors=FALSE) > auto$cylinders <- factor(auto$cylinders, levels = c(3,4,5,6,8),   labels = c("3cyl", "4cyl", "5cyl", "6cyl", "8cyl")) How to do it... To apply functions to subsets of a vector, follow these steps: Calculate mean mpg for each cylinder type: > tapply(auto$mpg,auto$cylinders,mean)      3cyl     4cyl     5cyl     6cyl     8cyl 20.55000 29.28676 27.36667 19.98571 14.96311 We can even specify multiple factors as a list. 
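As a quick optional check (not part of the original recipe), you can confirm that the factor levels were applied before moving on:

> table(auto$cylinders)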
The following example shows only one factor since the out file has only one, but it serves as a template that you can adapt: > tapply(auto$mpg,list(cyl=auto$cylinders),mean)   cyl    3cyl     4cyl     5cyl     6cyl     8cyl 20.55000 29.28676 27.36667 19.98571 14.96311 How it works... In step 1 the mean function is applied to the auto$mpg vector grouped according to the auto$cylinders vector. The grouping factor should be of the same length as the input vector so that each element of the first vector can be associated with a group. The tapply function creates groups of the first argument based on each element's group affiliation as defined by the second argument and passes each group to the user-specified function. Step 2 shows that we can actually group by several factors specified as a list. In this case, tapply applies the function to each unique combination of the specified factors. There's more... The by function is similar to tapply and applies the function to a group of rows in a dataset, but by passing in the entire data frame. The following examples clarify this. Applying a function on groups from a data frame In the following example, we find the correlation between mpg and weight for each cylinder type: > by(auto, auto$cylinders, function(x) cor(x$mpg, x$weight)) auto$cylinders: 3cyl [1] 0.6191685 --------------------------------------------------- auto$cylinders: 4cyl [1] -0.5430774 --------------------------------------------------- auto$cylinders: 5cyl [1] -0.04750808 --------------------------------------------------- auto$cylinders: 6cyl [1] -0.4634435 --------------------------------------------------- auto$cylinders: 8cyl [1] -0.5569099 Summary Being an extensible system, R's functionality is divided across numerous packages with each one exposing large numbers of functions. Even experienced users cannot expect to remember all the details off the top of their head. In this article, we went through a few techniques using which R helps analyze data and visualize the results. Resources for Article: Further resources on this subject: Combining Vector and Raster Datasets [article] Factor variables in R [article] Big Data Analysis (R and Hadoop) [article]

Upgrading VMware Virtual Infrastructure Setups

Packt
04 Jun 2015
13 min read
In this article by Kunal Kumar and Christian Stankowic, authors of the book VMware vSphere Essentials, you will learn how to correctly upgrade VMware virtual infrastructure setups. (For more resources related to this topic, see here.)

This article will cover the following topics:
- Prerequisites and preparations
- Upgrading vCenter Server
- Upgrading ESXi hosts
- Additional steps after upgrading
- An example scenario

Let's start with a realistic scenario that is often found in data centers these days. I assume that your virtual infrastructure consists of components such as:
- Multiple VMware ESXi hosts
- Shared storage (NFS or Fibre-channel)
- VMware vCenter Server and vSphere Update Manager

In this example, a cluster consisting of two ESXi hosts (esxi1 and esxi2) is running VMware ESXi 5.5. On a virtual machine (vc1), a Microsoft Windows Server system is running vCenter Server and vSphere Update Manager (vUM) 5.5. This article is written as a step-by-step guide to upgrade these particular vSphere components to the most recent version, which is 6.0.

Example scenario consisting of two ESXi hosts with shared storage and vCenter Server

Prerequisites and preparations

Before we start the upgrade, we need to fulfill the following prerequisites:
- Ensure ESXi version support by the hardware vendor
- Guarantee ESXi version support on the used hardware by VMware
- Create a backup of the ESXi images and vCenter Server

First of all, we need to refer to our hardware vendor's support matrix to ensure that our physical hosts running VMware ESXi are supported in the new release. Hardware vendors evaluate their systems before approving upgrades to customers. As an example, Dell offers a comprehensive list for their PowerEdge servers at http://topics-cdn.dell.com/pdf/vmware-esxi-6.x_Reference%20Guide2_en-us.pdf.

Here are some additional links for alternative hardware vendors:
- Hewlett-Packard: http://h17007.www1.hp.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
- IBM: http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/vmware.html
- Cisco UCS: http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html

When using Fibre-channel-based storage systems, you might also need to check that vendor's support matrix. Please check your vendor's website or contact their support for this information. VMware also offers a comprehensive list of tested hardware setups at http://www.vmware.com/resources/compatibility/pdf/vi_systems_guide.pdf. In their Compatibility Guide portal, VMware enables customers to browse for particular server systems—this information might be more recent than the aforementioned PDF file.

Creating a backup of ESXi

Before upgrading our ESXi hosts, we also need to make sure that we have a valid backup. In case things go wrong, we might need this backup to restore the previous ESXi version. For creating a backup of the hard disk ESXi is installed on, there are plenty of tools in the market that implement image-based backups. One possible solution, which is free, is Clonezilla. Clonezilla is a Linux-based live medium that can easily create backup images of hard disks. To create a backup using Clonezilla, proceed with the following steps: Download the Clonezilla ISO image from their website. Make sure you select the AMD64 architecture and the ISO file format. Enable maintenance mode for the particular ESXi host. Make sure you migrate virtual machines to alternative nodes or power them off. Connect the ISO file to the ESXi host and boot from CD.
Also, connect a USB drive to the host. This drive will be used to store the backup. Boot from CD and select Clonezilla live. Wait until the boot process completes. When prompted, select your keyboard layout (for example, en_US.utf8) and select Don't touch keymap. In the Start Clonezilla menu, select Start_Clonezilla and device-image. This mode creates an image of the medium ESXi is running on and stores it in the USB storage. Select local_dev and choose the USB storage connected to the host from the list in the next step. Select a folder for storing the backup (optional). Select Beginner and savedisk to store the entire disk ESXi resides on as an image. Enter a name for the backup. Select the hard disk containing the ESXi installation and proceed. You can also specify whether Clonezilla should check the image after creating it (highly recommended). Afterwards, confirm the backup process. The backup job will start immediately. Once the backup completes, select reboot from the menu to reboot the host. A running backup job in Clonezilla To restore a backup using Clonezilla, perform the following steps after booting the Clonezilla media: Complete steps 1 to 8 from the previous guide. Select Beginner and restoredisk to restore the entire disk. Select the image from the USB storage and the hard drive the image should be restored on. Acknowledge the restore process. Once the restoration completes, select reboot from the menu to reboot the host. For the system running vCenter Server, we can easily create a VM snapshot, or also use Clonezilla if a physical machine is used instead. The upgrade path It is very important to execute the particular upgrade tasks in the following order: Upgrade VMware vCenter Server Upgrade the particular ESXi hosts Reformat or upgrade the VMFS data stores (if applicable) Upgrading additional components, such as distributed virtual switches, or additional appliances The first step is to upgrade vCenter Server. This is necessary to ensure that we are able to manage our ESXi hosts after upgrading them. Newer vCenter Server versions are downward compatible with numerous ESXi versions. To double-check this, we can look up the particular version support by browsing VMware's Product Interoperability Matrix on their website. Click on Solution Interoperability, choose VMware vCenter Server from the drop-down menu, and select the version you want to upgrade to. In our example, we will choose the most recent release, 6.0, and select VMware ESX/ESXi from the Add Platform/Solution drop-down menu. VMware Product Interoperability Matrix for vCenter Server and ESXi vCenter Server 6.0 supports management of VMware ESXi 5.0 and higher. We need to ensure the same support agreement for any other used products, such as these: VMware vSphere Update Manager VMware vCenter Operations (if applicable) VMware vSphere Data Protection In other words, we need to upgrade all additional vSphere and vCenter Server components to ensure full functionality. Upgrading vCenter Server Upgrading vCenter Server is the most crucial step, as this is our central management platform. The upgrade process varies according to the chosen architecture. Upgrading Windows-based vCenter Server installations is quite easy, as the installation supports in-place upgrades. When using the vCenter Server Appliance (vCSA), there is no in-place upgrade; it is necessary to deploy a new vCSA and import the settings from the old installation. This process varies between the particular vCSA versions. 
For upgrading from vCSA 5.0 or 5.1 to 5.5, VMware offers a comprehensive article at http://kb.vmware.com/kb/2058441. To upgrade vCenter Server 5.x on Windows to 6.0 using the Easy Install method, proceed with the following steps: Mount the vCenter Server 6.x installation media (VMware-VIMSetup-all-6.0.0-xxx.iso) on the server running vCenter Server. Wait until the installation wizard starts; if it doesn't start, double-click on the CD/DVD icon in Windows Explorer. Select vCenter Server for Windows and click on Install to start the installation utility. Accept the End-User License Agreement (EULA). Enter the current vCenter Single-Sign-On password and proceed with the next step. The installation utility begins to execute pre-upgrade checks; this might take some time. If you're running vCenter Server along with Microsoft SQL Server Express Edition, the database will be migrated to VMware vPostgres. Review and change (if necessary) the network ports of your vCenter Server installation. If needed, change the directories for vCenter Server and the Embedded Platform Controller (ESC). Carefully review the upgrade information displayed in the wizard. Also verify that you have created a backup of your system and the database. Then click on Upgrade to start the upgrade. After the upgrade, vSphere Web Client can be used to connect to the upgraded vCenter Server system. Also note that the Microsoft SQL Server Express Edition database is not used anymore. Upgrading ESXi hosts Upgrading ESXi hosts can be done using two methods: Using the installation media from the VMware website vSphere Update Manager If you need to upgrade a large number of ESXi hosts, I recommend that you use vSphere Update Manager to save time, as it can automate the particular steps. For smaller landscapes, using the installation media is easier. For using vUM to upgrade ESXi hosts, VMware offers a guide on their knowledge base at http://kb.vmware.com/kb/1019545. In order to upgrade an ESXi host using the installation media, perform the following steps: First of all, enable maintenance mode for the particular ESXi host. Make sure you migrate the virtual machines to alternative nodes or power them off. Connect the installation media to the ESXi host and boot from CD. Once the setup utility becomes available, press Enter to start the installation wizard. Accept the End-User License Agreement (EULA) by pressing F11. Select the disk containing the current ESXi installation. In the ESXi found dialog, select Upgrade. Review the installation information and press F11 to start the upgrade. After the installation completes, press Enter to reboot the system. After the system has rebooted, it will automatically reconnect to vCenter Server. Select the particular ESXi host to see whether the version has changed. In this example, the ESXi host has been successfully upgraded to version 6.0: Version information of an updated ESXi host running release 6.0 Repeat all of these steps for all the remaining ESXi hosts. Note that running an ESXi cluster with mixed versions should only be a temporary solution. It is not recommended to mix various ESXi releases in production usage, as the various features of ESXi might not perform as expected in mixed clusters. 
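As an optional verification step that is not part of the original procedure, the installed version can also be double-checked from the ESXi Shell or an SSH session on each host, for example:

# vmware -v
# esxcli system version get

Both commands report the running ESXi version and build, which is a quick way to confirm that every host in the cluster has been brought to the same release.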
Additional steps After upgrading vCenter Server and our ESXi hosts, there are additional steps that can be done: Reformating or upgrading VMFS data stores Upgrading distributed virtual switches Upgrading virtual machine's hardware versions Upgrading VMFS data stores VMware's VMFS (Virtual Machine Filesystem) is the most used filesystem for shared storage. It can be used along with local storage, iSCSI, or Fibre-channel storage. Particularly, ESX(i) releases support various versions of VMFS. Let's take a look at the major differences:   VMFS 2   VMFS 3   VMFS 5   Supported by ESX 2.x, ESXi 3.x/4.x (read-only) ESX(i) 3.x and higher ESXi 5.x and higher Block size(s) 1, 8, 64, or 256 MB 1, 2, 4, or 8 MB 1 MB (fixed) Maximum file size 1 MB block size: 456 MB 8 MB block size: 2.5 TB 64 MB block size: 28.5 TB 256 MB block size: 64 TB 1 MB block size: 256 MB 2 MB block size: 512 GB 4 MB block size: 1 TB 8 MB block size: 2 TB 62 TB Files per volume Ca. 256 (no directories supported) Ca. 37,720 Ca. 130,690 When migrating from an ESXi version such as 4.x or older, it is possible to upgrade VMFS data stores to version 5. VMFS 2 cannot be upgraded to VMFS 5; it first needs to be upgraded to VMFS 3. To enable the upgrade, a VMFS 2 volume must not have a block size more than 8 MB, as VMFS 3 only supports block sizes up to 8 MB. In comparison with older VMFS versions, VMFS 5 supports larger file sizes and more files per volume. I highly recommend that you reformat VMFS data stores instead of upgrading them, as the upgrade does not change the filesystem's block size. Because of this limitation, you won't benefit from all the new VMFS 5 features after an upgrade. To upgrade a VMFS 3 volume to VMFS 5, perform these steps: Log in to vSphere Web Client. Go to the Storage pane. Click on the data store to upgrade and go to Settings under the Manage tab. Click on Upgrade to VMFS5. Then click on OK to start the upgrade. VMware vNetwork Distributed Switch When using vNetwork Distributed Switches (also often called dvSwitches) it is recommended to perform an upgrade to the latest version. In comparison with vNetwork Standard Switches (also called vSwitches), dvSwitches are created at the vCenter Server level and replicated to all subscribed ESXi hosts. When creating a dvSwitch, the administrator can choose between various dvSwitch versions. After upgrading vCenter Server and the ESXi hosts, additional features can be unlocked by upgrading the dvSwitch. Let's take a look at some commonly used dvSwitch versions:   vDS 5.0   vDS 5.1   vDS 5.5   vDS 6.0   Compatible with ESXi 5.0 and higher ESXi 5.1 and higher ESXi 5.5 and higher ESXi 6.0 Common features Network I/O Control, load-based teaming, traffic shaping, VM port blocking, PVLANs (private VLANs), network vMotion, and port policies Additional features Network resource pools, NetFlow, and port mirroring VDS 5.0 +, management network rollback, network health checks, enhanced port mirroring, and LACP (Link Aggregation Control Protocol) VDS 5.1 +, traffic filtering, and enhanced LACP functionality VDS 5.5 +, multicast snooping, and Network I/O Control version 3 (bandwidth guarantee) It is also possible to use the old version furthermore, as vCenter Server is downward compatible with numerous dvSwitch versions. Upgrading a dvSwitch is a task that cannot be undone. During the upgrade, it is possible that virtual machines will lose their network connectivity for some seconds. 
After the upgrade, older ESXi hosts will not be able to participate in the distributed switch setup. To upgrade a dvSwitch, perform the following steps: Log in to vSphere Web Client. Go to the Networking pane and select the dvSwitch to upgrade. From the Actions menu, select Upgrade Distributed Switch and follow the wizard. After upgrading the dvSwitch, you will notice that the version has changed:

Version information of a dvSwitch running VDS 6.0

Virtual machine hardware version

Every virtual machine is created with a virtual machine hardware version specified (also called VMHW or vHW). A vHW version defines a set of particular limitations and features, such as controller types or network cards. To benefit from the new virtual machine features, it is sufficient to upgrade vHW versions. ESXi hosts support a range of vHW versions, but it is always advisable to use the most recent vHW version. Once a vHW version is upgraded, particular virtual machines cannot be started on older ESXi versions that don't support the vHW version. Let's take a deeper look at some popular vHW versions:

vSphere 4.1 | vSphere 5.1 | vSphere 5.5 | vSphere 6.0
Maximum vHW: 7 | 9 | 10 | 11
Virtual CPUs: 8 | 64 | 128
Virtual RAM: 255 GB | 1 TB | 4 TB
vDisk size: 2 TB | 62 TB
SCSI adapters / targets: 4/60
SATA adapters / targets: Not supported | 4/30
Parallel / Serial Ports: 3/4 | 3/32
USB controllers / devices per VM: 1/20 (USB 1.x + 2.x) | 1/20 (USB 1.x, 2.x + 3.x)

The upgrade cannot be undone. Also, it might be necessary to update VMware Tools and the drivers of the operating system running in the virtual machine.

Summary

In this article, we learnt how to correctly upgrade VMware virtual infrastructure setups. If you want to know more about VMware vSphere and virtual infrastructure setups, go ahead and get your copy of Packt Publishing's book VMware vSphere Essentials.

Resources for Article:
Further resources on this subject:
Networking [article]
The Design Documentation [article]
VMware View 5 Desktop Virtualization [article]

Events, Notifications, and Reporting

Packt
04 Jun 2015
55 min read
In this article by Martin Wood, the author of the book, Mastering ServiceNow, has discussed about communication which is a key part of any business application. Not only does the boss need to have an updated report by Monday, but your customers and users also want to be kept informed. ServiceNow helps users who want to know what's going on. In this article, we'll explore the functionality available. The platform can notify and provide information to people in a variety of ways: Registering events and creating Scheduled Jobs to automate functionality Sending out informational e-mails when something happens Live dashboards and homepages showing the latest reports and statistics Scheduled reports that help with handover between shifts Capturing information with metrics Presenting a single set of consolidated data with database views (For more resources related to this topic, see here.) Dealing with events Firing an event is a way to tell the platform that something happened. Since ServiceNow is a data-driven system, in many cases, this means that a record has been updated in some way. For instance, maybe a guest has been made a VIP, or has stayed for 20 nights. Several parts of the system may be listening for an event to happen. When it does, they perform an action. One of these actions may be sending an e-mail to thank our guest for their continued business. These days, e-mail notifications don't need to be triggered by events. However, it is an excellent example. When you fire an event, you pass through a GlideRecord object and up to two string parameters. The item receiving this data can then use it as necessary, so if we wanted to send an e-mail confirming a hotel booking, we have those details to hand during processing. Registering events Before an event can be fired, it must be known to the system. We do this by adding it to Event Registry [sysevent_register], which can be accessed by navigating to System Policy > Events > Registry. It's a good idea to check whether there isn't one you can use before you add a new one. An event registration record consists of several fields, but most importantly a string name. An event can be called anything, but by convention it is in a dotted namespace style format. Often, it is prefixed by the application or table name and then by the activity that occurred. Since a GlideRecord object accompanies an event, the table that the record will come from should also be selected. It is also a good idea to describe your event and what will cause it in the Description and Fired by fields. Finally, there is a field that is often left empty, called Queue. This gives us the functionality to categorize events and process them in a specific order or frequency. Firing an event Most often, a script in a Business Rule will notice that something happens and will add an event to the Event [sysevent] queue. This table stores all of the events that have been fired, if it has been processed, and what page the user was on when it happened. As the events come in, the platform deals with them in a first in, first out order by default. It finds everything that is listening for this specific event and executes them. That may be an e-mail notification or a script. By navigating to System Policy > Events > Event Log, you can view the state of an event, when it was added to the queue, and when it was processed. To add an event to the queue, use the eventQueue function of GlideSystem. 
It accepts four parameters: the name of the event, a GlideRecord object, and two run time parameters. These can be any text strings, but most often are related to the user that caused the event. Sending an e-mail for new reservations Let's create an event that will fire when a Maintenance task has been assigned to one of our teams. Navigate to System Policy > Events > Registry. Click on New and set the following fields:     Event name: maintenance.assigned     Table: Maintenance [u_maintenance] Next, we need to add the event to the Event Queue. This is easily done with a simple Business Rule:     Name: Maintenance assignment events     Table: Maintenance [u_maintenance]     Advanced: <ticked>     When: after Make sure to always fire events after the record has been written to the database. This stops the possibility of firing an event even though another script has aborted the action. Insert: <ticked> Update: <ticked> Filter Conditions: Assignment group – changes Assignment group – is not empty Assigned to – is empty This filter represents when a task is sent to a new group but someone hasn't yet been identified to own the work. Script: gs.eventQueue('maintenance.assigned', current, gs.getUserID(), gs.getUserName()); This script follows the standard convention when firing events—passing the event name, current, which contains the GlideRecord object the Business Rule is working with, and some details about the user who is logged in. We'll pick this event up later and send an e-mail whenever it is fired. There are several events, such as <table_name>.view, that are fired automatically. A very useful one is the login event. Take a look at the Event Log to see what is happening. Scheduling jobs You may be wondering how the platform processes the event queue. What picks them up? How often are they processed? In order to make things happen automatically, ServiceNow has a System Scheduler. Processing the event queue is one job that is done on a repeated basis. ServiceNow can provide extra worker nodes that only process events. These shift the processing of things such as e-mails onto another system, enabling the other application nodes to better service user interactions. To see what is going on, navigate to System Scheduler > Scheduled Jobs > Today's Scheduled Jobs. This is a link to the Schedule Item [sys_trigger] table, a list of everything the system is doing in the background. You will see a job that collects database statistics, another that upgrades the instance (if appropriate), and others that send and receive e-mails or SMS messages. You should also spot one called events process, which deals with the event queue. A Schedule Item has a Next action date and time field. This is when the platform will next run the job. Exactly what will happen is specified through the Job ID field. This is a reference to the Java class in the platform that will actually do the work. The majority of the time, this is RunScriptJob, which will execute some JavaScript code. The Trigger type field specifies how often the job will repeat. Most jobs are run repetitively, with events process set to run every 30 seconds. Others run when the instance is started—perhaps to preload the cache. Another job that is run on a periodic basis is SMTP Sender. Once an e-mail has been generated and placed in the sys_email table, the SMTP Sender job performs the same function as many desktop e-mail clients: it connects to an e-mail server and asks it to deliver the message. It runs every minute by default. 
This schedule has a direct impact on how quickly our e-mail will be sent out. There may be a delay of up to 30 seconds in generating the e-mail from an event, and a further delay of up to a minute before the e-mail is actually sent. Other jobs may process a particular event queue differently. Events placed into the metric queue will be worked with after 5 seconds. Adding your own jobs The sys_trigger table is a backend data store. It is possible to add your own jobs and edit what is already there, but I don't recommend it. Instead, there is a more appropriate frontend: the Scheduled Job [sysauto] table. The sysauto table is designed to be extended. There are many things that can be automated in ServiceNow, including data imports, sending reports, and creating records, and they each have a table extended from sysauto. Once you create an entry in the sysauto table, the platform creates the appropriate record in the sys_trigger table. This is done through a call in the automation synchronizer Business Rule. Each table extended from sysauto contains fields that are relevant to its automation. For example, a Scheduled Email of Report [sysauto_report] requires e-mail addresses and reports to be specified. Creating events every day Navigate to System Definition > Scheduled Jobs. Unfortunately, the sys_trigger and sysauto tables have very similar module names. Be sure to pick the right one. When you click on New, an interceptor will fire, asking you to choose what you want to automate. Let's write a simple script that will create a maintenance task at the end of a hotel stay, so choose Automatically run a script of your choosing. Our aim is to fire an event for each room that needs cleaning. We'll keep this for midday to give our guests plenty of time to check out. Set the following fields: Name: Clean on end of reservation Time: 12:00:00 Run this script: var res = new GlideRecord('u_reservation'); res.addQuery('u_departure', gs.now()); res.addNotNullQuery('u_room'); res.query(); while (res.next()) { gs.eventQueue('room.reservation_end', res.u_room.getRefRecord()); } Remember to enclose scripts in a function if they could cause other scripts to run. Most often, this is when records are updated, but it is not the case here. Our reliable friend, GlideRecord, is employed to get reservation records. The first filter ensures that only reservations that are ending today will be returned, while the second filter ignores reservations that don't have a room. Once the database has been queried, the records are looped round. For each one, the eventQueue function of GlideSystem is used to add in an event into the event queue. The record that is being passed into the event queue is actually the Room record. The getRefRecord function of GlideElement dot-walks through a reference field and returns a newly initialized GlideRecord object rather than more GlideElement objects. Once the Scheduled Job has been saved, it'll generate the events at midday. But for testing, there is a handy Execute Now UI action. Ensure there is test data that fits the code and click on the button. Navigate to System Policy > Events > Event Log to see the entries. There is a Conditional checkbox with a separate Condition script field. However, I don't often use this; instead, I provide any conditions inline in the script that I'm writing, just like we did here. For anything more than a few lines, a Script Include should be used for modularity and efficiency. 
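To illustrate that last point, here is a minimal sketch of how the scheduled script above could be moved into a Script Include; the class name ReservationEventUtil is an arbitrary choice for this example, not something that ships with the platform:

var ReservationEventUtil = Class.create();
ReservationEventUtil.prototype = {
    initialize: function() {
    },

    // Fire the room.reservation_end event for every reservation ending today
    fireEndOfReservationEvents: function() {
        var res = new GlideRecord('u_reservation');
        res.addQuery('u_departure', gs.now());
        res.addNotNullQuery('u_room');
        res.query();
        while (res.next()) {
            gs.eventQueue('room.reservation_end', res.u_room.getRefRecord());
        }
    },

    type: 'ReservationEventUtil'
};

The Run this script field of the Scheduled Job then shrinks to a single line, new ReservationEventUtil().fireEndOfReservationEvents();, which keeps the logic reusable from other server-side scripts and easier to test.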
Running scripts on events The ServiceNow platform has several items that listen for events. Email Notifications are one, which we'll explore soon. Another is Script Actions. Script Actions is server-side code that is associated with a table and runs against a record, just like a Business Rule. But instead of being triggered by a database action, a Script Action is started with an event. There are many similarities between a Script Action and an asynchronous Business Rule. They both run server-side, asynchronous code. Unless there is a particular reason, stick to Business Rules for ease and familiarity. Just like a Business Rule, the GlideRecord variable called current is available. This is the same record that was passed into the second parameter when gs.eventQueue was called. Additionally, another GlideRecord variable called event is provided. It is initialized against the appropriate Event record on the sysevent table. This gives you access to the other parameters (event.param1 and event.param2) as well as who created the event, when, and more. Creating tasks automatically When creating a Script Action, the first step is to register or identify the event it will be associated with. Create another entry in Event Registry. Event name: room.reservation_end Table: Room [u_room] In order to make the functionality more data driven, let's create another template. Either navigate to System Definition > Templates or create a new Maintenance task and use the Save as Template option in the context menu. Regardless, set the following fields: Name: End of reservation room cleaning Table: Maintenance [u_maintenance] Template: Assignment group: Housekeeping Short description: End of reservation room cleaning Description: Please perform the standard cleaning for the room listed above. To create the Script Action, go to System Policy > Events > Script Actions and use the following details: Name: Produce maintenance tasks Event name: room.reservation_end Active: <ticked> Script: var tsk = new GlideRecord('u_maintenance'); tsk.newRecord(); tsk.u_room = current.sys_id; tsk.applyTemplate('End of reservation room cleaning'); tsk.insert(); This script is quite straightforward. It creates a new GlideRecord object that represents a record in the Maintenance table. The fields are initialized through newRecord, and the Room field is populated with the sys_id of current—which is the Room record that the event is associated with. The applyTemplate function is given the name of the template. It would be better to use a property here instead of hardcoding a template name. Now, the following items should occur every day: At midday, a Scheduled Job looks for any reservations that are ending today For each one, the room.reservation_end event is fired A Script Action will be called, which creates a new Maintenance task The Maintenance task is assigned, through a template, to the Housekeeping group. But how does Housekeeping know that this task has been created? Let's send them an e-mail! Sending e-mail notifications E-mail is ubiquitous. It is often the primary form of communication in business, so it is important that ServiceNow has good support. It is easy to configure ServiceNow to send out communications to whoever needs to know. 
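Before moving on, here is a hedged sketch of the property-driven variation of the Script Action mentioned above. The property name hotel.cleaning.template is invented for this example and would need to be created as a system property first; everything else uses the same calls as the original script:

var tsk = new GlideRecord('u_maintenance');
tsk.newRecord();
tsk.u_room = current.sys_id;
// Read the template name from a property, falling back to the original hardcoded value
tsk.applyTemplate(gs.getProperty('hotel.cleaning.template', 'End of reservation room cleaning'));
tsk.insert();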
There are a few general use cases for e-mail notifications: Action: Asking the receiver to do some work Informational: Giving the receiver an update or some data Approval: Asking for a decision While this is similar enough to an action e-mail, it is a common enough scenario to make it independent. We'll work through these scenarios in order to understand how ServiceNow can help. There are obviously a lot more ways you can use e-mails. One of them is for a machine-to-machine integration, such as e-bonding. It is possible to do this in ServiceNow, but it is not the best solution. Setting e-mail properties A ServiceNow instance uses standard protocols to send and receive e-mail. E-mails are sent by connecting to an SMTP server with a username and password, just like Outlook or any other e-mail client. When an instance is provisioned, it also gets an e-mail account. If your instance is available at instance.service-now.com through the Web, it has an e-mail address of instance@service-now.com. This e-mail account is not unusual. It is accessible via POP to receive mail, and uses SMTP to send it. Indeed, any standard e-mail account can be used with an instance. Navigate to System Properties > Email to investigate the settings. The properties are unusually laid out in two columns, for sending and receiving for the SMTP and POP connections. When you reach the page, the settings will be tested, so you can immediately see if the platform is capable of sending or receiving e-mails. Before you spend time configuring Email Notifications, make sure the basics work! ServiceNow will only use one e-mail account to send out e-mails, and by default, will only check for new e-mails in one account too. Tracking sent e-mails in the Activity Log One important feature of Email Notifications is that they can show up in the Activity Log if configured. This means that all e-mails associated with a ticket are associated and kept together. This is useful when tracking correspondence with a Requester. To configure the Activity Log, navigate to a Maintenance record. Right-click on the field and choose Personalize Activities. At the bottom of the Available list is Sent/Received Emails. Add it to the Selected list and click on Save. Once an e-mail has been sent out, check back to the Activity Formatter to see the results. Assigning work Our Housekeeping team is equipped with the most modern technology. Not only are they users of ServiceNow, but they have mobile phones that will send and receive e-mails. They have better things to do than constantly refresh the web interface, so let's ensure that ServiceNow will come to them. One of the most common e-mail notifications is for ServiceNow to inform people when they have been assigned a task. It usually gives an overview and a link to view more details. This e-mail tells them that something needs to happen and that ServiceNow should be updated with the result. Sending an e-mail notification on assignment When our Maintenance tasks have the Assignment group field populated, we need the appropriate team members to be aware. We are going to achieve this by sending an e-mail to everyone in that group. At Gardiner Hotels, we empower our staff: they know that one member of the team should pick the task up and own it by setting the Assigned to field to themselves and then get it done. Navigate to System Policy > Email > Notifications. You will see several examples that are useful to understand the basic configuration, but we'll create our own. Click on New. 
The Email Notifications form is split into three main sections: When to send, Who will receive, and What it will contain. Some options are hidden in a different view, so click on Advanced view to see them all. Start off by giving the basic details: Name: Group assignment Table: Maintenance [u_maintenance] Now, let's see each of the sections of Email Notifications form in detail, in the following sections. When to send This section gives you a choice of either using an event to determine which record should be worked with or for the e-mail notification system to monitor the table directly. Either way, Conditions and Advanced conditions lets you provide a filter or a script to ensure you only send e-mails at the right time. If you are using an event, the event must be fired and the condition fields satisfied for the e-mail to be sent. The Weight field is often overlooked. A single event or record update may satisfy the condition of multiple Email Notifications. For example, a common scenario is to send an e-mail to the Assignment group when it is populated and to send an e-mail to the Assigned to person when that is populated. But what if they both happen at the same time? You probably don't want the Assignment group being told to pick up a task if it has already been assigned. One way is to give the Assignment group e-mail a higher weight: if two e-mails are being generated, only the lower weight will be sent. The other will be marked as skipped. Another way to achieve this scenario is through conditions. Only send the Assignment group e-mail if the Assigned to field is empty. Since we've already created an event, let's use it. And because of the careful use of conditions in the Business Rule, it only sends out the event in the appropriate circumstances. That means no condition is necessary in this Email Notification. Send when: Event is fired Event name: maintenance.assigned Who will receive Once we've determined when an e-mail should be sent, we need to know who it will go to. The majority of the time, it'll be driven by data on the record. This scenario is exactly that: the people who will receive the e-mail are those in the Assignment group field on the Maintenance task. Of course, it is possible to hardcode recipients and the system can also deliver e-mails to Users and Groups that have been sent as a parameter when creating the event. Users/Groups in fields: Assignment group You can also use scripts to specify the From, To, CC, and BCC of an e-mail. The wiki here contains more information: http://wiki.servicenow.com/?title=Scripting_for_Email_Notifications Send to event creator When someone comes to me and says: "Martin, I've set up the e-mail notification, but it isn't working. Do you know why?", I like to put money on the reason. I very often win, and you can too. Just answer: "Ensure Send to event creator is ticked and try again". The Send to event creator field is only visible on the Advanced view, but is the cause of this problem. So tick Send to event creator. Make sure this field is ticked, at least for now. If you do not, when you test your e-mail notifications, you will not receive your e-mail. Why? By default, the system will not send confirmation e-mails. If you were the person to update a record and it causes e-mails to be sent, and it turns out that you are one of the recipients, it'll go to everyone other than you. The reasoning is straightforward: you carried out the action so why do you need to be informed that it happened? 
This cuts down on unnecessary e-mails and so is a good thing. But it confuses everyone who first comes across it. If there is one tip I can give to you in this article, it is this – tick the Send to event creator field when testing e-mails. Better still, test realistically! What it will contain The last section is probably the simplest to understand, but the one that takes most time: deciding what to send. The standard view contains just a few fields: a space to enter your message, a subject line, and an SMS alternate field that is used for text messages. Additionally, there is an Email template field that isn't often used but is useful if you want to deliver the same content in multiple e-mail messages. View them by navigating to System Policy > Email > Templates. These fields all support variable substitution. This is a special syntax that instructs the instance to insert data from the record that the e-mail is triggered for. This Maintenance e-mail can easily contain data from the Maintenance record. This lets you create data-driven e-mails. I like to compare it to a mail-merge system; you have some fixed text, some placeholders, and some data, and the platform puts them all together to produce a personalized e-mail. By default, the message will be delivered as HTML. This means you can make your messages look more styled by using image tags and font controls, among other options. Using variable substitution The format for substitution is ${variable}. All of the fields on the record are available as variables, so to include the Short description field in an e-mail, use ${short_description}. Additionally, you can dot-walk. So by having ${assigned_to.email} in the message, you insert the e-mail address of the user that the task is assigned to. Populate the fields with the following information and save: Subject: Maintenance task assigned to your group Message HTML: Hello ${assignment_group}. Maintenance task ${number} has been assigned to your group, for room: ${u_room}. Description: ${description} Please assign to a team member here: ${URI} Thanks! To make this easier, there is a Select variables section on the Message HTML and SMS alternate fields that will create the syntax in a single click. But don't forget that variable substitution is available for the Subject field too. In addition to adding the value of fields, variable substitution like the following ones also makes it easy to add HTML links. ${<reference field>.URI} will create an HTML link to the reference field, with the text LINK ${<reference field>.URI_REF} will create an HTML link, but with the display value of the record as the text Linking to CMS sites is possible through ${CMS_URI+<site>/<page>} Running scripts in e-mail messages If the variables aren't giving you enough control, like everywhere else in ServiceNow, you can add a script. To do so, create a new entry in the Email Scripts [sys_script_email] table, which is available under System Policy > Email > Notification Email Scripts. Typical server-side capability is present, including the current GlideRecord variable. To output text, use the print function of the template object. For example: template.print('Hello, world!'); Like a Script Include, the Name field is important. Call the script by placing ${mail_script:<name>} in the Message HTML field in the e-mail. An object called email is also available. This gives much more control with the resulting e-mail, giving functions such as setImportance, addAddress, and setReplyTo. 
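As a rough sketch of how these pieces fit together, the following mail script prints a short summary of the Maintenance task and flags the message as important. The script name and the exact fields referenced are assumptions for illustration; adjust them to whatever exists on your table:

// Notification Email Script (hypothetical name: maintenance_summary)
// Referenced from a message body with ${mail_script:maintenance_summary}
template.print("Task " + current.getDisplayValue("number") + " summary<br />");
template.print("Room: " + current.getDisplayValue("u_room") + "<br />");
template.print("Short description: " + current.getDisplayValue("short_description") + "<br />");

// The email object changes the outgoing message itself
email.setImportance("high");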
This wiki has more details: http://wiki.servicenow.com/?title=Scripting_for_Email_Notifications. Controlling the watermark Every outbound mail contains a reference number embedded into the body of the message, in the format Ref:MSG0000100. This is very important for the inbound processing of e-mails, as discussed in a later section. Some options are available to hide or remove the watermark, but this may affect how the platform treats a reply. Navigating to System Mailboxes > Administration > Watermarks shows a full list of every watermark and the associated record and e-mail. Including attachments and other options There are several other options to control how an e-mail is processed: Include Attachments: It will copy any attachments from the record into the e-mail. There is no selection available: it simply duplicates each one every time. You probably wouldn't want this option ticked on many e-mails, since otherwise you will fill up the recipient's inboxes quickly! The attach_links Email Script is a good alternative—it gives HTML links that will let an interested recipient download the file from the instance. Importance: This allows a Low or High priority flag to be set on an e-mail From and Reply-To fields: They'll let you configure who the e-mail purports to be from, on a per–e-mail basis. It is important to realize that this is e-mail spoofing: while the e-mail protocols accept this, it is often used by spam to forge a false address. Sending informational updates Many people rely on e-mails to know what is going on. In addition to telling users when they need to do work, ServiceNow can keep everyone informed as to the current situation. This often takes the form of one of these scenarios: Automatic e-mails, often based on a change of the State field Completely freeform text, with or without a template A combination of the preceding two: a textual update given by a person, but in a structured template Sending a custom e-mail Sometimes, you need to send an e-mail that doesn't fit into a template. Perhaps you need to attach a file, copy in additional people, or want more control over formatting. In many cases, you would turn to the e-mail client on your desktop, such as Outlook or perhaps even Lotus Notes. But the big disadvantage is that the association between the e-mail and the record is lost. Of course, you could save the e-mail and upload it as an attachment, but that isn't as good as it being part of the audit history. ServiceNow comes with a basic e-mail client built in. In fact, it is just shortcutting the process. When you use the e-mail client, you are doing exactly the same as the Email Notifications engine would, by generating an entry in the sys_email table. Enabling the e-mail client The Email Client is accessed by a little icon in the form header of a record. In order to show it, a property must be set in the Dictionary Entry of the table. Navigate to System Definition > Dictionary and find the entry for the u_maintenance table that does not have an entry in the Column name field. The value for the filter is Table - is - u_maintenance and Column name - is – empty. Click on Advanced view. Ensure the Attributes field contains email_client. Navigate to an existing Maintenance record, and next to the attachments icon is the envelope icon. Click on it to open the e-mail client window. The Email Client is a simple window, and the fields should be obvious. Simply fill them out and click on Send to deliver the mail. You may have noticed that some of the fields were prepopulated. 
You can control what each field initially contains by creating an Email Client Template. Navigate to System Policy > Email > Client Templates, click on New, and save a template for the appropriate table. You can use the variable substitution syntax to place the contents of fields in the e-mail. There is a Conditions field you can add to the form to have the right template used. Quick Messages are a way to let the e-mail user populate Message Text, similar to a record template. Navigate to System Policy > Email > Quick Messages and define some text. These are then available in a dropdown selection field at the top of the e-mail client. The e-mail client is often seized upon by customers who send a lot of e-mail. However, it is a simple solution and does not have a whole host of functionality that is often expected. I've found that this gap can be frustrating. For example, there isn't an easy way to include attachments from the parent record. Instead, often a more automated way to send custom text is useful. Sending e-mails with Additional comments and Work notes The journal fields on the task table are useful enough, allowing you to record results that are then displayed on the Activity log in a who, what, when fashion. But sending out the contents via e-mail makes them especially helpful. This lets you combine two actions in one: documenting information against the ticket and also giving an update to interested parties. The Task table has two fields that let you specify who those people are: the Watch list and the Work notes list. An e-mail notification can then use this information in a structured manner to send out the work note. It can include the contents of the work notes as well as images, styled text, and background information. Sending out Work notes The Work notes field should already be on the Maintenance form. Use Form Design to include the Work notes list field too, placing it somewhere appropriate, such as underneath the Assignment group field. Both the Watch list and the Work notes list are List fields (often referred to as Glide Lists). These are reference fields that contain more than one sys_id from the sys_user table. This makes it is easy to add a requester or fulfiller who is interested in updates to the ticket. What is special about List fields is that although they point towards the sys_user table and store sys_id references, they also store e-mail addresses in the same database field. The e-mail notification system knows all about this. It will run through the following logic: If it is a sys_id, the user record is looked up. The e-mail address in the user record is used. If it is an e-mail address, the user record is searched for. If one is found, any notification settings they have are respected. A user may turn off e-mails, for example, by setting the Notification field to Disabled in their user record. If a user record is not found, the e-mail is sent directly to the e-mail address. Now create a new Email Notification and fill out the following fields: Name: Work notes update Table: Maintenance [u_maintenance] Inserted: <ticked> Updated: <ticked> Conditions: Work notes - changes Users/Groups in fields: Work notes list Subject: New work notes update on ${number} Send to event creator: <ticked> Message: ${number} - ${short_description} has a new work note added.   ${work_notes} This simple message would normally be expanded and made to fit into the corporate style guidelines—use appropriate colors and styles. 
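As an illustration only (the styling and wording here are invented, not a ServiceNow standard), a slightly more polished version of the message body might look like this, still relying on the same variable substitution:

<p style="font-family: Arial, sans-serif;">
    <strong>${number}</strong> - ${short_description} has a new work note added.
</p>
<p style="background-color: #f4f4f4; padding: 8px;">${work_notes}</p>
<p>View the task: ${URI}</p>
<p>Thanks!</p>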
By default, the last three entries in the Work notes field would be included. If this wasn't appropriate, the global property could be updated or a mail script could use getJournalEntry(1) to grab the last one. Refer to this wiki article for more information: http://wiki.servicenow.com/?title=Using_Journal_Fields#Restrict_the_Number_of_Entries_Sent_in_a_Notification. To test, add an e-mail address or a user into the Work notes list, enter something into the Work notes field, and save. Don't forget about Send to event creator! This is a typical example of how, normally, the person doing the action wouldn't need to receive the e-mail update, since they were the one doing it. But set it so it'll work with your own updates. Approving via e-mail Graphical Workflow generates records that someone will need to evaluate and make a decision on. Most often, approvers will want to receive an e-mail notification to alert them to the situation. There are two approaches to sending out an e-mail when an approval is needed. An e-mail is associated with a particular record; and with approvals, there are two records to choose from: The Approval record, asking for your decision. The response will be processed by the Graphical Workflow. The system will send out one e-mail to each person that is requested to approve it. The Task record that generated the Approval request. The system will send out one e-mail in total. Attaching notifications to the task is sometimes helpful, since it gives you access to all the fields on the record without dot-walking. This section deals with how the Approval record itself uses e-mail notifications. Using the Approval table An e-mail that is sent out from the Approval table often contains the same elements: Some text describing what needs approving: perhaps the Short description or Priority. This is often achieved by dot-walking to the data through the Approval for reference field. A link to view the task that needs approval. A link to the approval record. Two mailto links that allow the user to approve or reject through e-mail. This style is captured in the Email Template named change.itil.approve.role and is used in an Email Notification called Approval Request that is against the Approval [sys_approver] table. The mailto links are generated through a special syntax: ${mailto:mailto.approval} and ${mailto:mailto.rejection}. These actually refer to Email Templates themselves (navigate to System Policy > Email > Templates and find the template called mailto.approval). Altogether, these generate HTML code in the e-mail message that looks something like this: <a href="mailto:<instance>@service-now.com.com?subject=Re:MAI0001001 - approve&body=Ref:MSG0000001">Click here to approve MAI0001001</a> Normally, this URL would be encoded, but I've removed the characters for clarity. When this link is clicked on in the receiver's e-mail client, it creates a new e-mail message addressed to the instance, with Re:MAI0001001 - approve in the subject line and Ref:MSG0000001 in the body. If this e-mail was sent, the instance would process it and approve the approval record. A later section, on processing inbound e-mails, shows in detail how this happens. Testing the default approval e-mail In the baseline system, there is an Email Notification called Approval Request. It is sent when an approval event is fired, which happens in a Business Rule on the Approval table. 
It uses the e-mail template mentioned earlier, giving the recipient information and an opportunity to approve it either in their web browser, or using their e-mail client. If Howard Johnson was set as the manager of the Maintenance group, he will be receiving any approval requests generated when the Send to External button is clicked on. Try changing the e-mail address in Howard's user account to your own, but ensure the Notification field is set to Enable. Then try creating some approval requests. Specifying Notification Preferences Every user that has access to the standard web interface can configure their own e-mail preferences through the Subscription Notification functionality. Navigate to Self-Service > My profile and click on Notification Preferences to explore what is available. It represents the Notification Messages [cmn_notif_message] table in a straightforward user interface. The Notification Preferences screen shows all the notifications that the user has received, such as the Approval Request and Work notes update configured earlier. They are organized by device. By default, every user has a primary e-mail device. To never receive a notification again, just choose the Off selection and save. This is useful if you are bombarded by e-mails and would rather use the web interface to see updates! If you want to ensure a user cannot unsubscribe, check the Mandatory field in the Email Notification definition record. You may need to add it to the form. This disables the choice, as per the Work notes update notification in the screenshot. Subscribing to Email Notifications The Email Notifications table has a field labeled Subscribable. If this is checked, then users can choose to receive a message every time the Email Notification record's conditions are met. This offers a different way of working: someone can decide if they want more information, rather than the administrator deciding. Edit the Work notes update Email Notification. Switch to the Advanced view, and using Form Design, add the Subscribable field to the Who will receive section on the form. Now make the following changes. Once done, use Insert and Stay to make a copy.     Name: Work notes update (Subscribable)     Users/Groups in fields: <blank>     Subscribable: <ticked> Go to Notification Preferences and click on To subscribe to a new notification click here. The new notification can be selected from the list. Now, every time a Work note is added to any Maintenance record, a notification will be sent to the subscriber. It is important to clear Users/Groups in field if Subscribable is ticked. Otherwise, everyone in the Work notes list will then become subscribed and receive every single subsequent notification for every record! The user can also choose to only receive a subset of the messages. The Schedule field lets them choose when to receive notifications: perhaps only during working hours. The filter lets you define conditions, such as only receiving notifications for important issues. In this instance, a Notification Filter could be created for the Maintenance table, based upon the Priority field. Then, only Work notes for high-priority Maintenance tasks would be sent out. Creating a new device The Notification Devices [cmn_notif_device] table stores e-mail addresses for users. It allows every user to have multiple e-mail addresses, or even register mobile phones for text messages. When a User record is created, a Business Rule named Create primary email device inserts a record in the Notification Devices table. 
The value in the Email field on the User table is just copied to this table by another Business Rule named Update Email Devices. A new device can be added from the Notification Preferences page, or a Related List can be added to the User form. Navigate to User Administration > Users and create a new user. Once saved, you should receive a message saying Primary email device created for user (the username is displayed in place of user). Then add the Notification Device > User Related List to the form where the e-mail address record should be visible. Click on New. The Notification Device form allows you to enter the details of your e-mail- or SMS-capable device. Service provider is a reference field to the Notification Service Provider table, which specifies how an SMS message should be sent. If you have an account with one of the providers listed, enter your details. There are many hundreds of inactive providers in the Notification Service Provider [cmn_notif_service_provider] table. You may want to try enabling some, though many do not work for the reasons discussed soon. Once a device has been added, they can be set up to receive messages through Notification Preferences. For example, a user can choose to receive approval requests via a text message by adding the Approval Request Notification Message and associating their SMS device. Alternatively, they could have two e-mail addresses, with one for an assistant. If a Notification is sent to a SMS device, the contents of the SMS alternate field are used. Remember that a text message can only be 160 characters at maximum. The Notification Device table has a field called Primary Email. This determines which device is used for a notification that has not been sent to this user before. Despite the name, Primary Email can be ticked for an SMS device. Sending text messages Many mobile phone networks in the US supply e-mail-to-SMS gateways. AT&T gives every subscriber an e-mail address in the form of 5551234567@txt.att.net. This allows the ServiceNow instance to actually send an e-mail and have the gateway convert it into an SMS. The Notification Service Provider form gives several options to construct the appropriate e-mail address. In this scheme, the recipient pays for the text message, so the sending of text messages is free. Many European providers do not provide such functionality, since the sender is responsible for paying. Therefore, it is more common to use the Web to deliver the message to the gateway: perhaps using REST or SOAP. This gives an authenticated method of communication, which allows charging. The Notifications Service Provider table also provides an Advanced notification checkbox that enables a script field. The code is run whenever the instance needs to send out an e-mail. This is a great place to call a Script Include that does the actual work, providing it with the appropriate parameters. Some global variables are present: email.SMSText contains the SMS alternate text and device is the GlideRecord of the Notification Device. This means device.phone_number and device.user are very useful values to access. Delivering an e-mail There are a great many steps that the instance goes through to send an e-mail. Some may be skipped or delivered as a shortcut, depending on the situation, but there are usually a great many steps that are processed. An e-mail may not be sent if any one of these steps goes wrong! A record is updated: Most notifications are triggered when a task changes state or a comment is added. 
Use debugging techniques to determine what is changing. These next two steps may not be used if the Notification does not use events. An event is fired: A Business Rule may fire an event. Look under System Policy > Events > Event Log to see if it was fired. The event is processed: A Scheduled Job will process each event in turn. Look in the Event Log and ensure that all events have their state changed to Processed. An Email Notification is processed: The event is associated with an Email Notification or the Email Notification uses the Inserted and Updated checkboxes to monitor a table directly. Conditions are evaluated: The platform checks the associated record and ensures the conditions are met. If not, no further processing occurs. The receivers are evaluated: The recipients are determined from the logic in the Email Notification. The use of Send to event creator makes a big impact on this step. The Notification Device is determined: The Notification Messages table is queried. The appropriate Notification Device is then found. If the Notification Device is set to inactive, the recipient is dropped. The Notification field on the User record will control the Active flag of the Notification Devices. Any Notification Device filters are applied: Any further conditions set in the Notification Preferences interface are evaluated, such as Schedule and Filter. An e-mail record is generated: Variable substitution takes place on the Message Text and a record is saved into the sys_email table, with details of the messages in the Outbox. The Email Client starts at this point. The weight is evaluated: If an Email Notification with a lower weight has already been generated for the same event, the e-mail has the Mailbox field set to Skipped. The email is sent: The SMTP Sender Scheduled Job runs every minute. It picks up all messages in the Outbox, generates the message ID, and connects to the SMTP server specified in Email properties. This only occurs if Mail sending is enabled in the properties. Errors will be visible under System Mailboxes > Outbound > Failed. The generated e-mails can be monitored in the System Mailboxes Application Menu, or through System Logs > Emails. They are categorized into Mailboxes, just like an e-mail client. This should be considered a backend table, though some customers who want more control over e-mail notifications make this more accessible. Knowing who the e-mail is from ServiceNow uses one account when sending e-mails. This account is usually the one provided by ServiceNow, but it can be anything that supports SMTP: Exchange, Sendmail, NetMail, or even Gmail. The SMTP protocol lets the sender specify who the mail is from. By default, no checks are done to ensure that the sender is allowed to send from that address. Every e-mail client lets you specify who the e-mail address is from, so I could change the settings in Outlook to say my e-mail address is president@whitehouse.gov or primeminister@number10.gov.uk. Spammers and virus writers have taken advantage of this situation to fill our mailboxes with unwanted e-mails. Therefore, e-mail systems are doing more authentication and checking of addresses when the message is received. You may have seen some e-mails from your client saying an e-mail has been delivered on behalf of another when this validation fails, or it even falling into the spam directly. ServiceNow uses SPF to specify which IP addresses can deliver service-now.com e-mails. Spam filters often use this to check if a sender is authorized. 
If you spoof the e-mail address, you may need to make an exception for ServiceNow. Read up more about it at: http://en.wikipedia.org/wiki/Sender_Policy_Framework. You may want to change the e-mail addresses on the instance to be your corporate domain. That means that your ServiceNow instance will send the message but will pretend that it is coming from another source. This runs the real risk of the e-mails being marked as spam. Instead, think about only changing the From display (not the e-mail address) or use your own e-mail account. Receiving e-mails Many systems can send e-mails. But isn't it annoying when they are broadcast only? When I get sent a message, I want to be able to reply to it. E-mail should be a conversation, not a fire-and-forget distribution mechanism. So what happens when you reply to a ServiceNow e-mail? It gets categorized, and then processed according to the settings in Inbound Email Actions. Lots of information is available on the wiki: http://wiki.servicenow.com/?title=Inbound_Email_Actions. Determining what an inbound e-mail is Every two minutes, the platform runs the POP Reader scheduled job. It connects to the e-mail account specified in the properties and pulls them all into the Email table, setting the Mailbox to be Inbox. Despite the name, the POP Reader job also supports IMAP accounts. This fires an event called email.read, which in turn starts the classification of the e-mail. It uses a series of logic decisions to determine how it should respond. The concept is that an inbound e-mail can be a reply to something that the platform has already sent out, is an e-mail that someone forwarded, or is part of an e-mail chain that the platform has not seen before; that is, it is a new e-mail. Each of these are handled differently, with different assumptions. As the first step in processing the e-mail, the platform attempts to find the sender in the User table. It takes the address that the e-mail was sent from as the key to search for. If it cannot find a User, it either creates a new User record (if the property is set), or uses the Guest account. Should this e-mail be processed at all? If either of the following conditions match, then the e-mail has the Mailbox set to skipped and no further processing takes place:     Does the subject line start with recognized text such as "out of office autoreply"?     Is the User account locked out? Is this a forward? Both of the following conditions must match, else the e-mail will be checked as a reply:     Does the subject line start with a recognized prefix (such as FW)?     Does the string "From" appear anywhere in the body? Is this a reply? One of the following conditions must match, else the e-mail will be processed as new:     Is there a valid, appropriate watermark that matches an existing record?     Is there an In-Reply-To header in the e-mail that references an e-mail sent by the instance?     Does the subject line start with a recognized prefix (such as RE) and contain a number prefix (such as MAI000100)? If none of these are affirmative, the e-mail is treated as a new e-mail. The prefixes and recognized text are controlled with properties available under System Properties > Email. This order of processing and logic cannot be changed. It is hardcoded into the platform. However, clever manipulation of the properties and prefixes allows great control over what will happen. One common request is to treat forwarded e-mails just like replies. 
To accomplish this, a nonsensical string should be added to the forward_subject_prefix property, and the standard values added to the reply_subject_prefix property. For example, the following values could be used:

Forward prefix: xxxxxxxxxxx
Reply prefix: re:, aw:, r:, fw:, fwd:…

This will ensure that a match with the forwarding prefixes is very unlikely, while the reply logic checks will still be met.

Creating Inbound Email Actions

Once an e-mail has been categorized, it will run through the appropriate Inbound Email Action. The main purpose of an Inbound Email Action is to run JavaScript code that manipulates a target record in some way. The target record depends upon what the e-mail has been classified as:

A forwarded or new e-mail will create a new record
A reply will update an existing record

Every Inbound Email Action is associated with a table and a condition, just like Business Rules. Since a reply must be associated with an existing record (usually found using the watermark), the platform will only look for Inbound Email Actions that are against the same table. The platform initializes the GlideRecord object current as the existing record. An e-mail classified as a reply must have an associated record, found via the watermark, the In-Reply-To header, or by searching for a number prefix stored in the sys_number table, or else it will not proceed.

Forwarded and new e-mails will create new records. They will use the first Inbound Email Action that meets the condition, regardless of the table. The platform then initializes a new GlideRecord object called current, expecting it to be inserted into that table.

Accessing the e-mail information

In order to make the scripting easier, the platform parses the e-mail and populates the properties of an object called email. Some of the more helpful properties are listed here:

email.to is a comma-separated list of the e-mail addresses that the e-mail was sent to and CC'ed to.
email.body_text contains the full text of the e-mail, but does not include the previous entries in the e-mail message chain. This behavior is controlled by a property; for example, anything that appears underneath two empty lines plus -----Original Message----- is ignored.
email.subject is the subject line of the e-mail.
email.from contains the e-mail address of the User record that the platform thinks sent the e-mail.
email.origemail uses the e-mail headers to get the e-mail address of the original sender.
email.body contains the body of the e-mail, separated into name:value pairs. For instance, if a line of the body was hello:world, it would be equivalent to email.body.hello = 'world'.

Approving e-mails using Inbound Email Actions

The previous section looked at how the platform can generate mailto links, ready for a user to select. They generate an e-mail that has the word approve or reject in the subject line and the watermark in the body. This is a great example of how e-mail can be used to automate steps in ServiceNow. Approving via e-mail is often much quicker than logging in to the instance, especially if you are working remotely and are on the road. It means approvals happen faster, which in turn provides better service to the requesters and reduces the effort for our approvers. Win win!

The Update Approval Request Inbound Email Action uses the information in the inbound e-mail to update the Approval record appropriately. Navigate to System Policy > Email > Inbound Actions to see what it does.
We'll inspect a few lines of the code to get a feel for what is possible when automating actions with incoming e-mails. Understanding the code in Update Approval Request One of the first steps within the function, validUser, performs a check to ensure the sender is allowed to update this Approval. They must either be a delegate or the user themselves. Some companies prefer to use an e-Signature method to perform approval, where a password must be entered. This check is not up to that level, but does go some way to helping. E-mail addresses (and From strings) can be spoofed in an e-mail client. Assuming the validation is passed, the Comments field of the Approval record is updated with the body of the e-mail. current.comments = "reply from: " + email.from + "nn" + email.body_text; In order to set the State field, and thus make the decision on the Approval request, the script simply runs a search for the existence of approve or reject within the subject line of the e-mail using the standard indexOf string function. If it is found, the state is set. if (email.subject.indexOf("approve") >= 0) current.state = "approved"; if (email.subject.indexOf("reject") >= 0) current.state = "rejected"; Once the fields have been updated, it saves the record. This triggers the standard Business Rules and will run the Workflow as though this was done in the web interface. Updating the Work notes of a Maintenance task Most often, a reply to an e-mail is to add Additional comments or Work notes to a task. Using scripting, you could differentiate between the two scenarios by seeing who has sent the e-mail: a requester would provide Additional comments and a fulfiller may give either, but it is safer to assume Work notes. Let's make a simple Inbound Email Action to process e-mails and populate the Work notes field. Navigate to System Policy > Email > Inbound Actions and click on New. Use these details: Name: Work notes for Maintenance task Target table: Maintenance [u_maintenance] Active: <ticked> Type: Reply Script: current.work_notes = "Reply from: " + email.origemail + "nn" + email.body_text; current.update(); This script is very simple: it just updates our task record after setting the Work notes field with the e-mail address of the sender and the text they sent. It is separated out with a few new lines. The platform impersonates the sender, so the Activity Log will show the update as though it was done in the web interface. Once the record has been saved, the Business Rules run as normal. This includes ServiceNow sending out e-mails. Anyone who is in the Work notes list will receive the e-mail. If Send to event creator is ticked, it means the person who sent the e-mail may receive another in return, telling them they updated the task! Having multiple incoming e-mail addresses Many customers want to have logic based upon inbound e-mail addresses. For example, sending a new e-mail to invoices@gardiner-hotels.com would create a task for the Finance team, while wifi@gardiner-hotels.com creates a ticket for the Networking group. These are easy to remember and work with, and implementing ServiceNow should not mean that this simplicity should be removed. ServiceNow provides a single e-mail account that is in the format instance@service-now.com and is not able to provide multiple or custom e-mail addresses. 
There are two broad options for meeting this requirement: Checking multiple accounts Redirecting e-mails Using the Email Accounts plugin While ServiceNow only provides a single e-mail address, it has the ability to pull in e-mails from multiple e-mail accounts through the Email Accounts plugin. The wiki has more information here: http://wiki.servicenow.com/?title=Email_Accounts. Once the plugin has been activated, it converts the standard account information into a new Email Account [sys_email_account] record. There can be multiple Email Accounts for a particular instance, and the POP Reader job is repurposed to check each one. Once the e-mails have been brought into ServiceNow, they are treated as normal. Since ServiceNow does not provide multiple e-mail accounts, it is the customer's responsibility to create, maintain, and configure the instance with the details, including the username and passwords. The instance will need to connect to the e-mail account, which is often hosted within the customer's datacenter. This means that firewall rules or other security methods may need to be considered. Redirecting e-mails Instead of having the instance check multiple e-mail accounts, it is often preferable to continue to work with a single e-mail address. The additional e-mail addresses can be redirected to the one that ServiceNow provides. The majority of e-mail platforms, such as Microsoft Exchange, make it possible to redirect e-mail accounts. When an e-mail is received by the e-mail system, it is resent to the ServiceNow account. This process differs from e-mail forwarding: Forwarding involves adding the FW: prefix to the subject line, altering the message body, and changing the From address. Redirection sends the message unaltered, with the original To address, to the new address. There is little indication that the message has not come directly from the original sender. Redirection is often an easier method to work with than having multiple e-mail accounts. It gives more flexibility to the customer's IT team, since they do not need to provide account details to the instance, and enables them to change the redirection details easily. If a new e-mail address has to be added or an existing one decommissioned, only the e-mail platform needs to be involved. It also reduces the configuration on the ServiceNow instance; nothing needs to change. Processing multiple e-mail address Once the e-mails have been brought into ServiceNow, the platform will need to examine who the e-mail was sent to and make some decisions. This will allow the e-mails sent to wifi@gardiner-hotels.com to be routed as tasks to the networking team. There are several methods available for achieving this: A look-up table can be created, containing a list of e-mail addresses and a matching Group reference. The Inbound Email Script would use a GlideRecord query to find the right entry and populate the Assignment group on the new task. The e-mail address could be copied over into a new field on the task. Standard routing techniques, such as Assignment Rules and Data Lookup, could be used to examine the new field and populate the Assignment group. The Inbound Email Action could contain the addresses hardcoded in the script. While this is not a scalable or maintainable solution, it may be appropriate for a simple deployment. Recording Metrics ServiceNow provides several ways to monitor the progress of a task. These are often reported and e-mailed to the stakeholders, thus providing insight into the effectiveness of processes. 
Metrics are a way to record information. It allows the analysis and improvement of a process by measuring statistics, based upon particular defined criteria. Most often, these are time based. One of the most common metrics is how long it takes to complete a task: from when the record was created to the moment the Active flag became false. The duration can then be averaged out and compared over time, helping to answer questions such as Are we getting quicker at completing tasks? Metrics provide a great alternative to creating lots of extra fields and Business Rules on a table. Other metrics are more complex and may involve getting more than one result per task. How long does each Assignment group take to deal with the ticket? How long does an SLA get paused for? How many times does the incident get reassigned? The difference between Metrics and SLAs At first glance, a Metric appears to be very similar to an SLA, since they both record time. However, there are some key differences between Metrics and SLAs: There is no target or aim defined in a Metric. It cannot be breached; the duration is simply recorded. A Metric cannot be paused or made to work to a schedule. There is no Workflow associated with a Metric. In general, a Metric is a more straightforward measurement, designed for collecting statistics rather than being in the forefront when processing a task. Running Metrics Every time the Task table gets updated, the metrics events Business Rule fires an event called metric.update. A Script Action named Metric Update is associated with the event and calls the appropriate Metric Definitions. If you define a metric on a non-task-based table, make sure you fire the metric.update event through a Business Rule. The Metric Definition [metric_definition] table specifies how a metric should be recorded, while the Metric Instance [metric_instance] table records the results. As ever, each Metric Definition is applied to a specific table. The Type field of a Metric Definition refers to two situations: Field value duration is associated with a field on the table. Each time the field changes value, the platform creates a new Metric Instance. The duration for which that value was present is recorded. No code is required, but if some is given, it is used as a condition. Script calculation uses JavaScript to determine what the Metric Instance contains. Scripting a Metric Definition There are several predefined variables available to a Metric Definition: current refers to the GlideRecord under examination and definition is a GlideRecord of the Metric Definition. The MetricInstance Script Include provides some helpful functions, including startDuration and endDuration, but it is really only relevant for time-based metrics. Metrics can be used to calculate many statistics (like the number of times a task is reopened), but code must be written to accomplish this. Monitoring the duration of Maintenance tasks Navigate to Metrics > Definitions and click on New. Set the following fields: Name: Maintenance states Table: Maintenance [u_maintenance] Field: State Timeline: <ticked> Once saved, test it out by changing the State field on a Maintenance record to several different values. Make sure to wait 30 seconds or so between each State change, so that the Scheduled Job has time to fire. Right-click on the Form header and choose Metrics Timeline to visualize the changes in the State field. Adding the Metrics Related List to the Maintenance form will display all the captured data. 
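The Script calculation type mentioned earlier needs a little more code. The following is a minimal sketch, assuming the baseline MetricInstance Script Include's metricExists and getNewRecord functions behave as in the standard examples; it records which group a Maintenance task was assigned to at the point it was closed:

var mi = new MetricInstance(definition, current);

// Only record once, at the point the task is closed
if (!current.active && !mi.metricExists()) {
    var gr = mi.getNewRecord();
    gr.field_value = current.getDisplayValue("assignment_group");
    gr.calculation_complete = true;
    gr.insert();
}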
Another Related List is available on the Maintenance Definition form. Summary This article showed how to deal with all the data collected in ServiceNow. The key to this is the automated processing of information. We started with exploring events. When things happen in ServiceNow, the platform can notice and set a flag for processing later. This keeps the system responsive for the user, while ensuring all the work that needs to get done, does get done. Scheduled Jobs is the background for a variety of functions: scheduled reports, scripts, or even task generation. They run on a periodic basis, such as every day or every hour. They are often used for the automatic closure of tasks if the requester hasn't responded recently. Email Notifications are a critical part of any business application. We explored how e-mails are used to let requesters know when they've got work to do, to give requesters a useful update, or when an approver must make a decision. We even saw how approvers can make that decision using only e-mail. Every user has a great deal of control over how they receive these notifications. The Notification Preferences interface lets them add multiple devices, including mobile phones to receive text messages. The Email Client in ServiceNow gives a simple, straightforward interface to send out e-mails, but the Additional comments and Work notes fields are often better and quicker to use. Every e-mail can include the contents of fields and even the output of scripts. Every two minutes, ServiceNow checks for e-mails sent to its account. If it finds any, the e-mail is categorized into being a reply, forward, or new and runs Inbound Email Actions to update or create new records.
Regex in Practice

Packt
04 Jun 2015
24 min read
Knowing Regex's syntax allows you to model text patterns, but sometimes coming up with a good reliable pattern can be more difficult, so taking a look at some actual use cases can really help you learn some common design patterns. So, in this article by Loiane Groner and Gabriel Manricks, coauthors of the book JavaScript Regular Expressions, we will develop a form, and we will explore the following topics: Validating a name Validating e-mails Validating a Twitter username Validating passwords Validating URLs Manipulating text (For more resources related to this topic, see here.) Regular expressions and form validation By far, one of the most common uses for regular expressions on the frontend is for use with user submitted forms, so this is what we will be building. The form we will be building will have all the common fields, such as name, e-mail, website, and so on, but we will also experiment with some text processing besides all the validations. In real-world applications, you usually are not going to implement the parsing and validation code manually. You can create a regular expression and rely on some JavaScript libraries, such as: jQuery validation: Refer to http://jqueryvalidation.org/ Parsely.js: Refer to http://parsleyjs.org/ Even the most popular frameworks support the usage of regular expressions with its native validation engine, such as AngularJS (refer to http://www.ng-newsletter.com/posts/validations.html). Setting up the form This demo will be for a site that allows users to create an online bio, and as such, consists of different types of fields. However, before we get into this (since we won't be building a backend to handle the form), we are going to setup some HTML and JavaScript code to catch the form submission and extract/validate the data entered in it. To keep the code neat, we will create an array with all the validation functions, and a data object where all the final data will be kept. Here is a basic outline of the HTML code for which we begin by adding fields: <!DOCTYPE HTML> <html>    <head>        <title>Personal Bio Demo</title>    </head>    <body>        <form id="main_form">            <input type="submit" value="Process" />        </form>          <script>            // js goes here        </script>    </body> </html> Next, we need to write some JavaScript to catch the form and run through the list of functions that we will be writing. If a function returns false, it means that the verification did not pass and we will stop processing the form. In the event where we get through the entire list of functions and no problems arise, we will log out of the console and data object, which contain all the fields we extracted: <script>    var fns = [];    var data = {};      var form = document.getElementById("main_form");      form.onsubmit = function(e) {      e.preventDefault();          data = {};          for (var i = 0; i < fns.length; i++) {            if (fns[i]() == false) {                return;            }        }          console.log("Verified Data: ", data);    } </script> The JavaScript starts by creating the two variables I mentioned previously, we then pull the form's object from the DOM and set the submit handler. The submit handler begins by preventing a page from actually submitting, (as we don't have any backend code in this example) and then we go through the list of functions running them one by one. 
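Before writing real patterns, it may help to see the shape that every validator will take. This hypothetical check (the field and key names are placeholders) rejects anything that is empty or only whitespace, and uses a regular expression to trim the value before storing it:

function process_example() {
    var field = document.getElementById("example_field"); // placeholder field
    var value = field.value;

    // Reject an empty or whitespace-only value
    if (/^\s*$/.test(value)) {
        alert("Field cannot be empty");
        return false; // stops the form handler
    }

    // Trim leading/trailing whitespace with a Regex and store the result
    data.example = value.replace(/^\s+|\s+$/g, "");
    return true;
}

fns.push(process_example);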
Validating fields

In this section, we will explore how to validate different types of fields manually, such as name, e-mail, website URL, and so on.

Matching a complete name

To get our feet wet, let's begin with a simple name field. It's something we have gone through briefly in the past, so it should give you an idea of how our system will work. The following code goes inside the script tags, but only after everything we have written so far:

function process_name() {
    var field = document.getElementById("name_field");
    var name = field.value;

    var name_pattern = /^(\S+) (\S*) ?\b(\S+)$/;

    if (name_pattern.test(name) === false) {
        alert("Name field is invalid");
        return false;
    }

    var res = name_pattern.exec(name);
    data.first_name = res[1];
    data.last_name = res[3];

    if (res[2].length > 0) {
        data.middle_name = res[2];
    }

    return true;
}

fns.push(process_name);

We get the name field in a similar way to how we got the form, then we extract the value and test it against a pattern to match a full name. If the name doesn't match the pattern, we simply alert the user and return false to let the form handler know that the validations have failed. If the name field is in the correct format, we set the corresponding fields on the data object (remember, the middle name is optional here). The last line just adds this function to the array of functions, so it will be called when the form is submitted.

The last thing required to get this working is to add the HTML for this form field, so inside the form tags (right before the submit button), you can add this text input:

Name: <input type="text" id="name_field" /><br />

Opening this page in your browser, you should be able to test it out by entering different values into the Name box. If you enter a valid name, you should get the data object printed out with the correct parameters; otherwise, you will see an alert telling you that the name field is invalid.

Understanding the complete name Regex

Let's go back to the regular expression used to match the name entered by a user:

/^(\S+) (\S*) ?\b(\S+)$/

The following is a brief explanation of the Regex:

The ^ character asserts its position at the beginning of the string
The first capturing group (\S+):
    \S+ matches a non-whitespace character [^\r\n\t\f ]
    The + quantifier matches between one and unlimited times
The second capturing group (\S*):
    \S* matches any non-whitespace character [^\r\n\t\f ]
    The * quantifier matches between zero and unlimited times
" ?" matches the whitespace character; the ? quantifier matches between zero and one time
\b asserts its position at a (^\w|\w$|\W\w|\w\W) word boundary
The third capturing group (\S+):
    \S+ matches a non-whitespace character [^\r\n\t\f ]
    The + quantifier matches between one and unlimited times
$ asserts its position at the end of the string

Matching an e-mail with Regex

The next type of field we may want to add is an e-mail field. E-mails may look pretty simple at first glance, but there is a large variety of e-mails out there. You may just think of creating a word@word.word pattern, but the first section can contain many additional characters besides just letters, the domain can be a subdomain, or the suffix could have multiple parts (such as .co.uk for the UK).

Our pattern will simply look for a group of characters that are not spaces or @ symbols in the first section. We will then want an @ symbol, followed by another set of characters that have at least one period, followed by the suffix, which in itself could contain another suffix.
So, this can be accomplished in the following manner:

/[^\s@]+@[^\s@.]+\.[^\s@]+/

The pattern in our example is very simple and will not match every valid e-mail address. There is an official standard for e-mail address regular expressions called RFC 5322. For more information, please read http://www.regular-expressions.info/email.html.

So, let's add the field to our page:

Email: <input type="text" id="email_field" /><br />

We can then add this function to verify it:

function process_email() {
    var field = document.getElementById("email_field");
    var email = field.value;

    var email_pattern = /^[^\s@]+@[^\s@.]+\.[^\s@]+$/;

    if (email_pattern.test(email) === false) {
        alert("Email is invalid");
        return false;
    }

    data.email = email;
    return true;
}

fns.push(process_email);

There is an HTML5 field type specifically designed for e-mails, but here we are verifying manually, as this is a Regex book. For more information, please refer to http://www.w3.org/TR/html-markup/input.email.html.

Understanding the e-mail Regex

Let's go back to the regular expression used to match the e-mail entered by the user:

/^[^\s@]+@[^\s@.]+\.[^\s@]+$/

The following is a brief explanation of the Regex:

^ asserts its position at the beginning of the string
[^\s@]+ matches a single character that is not present in the following list; the + quantifier matches between one and unlimited times:
    \s matches any whitespace character [\r\n\t\f ]
    @ matches the @ character literally
@ matches the @ character literally
[^\s@.]+ matches a single character that is not present in the following list; the + quantifier matches between one and unlimited times:
    \s matches any whitespace character [\r\n\t\f ]
    @ and . are the @ and . characters, literally
\. matches the . character literally
[^\s@]+ matches a single character that is not present in the following list; the + quantifier matches between one and unlimited times:
    \s matches any whitespace character [\r\n\t\f ]
    @ matches the @ character literally
$ asserts its position at the end of the string

Matching a Twitter name

The next field we are going to add is a field for a Twitter username. For the unfamiliar, a Twitter username is in the @username format, but when people enter it, they sometimes include the preceding @ symbol and on other occasions they write the username by itself. Obviously, internally we would like everything to be stored uniformly, so we will need to extract the username, regardless of the @ symbol, and then manually prepend it with one, so regardless of whether it was there or not, the end result will look the same.

So again, let's add a field for this:

Twitter: <input type="text" id="twitter_field" /><br />

Now, let's write the function to handle it:

function process_twitter() {
    var field = document.getElementById("twitter_field");
    var username = field.value;

    var twitter_pattern = /^@?(\w+)$/;

    if (twitter_pattern.test(username) === false) {
        alert("Twitter username is invalid");
        return false;
    }

    var res = twitter_pattern.exec(username);
    data.twitter = "@" + res[1];
    return true;
}

fns.push(process_twitter);

If a user inputs the @ symbol, it will be ignored, as we add it manually after checking the username.

Understanding the Twitter username Regex

Let's go back to the regular expression used to match the username entered by the user:

/^@?(\w+)$/

This is a brief explanation of the Regex:

^ asserts its position at the start of the string
@? matches the @ character literally; the ? quantifier matches between zero and one time
The first capturing group (\w+):
    \w+ matches a [a-zA-Z0-9_] word character
    The + quantifier matches between one and unlimited times
$ asserts its position at the end of the string

Matching passwords

Another popular field, which can have some unique constraints, is a password field. Now, not every password field is interesting; you may allow just about anything as a password, as long as the field isn't left blank. However, there are sites where you need to have at least one letter from each case, a number, and at least one other character. Considering all the ways these can be combined, creating a single pattern that validates this could be quite complex. A much better solution, and one that allows us to be a bit more verbose with our error messages, is to create four separate patterns and make sure the password matches each of them.

For the input, it's almost identical:

Password: <input type="password" id="password_field" /><br />

The process_password function is not very different from the previous examples, as we can see in its code:

function process_password() {
    var field = document.getElementById("password_field");
    var password = field.value;

    var contains_lowercase = /[a-z]/;
    var contains_uppercase = /[A-Z]/;
    var contains_number = /[0-9]/;
    var contains_other = /[^a-zA-Z0-9]/;

    if (contains_lowercase.test(password) === false) {
        alert("Password must include a lowercase letter");
        return false;
    }

    if (contains_uppercase.test(password) === false) {
        alert("Password must include an uppercase letter");
        return false;
    }

    if (contains_number.test(password) === false) {
        alert("Password must include a number");
        return false;
    }

    if (contains_other.test(password) === false) {
        alert("Password must include a non-alphanumeric character");
        return false;
    }

    data.password = password;
    return true;
}

fns.push(process_password);

All in all, you may say that this is a pretty basic validation and something we have already covered, but I think it's a great example of working smart as opposed to working hard. Sure, we probably could have created one long pattern that would check everything together, but it would be less clear and less flexible. So, by breaking it into smaller and more manageable validations, we were able to make clear patterns, and at the same time, improve their usability with more helpful alert messages.

Matching URLs

Next, let's create a field for the user's website; the HTML for this field is:

Website: <input type="text" id="website_field" /><br />

A URL can have many different protocols, but for this example, let's restrict it to only http or https links. Next, we have the domain name with an optional subdomain, and we need to end it with a suffix. The suffix itself can be a single word, such as .com, or it can have multiple segments, such as .co.uk. All in all, our pattern looks similar to this:

/^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i

Here, we are using multiple non-capture groups, both for when sections are optional and for when we want to repeat a segment. You may have also noticed that we are using the case-insensitive flag (/i) at the end of the regular expression, as links can be written in lowercase or uppercase.
Now, we'll implement the actual function:

function process_website() {
    var field = document.getElementById("website_field");
    var website = field.value;

    var pattern = /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i;

    if (pattern.test(website) === false) {
        alert("Website is invalid");
        return false;
    }

    data.website = website;
    return true;
}

fns.push(process_website);

At this point, you should be pretty familiar with the process of adding fields to our form and adding a function to validate them. So, for our remaining examples, let's shift our focus a bit from validating inputs to manipulating data.

Understanding the URL Regex

Let's go back to the regular expression used to match the website entered by the user:

/^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i

This is a brief explanation of the Regex:

^ asserts position at the start of the string
(?:https?:\/\/)? is a non-capturing group; the ? quantifier matches it between zero and one time: http matches the characters http literally (case insensitive), s? matches the s character literally (case insensitive) between zero and one time, : matches the : character literally, and \/\/ matches the two / characters literally
\w+ matches a word character [a-zA-Z0-9_], with the + quantifier matching between one and unlimited times
(?:\.\w+)? is a non-capturing group; the ? quantifier matches it between zero and one time: \. matches the . character literally, and \w+ matches a word character [a-zA-Z0-9_] between one and unlimited times
(?:\.[A-Z]{2,3})+ is a non-capturing group; the + quantifier matches it between one and unlimited times: \. matches the . character literally, and [A-Z]{2,3} matches a single character in the range between A and Z (case insensitive), with the {2,3} quantifier matching between 2 and 3 times
$ asserts position at the end of the string
i modifier: case insensitive, meaning letters will match both a-z and A-Z

Manipulating data

We are going to add one more input to our form, which will be for the user's description. In the description, we will parse for things such as e-mails, and then create both a plain text and an HTML version of the user's description.

The HTML for this form is pretty straightforward; we will be using a standard textbox and give it an appropriate field:

Description: <br />
<textarea id="description_field"></textarea><br />

Next, let's start with the bare scaffold needed to begin processing the form data:

function process_description() {
    var field = document.getElementById("description_field");
    var description = field.value;

    data.text_description = description;

    // More Processing Here

    data.html_description = "<p>" + description + "</p>";

    return true;
}

fns.push(process_description);

This code gets the text from the textbox on the page and then saves both a plain text version and an HTML version of it. At this stage, the HTML version is simply the plain text version wrapped between a pair of paragraph tags, but this is what we will be working on now.

The first thing I want to do is split the text into paragraphs; in a textarea, the user may separate text with both single line breaks and full paragraph breaks. For our example, let's say that if the user entered a single new line character, we will add a <br /> tag, and if there is more than one new line character, we will create a new paragraph using the <p> tag.
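To make the goal concrete before writing the code, here is a small sketch of the transformation we are aiming for; the sample text is made up purely for illustration:

// Made-up sample input: one single new line, then a blank line (two \n characters)
var sample = "first line\nsecond line\n\na new paragraph";

// The HTML version we want to end up with:
// "<p>first line<br />second line</p><p>a new paragraph</p>"

The next section implements exactly this replacement using String.replace.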
Using the String.replace method

We are going to use JavaScript's replace method on the string object. This method can accept a Regex pattern as its first parameter and a function as its second; each time it finds the pattern, it will call the function, and anything returned by the function will be inserted in place of the matched text.

So, for our example, we will be looking for new line characters, and in the function, we will decide whether we want to replace the new line with a line break tag or an actual new paragraph, based on how many new line characters it was able to pick up:

var line_pattern = /\n+/g;
description = description.replace(line_pattern, function(match) {
    if (match == "\n") {
        return "<br />";
    } else {
        return "</p><p>";
    }
});

The first thing you may notice is that we need to use the g flag in the pattern, so that it will look for all possible matches as opposed to only the first. Besides this, the rest is pretty straightforward. Consider a form where the user has entered a few lines and paragraphs of text; if you take a look at the console output of the preceding code, you will see single new lines replaced with <br /> tags and multiple new lines replaced with paragraph tags.

Matching a description field

The next thing we need to do is try and extract e-mails from the text and automatically wrap them in a link tag. We have already covered a Regex pattern to capture e-mails, but we will need to modify it slightly, as our previous pattern expects that an e-mail is the only thing present in the text. In this situation, we are interested in all the e-mails included in a large body of text.

If you were simply looking for a word, you would be able to use the \b matcher, which matches any word boundary (that can be the end of a word or the end of a sentence), so instead of the dollar sign, which we used before to denote the end of a string, we would place the boundary character to denote the end of a word. However, in our case it isn't quite good enough, as there are boundary characters that are valid e-mail characters; for example, the period character is valid. To get around this, we can use the boundary character in conjunction with a lookahead group and say we want it to end with a word boundary, but only if it is followed by a space or the end of a sentence/string. This will ensure we aren't cutting off a subdomain or a part of a domain if there is some invalid information midway through the address.

Now, we aren't creating something that will try and parse e-mails no matter how they are entered; the point of creating validators and patterns is to force the user to enter something logical. That said, we assume that if the user wrote an e-mail address and then a period, he/she didn't enter an invalid address; rather, he/she entered an address and then ended a sentence (the period is not part of the address). In our code, we assume that to end an address, the user is either going to have a space after it, some kind of punctuation, or the end of the string/line. We no longer have to deal with lines because we converted them to HTML, but we do have to worry that our pattern doesn't pick up an HTML tag in the process.

At the end of this, our pattern will look similar to this:

/\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g

We start off with a word boundary, then we look for the pattern we had before. I added both the greater-than (>) and the less-than (<) characters to the group of disallowed characters, so that it will not pick up any HTML tags.
At the end of the pattern, you can see that we want to end on a word boundary, but only if it is followed by a space, an HTML tag, or the end of the string. The complete function, which does all the matching, is as follows:

function process_description() {
    var field = document.getElementById("description_field");
    var description = field.value;

    data.text_description = description;

    var line_pattern = /\n+/g;
    description = description.replace(line_pattern, function(match) {
        if (match == "\n") {
            return "<br />";
        } else {
            return "</p><p>";
        }
    });

    var email_pattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g;
    description = description.replace(email_pattern, function(match){
        return "<a href='mailto:" + match + "'>" + match + "</a>";
    });

    data.html_description = "<p>" + description + "</p>";

    return true;
}

We can continue to add fields, but I think the point has been understood. You have a pattern that matches what you want, and with the extracted data, you are able to extract and manipulate the data into any format you may need.

Understanding the description Regex

Let's go back to the regular expression used to match e-mails inside the description entered by the user:

/\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g

This is a brief explanation of the Regex:

\b asserts position at a word boundary (^\w|\w$|\W\w|\w\W)
[^\s<>@]+ matches a single character not in the following list, with the + quantifier matching between one and unlimited times: \s matches any whitespace character [\r\n\t\f ], and <, >, and @ are matched literally
@ matches the @ character literally
[^\s<>@.]+ matches a single character not in the following list, with the + quantifier matching between one and unlimited times: \s matches any whitespace character [\r\n\t\f ], and <, >, @, and . are matched literally
\. matches the . character literally
[^\s<>@]+ matches a single character not in the following list, with the + quantifier matching between one and unlimited times: \s matches any whitespace character [\r\n\t\f ], and <, >, and @ are matched literally
\b asserts position at a word boundary (^\w|\w$|\W\w|\w\W)
(?=.?(?:\s|<|$)) is a positive lookahead, asserting that the following can be matched: .? matches any character (except a new line) between zero and one time; (?:\s|<|$) is a non-capturing group with three alternatives: \s matches any whitespace character [\r\n\t\f ], < matches the character < literally, and $ asserts position at the end of the string
The g modifier: global match. It returns all matches of the regular expression, not only the first one.

Explaining a Markdown example

More examples of regular expressions can be seen with the popular Markdown syntax (refer to http://en.wikipedia.org/wiki/Markdown). This is a situation where a user is forced to write things in a custom format, although it's still a format which saves typing and is easier to understand. For example, to create a link in Markdown, you would type something similar to this:

[Click Me](http://gabrielmanricks.com)

This would then be converted to:

<a href="http://gabrielmanricks.com">Click Me</a>

Disregarding any validation on the URL itself, this can easily be achieved using this pattern:

/\[([^\]]*)\]\(([^(]*)\)/g

It looks a little complex, because the square brackets and parentheses are special characters that need to be escaped.
Basically, what we are saying is that we want an open square bracket, anything up to the closing square bracket, then an open parenthesis, and again, anything up to the closing parenthesis. A good website to write Markdown documents is http://dillinger.io/.

Since we wrapped each section into its own capture group, we can write this function:

text.replace(/\[([^\]]*)\]\(([^(]*)\)/g, function(match, text, link){
    return "<a href='" + link + "'>" + text + "</a>";
});

We haven't been using capture groups in our manipulation examples, but if you use them, then the first parameter to the callback is the entire match (similar to the ones we have been working with) and then all the individual groups are passed as subsequent parameters, in the order that they appear in the pattern.

Summary

In this article, we covered a couple of examples that showed us how to both validate user inputs as well as manipulate them. We also took a look at some common design patterns and saw how it's sometimes better to simplify the problem instead of using brute force in one pattern for the purpose of creating validations.

Resources for Article:

Further resources on this subject:
Getting Started with JSON [article]
Function passing [article]
YUI Test [article]
Introduction to Microsoft Azure Cloud Services

Packt
04 Jun 2015
10 min read
In this article by Gethyn Ellis, author of the book Microsoft Azure IaaS Essentials, we will understand cloud computing and the various services offered by it. (For more resources related to this topic, see here.)

Understanding cloud computing

What do we mean when we talk about the cloud from an information technology perspective? People mention cloud services; where do we get the services from? What services are offered? The Wikipedia definition of cloud computing is as follows:

"Cloud computing is a computing term or metaphor that evolved in the late 1990s, based on utility and consumption of computer resources. Cloud computing involves application systems which are executed within the cloud and operated through internet enabled devices. Purely cloud computing does not rely on the use of cloud storage as it will be removed upon users download action. Clouds can be classified as public, private and hybrid."

If you have worked with virtualization, then the concept of the cloud is not completely alien to you. With virtualization, you can group a bunch of powerful hardware together using a hypervisor. A hypervisor is a kind of software, operating system, or firmware that allows you to run virtual machines. Some of the popular hypervisors on the market are VMware ESX and Microsoft's Hyper-V. You can then use this powerful hardware to run a set of virtual servers, or guests. The guests share the resources of the host in order to execute and provide the services and computing resources of your IT department. The IT department takes care of everything, from maintaining the hypervisor hosts to managing and maintaining the virtual servers and guests. The internal IT department does all the work. This is sometimes termed a private cloud.

Third-party suppliers, such as Microsoft, VMware, and Amazon, have a public cloud offering. With a public cloud, computing services are provided to you over the Internet, and you pay for what you use, much like a utility bill for the utilities you use at home. This model can be really useful for start-up businesses that might not have an accurate demand forecast for their services, or whose demand may change very quickly. Cloud computing can also be very useful for established businesses that would like to make use of the elastic billing model: the more services you consume, the more you pay when you get billed at the end of the month.

There are various types of public cloud offerings and services from a number of different providers. The TechNet top ten cloud providers are as follows:

VMware
Microsoft
Bluelock
Citrix
Joyent
Terremark
Salesforce.com
CenturyLink
RackSpace
Amazon Web Services

It is interesting to read that in 2013, Microsoft was only listed ninth in the list. With a new CEO, Microsoft has taken a new direction and put its Azure cloud offering at the heart of the business model. To quote one TechNet 2014 attendee:

"TechNet this year was all about Azure, even the on premises stuff was built on the Azure model."

With a different direction, it seems pretty clear that Microsoft is investing heavily in its cloud offering, and this will be enhanced with further investment. This will allow a hybrid cloud environment, a combination of on-premises and public cloud, to offer organizations the ultimate flexibility when it comes to consuming IT resources.

Services offered

The term cloud is used to describe a variety of service offerings from multiple providers.
You could argue, in fact, that the term cloud doesn't actually mean anything specific in terms of the service that you're consuming. It is just a term that means you are consuming an IT service from a provider, be it an internal IT department in the form of a private cloud, a public offering from some cloud provider in the form of a public cloud, or some combination of both in the form of a hybrid cloud. So, what are the services that cloud providers offer?

Virtualization and on-premises technology

Most businesses, even in today's cloud-focused environment, have some on-premises technology. Until virtualization became popular and widely deployed several years ago, it was very common to have a one-to-one relationship between a physical hardware server, with its own physical resources such as CPU, RAM, and storage, and the operating system installed on that physical server. It became clear that in this type of environment, you would need a lot of physical servers in your data center. An expanding, and sometimes sprawling, environment brings its own set of problems. The servers need cooling and heat management as well as a power source, and all the hardware and software needs to be maintained. Also, in terms of utilization, this model left lots of resources under-utilized.

Virtualization changed this to some extent. With virtualization, you can create several guests, or virtual servers, that are configured to share the resources of the underlying host, each with their own operating system installed. It is possible to run both a Windows and a Linux guest on the same physical host using virtualization. This allows you to maximize resource utilization and allows your business to get a better return on investment on its hardware infrastructure.

Virtualization is very much a precursor to the cloud; many virtualized environments are sometimes called private clouds, so having an understanding of virtualization and how it works will give you a good grounding in some of the concepts of a cloud-based infrastructure.

Software as a service (SaaS)

SaaS is a subscription model where you pay to use the software for the time that you're using it. You don't own any of the infrastructure, and you don't have to manage any of the servers or operating systems; you simply consume the software that you are using. You can think of SaaS as taking a taxi ride. When you take a taxi ride, you don't own the car, you don't need to maintain the car, and you don't even drive the car. You simply tell the taxi driver or his company when and where you want to travel, and they will take care of getting you there. The longer the trip, that is, the longer you use the taxi, the more you pay.

An example of Microsoft's Software as a Service is the Azure SQL Database, a cloud-based SQL database. Microsoft offers customers a SQL database that is fully hosted and maintained in Microsoft data centers, and the customer simply has to make use of the service and the database.

We can compare this to having an on-premises database. To have an on-premises database, you need a Windows Server machine (physical or virtual) with the appropriate version of SQL Server installed.
The server would need enough CPU, RAM, and storage to fulfill the needs of your database, and you need to manage and maintain the environment, applying patches to the operating system as they become available, and installing and testing SQL Server service packs as they become available, all while your application makes use of the database platform.

With the SQL Azure database, you have none of this overhead; you simply need to connect to the Microsoft Azure portal and request a SQL database by following the wizard. Simply give the database a name (in this case, it's called Helpdesk) and select the service tier you want. In this example, I have chosen the Basic service tier. The service tier defines things such as the resources available to your database and imposes limits in terms of database size. With the Basic tier, you have a database size limit of 2 GB. You can specify the server that you want to create your database with, accept the defaults on the other settings, click on the check button, and the database gets created. It's really that simple. You will then pay for what you use in terms of database size and data access. In a later section, you will see how to set up a Microsoft Azure account.

Platform as a service (PaaS)

With PaaS, you rent the hardware, operating system, storage, and network from the public cloud service provider. PaaS is an offshoot of SaaS. Initially, SaaS didn't take off quickly, possibly because of the lack of control that IT departments and businesses thought they were going to suffer as a result of using the SaaS cloud offering. Going back to the transport analogy, you can compare PaaS to car rental. When you rent a car, you don't need to make the car, you don't need to own the car, and you have no responsibility to maintain the car. You do, however, need to drive the car if you are going to get to your required destination. In PaaS terms, the developer and the system administrator have slightly more control over how the environment is set up and configured, but much of the work is still taken care of by the cloud service provider. So, the hardware, operating system, and all the other components that run your application are managed by the cloud provider, but you get a little more control over how things are configured. A geographically dispersed website would be a good example of an application offered on PaaS.

Infrastructure as a service (IaaS)

With IaaS, you have much more control over the environment, and everything is customizable. Going with the transport analogy again, you can compare it to buying a car. The service provides you with the car upfront, and you are then responsible for using the car to ensure that it gets you from A to B. You are also responsible for fixing the car if something goes wrong, and for ensuring that the car is maintained by servicing it regularly, adding fuel, checking the tyre pressure, and so on. You have more control, but you also have more to do in terms of maintenance.

Microsoft Azure has an IaaS offering: you can deploy a virtual machine, specify what OS you want and how much RAM you want the virtual machine to have, specify where the server will sit in terms of Microsoft data centers, and set up and configure recoverability and high availability for your Azure virtual machine.

Hybrid environments

With a hybrid environment, you get a combination of on-premises infrastructure and cloud services.
It allows you to flexibly add resilience and high availability to your existing infrastructure. It's perfectly possible for the cloud to act as a disaster recovery site for your existing infrastructure.

Microsoft Azure

In order to work with the examples in this article, you need to sign up for a Microsoft account. You can visit http://azure.microsoft.com/ and create an account yourself by completing the necessary form. Here, you simply enter your details; you can use your e-mail address as your username. Enter the credentials specified. Return to the Azure website, and if you want to make use of the free trial, click on the free trial link. Currently, you get $125 worth of free Azure services. Once you have clicked on the free trial link, you will have to verify your details. You will also need to enter a credit card number and its details; Microsoft assures that you won't be charged during the free trial. Enter the appropriate details and click on Sign Up.

Summary

In this article, we looked at and discussed some of the terminology around the cloud. From the services offered to some of the specific features available in Microsoft Azure, you should be able to differentiate between a public and a private cloud. You can also now differentiate between some of the public cloud offerings.

Resources for Article:

Further resources on this subject:
Windows Azure Service Bus: Key Features [article]
Digging into Windows Azure Diagnostics [article]
Using the Windows Azure Platform PowerShell Cmdlets [article]

Mailing with Spring Mail

Packt
04 Jun 2015
19 min read
In this article by Anjana Mankale, author of the book Mastering Spring Application Development, we shall see how we can use the Spring mail template to e-mail recipients. We shall also demonstrate using Spring mailing template configurations in different scenarios. (For more resources related to this topic, see here.)

Spring mail message handling process

The flow of the Spring mail message process is as follows: a message is created and passed to the transport protocol, which interacts with internet protocols. Then, the message is received by the recipients. The Spring mail framework requires a mail configuration, or SMTP configuration, as the input, along with the message that needs to be sent. The mail API interacts with internet protocols to send messages. In the next section, we shall look at the classes and interfaces in the Spring mail framework.

Interfaces and classes used for sending mails with Spring

The package org.springframework.mail is used for mail configuration in the Spring application. The following are the three main interfaces that are used for sending mail:

MailSender: This interface is used to send simple mail messages.
JavaMailSender: This interface is a subinterface of the MailSender interface and supports sending MIME mail messages.
MimeMessagePreparator: This interface is a callback interface that supports the JavaMailSender interface in the preparation of mail messages.

The following classes are used for sending mails using Spring:

SimpleMailMessage: This is a class which has properties such as to, from, cc, bcc, sentDate, and many others. A SimpleMailMessage is sent using a MailSender implementation such as JavaMailSenderImpl.
JavaMailSenderImpl: This class is an implementation of the JavaMailSender interface.
MimeMessageHelper: This class helps with preparing MIME messages.

Sending mail using the @Configuration annotation

We shall demonstrate here how we can send mail using the Spring mail API. First, we provide all the SMTP details in the .properties file and read them into a class annotated with @Configuration. The name of the class is MailConfiguration. The mail.properties file contents are shown below:

mail.protocol=smtp
mail.host=localhost
mail.port=25
mail.smtp.auth=false
mail.smtp.starttls.enable=false
mail.from=me@localhost
mail.username=
mail.password=

@Configuration
@PropertySource("classpath:mail.properties")
public class MailConfiguration {
    @Value("${mail.protocol}")
    private String protocol;
    @Value("${mail.host}")
    private String host;
    @Value("${mail.port}")
    private int port;
    @Value("${mail.smtp.auth}")
    private boolean auth;
    @Value("${mail.smtp.starttls.enable}")
    private boolean starttls;
    @Value("${mail.from}")
    private String from;
    @Value("${mail.username}")
    private String username;
    @Value("${mail.password}")
    private String password;

    @Bean
    public JavaMailSender javaMailSender() {
        JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
        Properties mailProperties = new Properties();
        mailProperties.put("mail.smtp.auth", auth);
        mailProperties.put("mail.smtp.starttls.enable", starttls);
        mailSender.setJavaMailProperties(mailProperties);
        mailSender.setHost(host);
        mailSender.setPort(port);
        mailSender.setProtocol(protocol);
        mailSender.setUsername(username);
        mailSender.setPassword(password);
        return mailSender;
    }
}

The next step is to create a REST controller that sends the mail when the user clicks on Submit.
We shall use the SimpleMailMessage class, since we don't have any attachment.

@RestController
class MailSendingController {
    private final JavaMailSender javaMailSender;

    @Autowired
    MailSendingController(JavaMailSender javaMailSender) {
        this.javaMailSender = javaMailSender;
    }

    @RequestMapping("/mail")
    @ResponseStatus(HttpStatus.CREATED)
    SimpleMailMessage send() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo("packt@localhost");
        mailMessage.setReplyTo("anjana@localhost");
        mailMessage.setFrom("Sonali@localhost");
        mailMessage.setSubject("Vani veena Pani");
        mailMessage.setText("MuthuLakshmi how are you? Call Me Please [...]");
        javaMailSender.send(mailMessage);
        return mailMessage;
    }
}

Sending mail using MailSender and SimpleMailMessage with XML configuration

"Simple mail message" means the e-mail sent will only be text-based, with no HTML formatting, no images, and no attachments. In this section, consider a scenario where we send a welcome mail to the user as soon as the user's order is placed in the application. In this scenario, the mail will be sent after the database insertion operation is successful.

The following are the steps for sending mail using the MailSender interface and the SimpleMailMessage class.

Create a new Maven web project with the name Spring4MongoDB_MailChapter3. We have used the same Eshop db database with MongoDB for CRUD operations on Customer, Order, and Product. We have also used the same mvc configurations and source files. Use the same dependencies as used previously, and add the following dependencies to the pom.xml file:

<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-mail</artifactId>
    <version>3.0.2.RELEASE</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>activation</artifactId>
    <version>1.1-rev-1</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>javax.mail</groupId>
    <artifactId>mail</artifactId>
    <version>1.4.3</version>
</dependency>

Compile the Maven project. Create a separate folder called com.packt.mailService for the mail service. Create a simple class named MailSenderService and autowire the MailSender and SimpleMailMessage classes. The basic skeleton is shown here:

public class MailSenderService {
    @Autowired
    private MailSender mailSender;
    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body){
        /*Code */
    }
}

Next, create an object of SimpleMailMessage and set mail properties, such as from, to, and subject, on it:

public void sendmail(String from, String to, String subject, String body){
    SimpleMailMessage message=new SimpleMailMessage();
    message.setFrom(from);
    message.setTo(to);
    message.setSubject(subject);
    message.setText(body);
    mailSender.send(message);
}

We need to configure the SMTP details. Spring mail support provides the flexibility of configuring the SMTP details in the XML file.

<bean id="mailSender" class="org.springframework.mail.javamail.
JavaMailSenderImpl">
    <property name="host" value="smtp.gmail.com" />
    <property name="port" value="587" />
    <property name="username" value="username" />
    <property name="password" value="password" />
    <property name="javaMailProperties">
        <props>
            <prop key="mail.smtp.auth">true</prop>
            <prop key="mail.smtp.starttls.enable">true</prop>
        </props>
    </property>
</bean>

<bean id="mailSenderService" class="com.packt.mailService.MailSenderService">
    <property name="mailSender" ref="mailSender" />
</bean>

</beans>

We need to send mail to the customer after the order has been placed successfully in the MongoDB database. Update the addorder() method as follows:

@RequestMapping(value = "/order/save", method = RequestMethod.POST)
// request to insert an order record
public String addorder(@ModelAttribute("Order") Order order, Map<String, Object> model) {
    Customer cust=new Customer();
    cust=customer_respository.getObject(order.getCustomer().getCust_id());

    order.setCustomer(cust);
    order.setProduct(product_respository.getObject(order.getProduct().getProdid()));
    respository.saveObject(order);
    mailSenderService.sendmail("anjana.mprasad@gmail.com", cust.getEmail(),
        "Dear " + cust.getName() + " Your order details",
        order.getProduct().getName() + "-price-" + order.getProduct().getPrice());
    model.put("customerList", customerList);
    model.put("productList", productList);
    return "order";
}

Sending mail to multiple recipients

If you want to intimate the user regarding the latest products or promotions in the application, you can create a mail sending group and send mail to multiple recipients using Spring mail sending support. We have created an overloaded method in the same class, MailSenderService, which will accept a string array. The code snippet in the class will look like this:

public class MailSenderService {
    @Autowired
    private MailSender mailSender;
    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body){
        /*Code */
    }

    public void sendmail(String from, String []to, String subject, String body){
        /*Code */
    }
}

The following is the code snippet for listing the set of users from MongoDB who have subscribed to promotional e-mails:

public List<Customer> getAllObjectsby_emailsubscription(String status) {
    return mongoTemplate.find(query(where("email_subscribe").is("yes")), Customer.class);
}

Sending MIME messages

Multipurpose Internet Mail Extensions (MIME) allows attachments to be sent over the Internet. This section just demonstrates how we can send mail with MIME messages. Using a MIME message sender type class is not advisable if you are not sending any attachments with the mail message. In the next section, we will look at the details of how we can send mail with attachments.

Update the MailSenderService class with another method. We have used the MIME message preparator and have overridden the prepare() method to set properties for the mail.
public class MailSenderService {
    // A JavaMailSender is required here, because sending a MimeMessagePreparator
    // is only supported by the JavaMailSender interface, not by plain MailSender.
    @Autowired
    private JavaMailSender mailSender;
    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body){
        /*Code */
    }

    public void sendmail(String from, String []to, String subject, String body){
        /*Code */
    }

    public void sendmime_mail(final String from, final String to, final String subject, final String body) throws MailException{
        MimeMessagePreparator message = new MimeMessagePreparator() {
            public void prepare(MimeMessage mimeMessage) throws Exception {
                mimeMessage.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
                mimeMessage.setFrom(new InternetAddress(from));
                mimeMessage.setSubject(subject);
                mimeMessage.setText(body);
            }
        };
        mailSender.send(message);
    }
}

Sending attachments with mail

We can also attach various kinds of files to the mail. This functionality is supported by the MimeMessageHelper class. If you just want to send a MIME message without an attachment, you can opt for MimeMessagePreparator. If the requirement is to have an attachment sent with the mail, we can go for the MimeMessageHelper class with the file APIs. Spring provides a class named org.springframework.core.io.FileSystemResource, which has a parameterized constructor that accepts file objects.

public class SendMailwithAttachment {
    public static void main(String[] args) throws MessagingException {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
        ctx.register(AppConfig.class);
        ctx.refresh();
        JavaMailSenderImpl mailSender = ctx.getBean(JavaMailSenderImpl.class);
        MimeMessage mimeMessage = mailSender.createMimeMessage();
        //Pass the true flag for a multipart message
        MimeMessageHelper mailMsg = new MimeMessageHelper(mimeMessage, true);
        mailMsg.setFrom("ANJUANJU02@gmail.com");
        mailMsg.setTo("RAGHY03@gmail.com");
        mailMsg.setSubject("Test mail with Attachment");
        mailMsg.setText("Please find Attachment.");
        //FileSystemResource object for the attachment
        FileSystemResource file = new FileSystemResource(new File("D:/cp/GODGOD.jpg"));
        mailMsg.addAttachment("GODGOD.jpg", file);
        mailSender.send(mimeMessage);
        System.out.println("---Done---");
    }
}

Sending preconfigured mail

In this example, we shall provide a message that is to be sent in the mail, and we will configure it in an XML file. Sometimes when it comes to web applications, you may have to send messages on maintenance. Think of a scenario where the content of the mail changes, but the sender and receiver are preconfigured. In such a case, you can add another overloaded method to the MailSender class. We have fixed the subject of the mail, and the content can be sent by the user. Think of it as "an application which sends mails to users whenever the build fails".
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.0.xsd">

<context:component-scan base-package="com.packt" />

<!-- Set the default mail properties -->
<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host" value="smtp.gmail.com"/>
    <property name="port" value="25"/>
    <property name="username" value="anju@gmail.com"/>
    <property name="password" value="password"/>
    <property name="javaMailProperties">
        <props>
            <prop key="mail.transport.protocol">smtp</prop>
            <prop key="mail.smtp.auth">true</prop>
            <prop key="mail.smtp.starttls.enable">true</prop>
            <prop key="mail.debug">true</prop>
        </props>
    </property>
</bean>

<!-- You can also have some pre-configured messages which are ready to send -->
<bean id="preConfiguredMessage" class="org.springframework.mail.SimpleMailMessage">
    <property name="to" value="packt@gmail.com"></property>
    <property name="from" value="anju@gmail.com"></property>
    <property name="subject" value="FATAL ERROR- APPLICATION AUTO MAINTENANCE STARTED-BUILD FAILED!!"/>
</bean>
</beans>

Now we shall send two different bodies for the subjects.

public class MyMailer {
    public static void main(String[] args){
        ApplicationMailer mailer = null;
        try{
            //Create the application context
            ApplicationContext context = new FileSystemXmlApplicationContext("application-context.xml");
            //Get the mailer instance
            mailer = (ApplicationMailer) context.getBean("mailService");
            //Send a composed mail
            mailer.sendMail("nikhil@gmail.com", "Test Subject", "Testing body");
        }catch(Exception e){
            //Send a pre-configured mail
            mailer.sendPreConfiguredMail("build failed exception occurred check console or logs " + e.getMessage());
        }
    }
}

Using Spring templates with Velocity to send HTML mails

Velocity is the templating language provided by Apache. It can be integrated into the Spring view layer easily. The latest Velocity version used in this book is 1.7. In the previous section, we demonstrated using Velocity to send e-mails using the @Bean and @Configuration annotations. In this section, we shall see how we can configure Velocity to send mails using XML configuration.

All that needs to be done is to add the following bean definition to the .xml file. In the case of mvc, you can add it to the dispatcher-servlet.xml file.

<bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
    <property name="velocityProperties">
        <value>
            resource.loader=class
            class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
        </value>
    </property>
</bean>

Create a new Maven web project with the name Spring4MongoDB_Mail_VelocityChapter3. Create a package and name it com.packt.velocity.templates. Create a file with the name orderconfirmation.vm:

<html>
<body>
<h3>Dear Customer,</h3>
<p>${customer.firstName} ${customer.lastName}</p>
<p>We have dispatched your order to the following address.</p>
${customer.address}
</body>
</html>

Use all the dependencies that we have added in the previous sections.
To the existing Maven project, add this dependency:

<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity</artifactId>
    <version>1.7</version>
</dependency>

To ensure that Velocity gets loaded on application startup, we shall create a class. Let's name the class VelocityConfiguration.java. We have used the annotations @Configuration and @Bean with the class.

import java.io.IOException;
import java.util.Properties;

import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.exception.VelocityException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.ui.velocity.VelocityEngineFactory;

@Configuration
public class VelocityConfiguration {
    @Bean
    public VelocityEngine getVelocityEngine() throws VelocityException, IOException{
        VelocityEngineFactory velocityEngineFactory = new VelocityEngineFactory();
        Properties props = new Properties();
        props.put("resource.loader", "class");
        props.put("class.resource.loader.class",
            "org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader");
        velocityEngineFactory.setVelocityProperties(props);
        return velocityEngineFactory.createVelocityEngine();
    }
}

Use the same MailSenderService class and add another overloaded sendmail() method to the class.

public void sendmail(final Customer customer){
    MimeMessagePreparator preparator = new MimeMessagePreparator() {
        public void prepare(MimeMessage mimeMessage) throws Exception {
            MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
            message.setTo(customer.getEmail());
            message.setFrom("webmaster@packt.com"); // could be parameterized
            Map model = new HashMap();
            model.put("customer", customer);
            String text = VelocityEngineUtils.mergeTemplateIntoString(
                velocityEngine, "com/packt/velocity/templates/orderconfirmation.vm", model);
            message.setText(text, true);
        }
    };
    this.mailSender.send(preparator);
}

Update the controller class to send mail using the Velocity template.

@RequestMapping(value = "/order/save", method = RequestMethod.POST)
// request to insert an order record
public String addorder(@ModelAttribute("Order") Order order, Map<String, Object> model) {
    Customer cust=new Customer();
    cust=customer_respository.getObject(order.getCustomer().getCust_id());

    order.setCustomer(cust);
    order.setProduct(product_respository.getObject(order.getProduct().getProdid()));
    respository.saveObject(order);
    // send mail using the Velocity template
    mailSenderService.sendmail(cust);

    return "order";
}

Sending Spring mail over a different thread

There are other options for sending Spring mail asynchronously. One way is to hand the mail sending job to a separate thread. Spring comes with the taskExecutor package, which offers us thread pooling functionality. Create a class called MailSenderAsyncService that implements the MailSender interface. Import the org.springframework.core.task.TaskExecutor package. Create a private Runnable class that wraps the message to be sent.
Here is the complete code for MailSenderAsyncService:

public class MailSenderAsyncService implements MailSender{
    @Resource(name = "mailSender")
    private MailSender mailSender;

    private TaskExecutor taskExecutor;

    @Autowired
    public MailSenderAsyncService(TaskExecutor taskExecutor){
        this.taskExecutor = taskExecutor;
    }

    public void send(SimpleMailMessage simpleMessage) throws MailException {
        taskExecutor.execute(new SimpleMailMessageRunnable(simpleMessage));
    }

    public void send(SimpleMailMessage[] simpleMessages) throws MailException {
        for (SimpleMailMessage message : simpleMessages) {
            send(message);
        }
    }

    private class SimpleMailMessageRunnable implements Runnable {
        private SimpleMailMessage simpleMailMessage;

        private SimpleMailMessageRunnable(SimpleMailMessage simpleMailMessage) {
            this.simpleMailMessage = simpleMailMessage;
        }

        public void run() {
            mailSender.send(simpleMailMessage);
        }
    }

    private class SimpleMailMessagesRunnable implements Runnable {
        private SimpleMailMessage[] simpleMessages;

        private SimpleMailMessagesRunnable(SimpleMailMessage[] simpleMessages) {
            this.simpleMessages = simpleMessages;
        }

        public void run() {
            mailSender.send(simpleMessages);
        }
    }
}

Configure the ThreadPool executor in the .xml file:

<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor"
    p:corePoolSize="5" p:maxPoolSize="10" p:queueCapacity="100"
    p:waitForTasksToCompleteOnShutdown="true"/>

Test the source code:

import javax.annotation.Resource;

import org.springframework.mail.MailSender;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.test.context.ContextConfiguration;

@ContextConfiguration
public class MailSenderAsyncServiceTest {
    @Resource(name = "mailSender")
    private MailSender mailSender;

    public void testSendMails() throws Exception {
        SimpleMailMessage[] mailMessages = new SimpleMailMessage[5];

        for (int i = 0; i < mailMessages.length; i++) {
            SimpleMailMessage message = new SimpleMailMessage();
            message.setSubject(String.valueOf(i));
            mailMessages[i] = message;
        }
        mailSender.send(mailMessages);
    }

    public static void main (String args[]) throws Exception {
        MailSenderAsyncServiceTest asyncservice = new MailSenderAsyncServiceTest();
        asyncservice.testSendMails();
    }
}

Sending Spring mail with AOP

We can also send mails by integrating the mailing functionality with Aspect Oriented Programming (AOP). This can be used to send mails after the user registers with an application. Think of a scenario where the user receives an activation mail after registration. This can also be used to send information about an order placed on an application. Use the following steps to create a MailAdvice class using AOP.

Create a package called com.packt.aop. Create a class called MailAdvice.

public class MailAdvice {
    public void advice(final ProceedingJoinPoint proceedingJoinPoint) {
        new Thread(new Runnable() {
            public void run() {
                System.out.println("proceedingJoinPoint:" + proceedingJoinPoint);
                try {
                    proceedingJoinPoint.proceed();
                } catch (Throwable t) {
                    // All we can do is log the error.
                    System.out.println(t);
                }
            }
        }).start();
    }
}

This class creates a new thread and starts it. In the run method, the proceedingJoinPoint.proceed() method is called. ProceedingJoinPoint is an interface provided by AspectJ. Update the dispatcher-servlet.xml file with the aop configuration.
Update the xmlns namespace declarations and add the aop configuration using the following code:

<aop:config>
    <aop:aspect ref="mailAdvice">
        <aop:around method="advice"
            pointcut="execution(* org.springframework.mail.javamail.JavaMailSenderImpl.send(..))"/>
    </aop:aspect>
</aop:config>

Summary

In this article, we demonstrated how to create a mailing service and configure it using the Spring API. We also demonstrated how to send mails with attachments using MIME messages, and how to create a dedicated thread for sending mails using ExecutorService. We saw an example in which mail can be sent to multiple recipients, and saw an implementation of using the Velocity engine to create templates and send mails to recipients. In the last section, we demonstrated how Spring-supported mails can be sent using Spring AOP and threads.

Resources for Article:

Further resources on this subject:
Time Travelling with Spring [article]
Welcome to the Spring Framework [article]
Creating a Spring Application [article]