
How-To Tutorials - Cloud Computing

121 Articles

Load Balancing and HA for ownCloud

Packt
20 Aug 2013
13 min read
(For more resources related to this topic, see here.)

The key strategy

If we look closely at an ownCloud instance for the purpose of load balancing, we will see three components, which are as follows:

A user data storage (until now we were using the system hard disk)
A web server, for example Apache or IIS
A database; MySQL is a good choice for demonstration

The user data storage

Whenever a user creates any file or directory in ownCloud or uploads something, the data gets stored in the data directory. If we have to ensure that our ownCloud instance is capable of storing the data, we have to make this storage redundant. Luckily for us, ownCloud supports a lot of other options out of the box besides local disk storage: we can use a Samba backend, an FTP backend, an OpenStack Swift backend, Amazon S3, WebDAV, and a lot more.

Configuring WebDAV

Web Distributed Authoring and Versioning (WebDAV) is an extension of HTTP. It is described by the IETF in RFC 4918 at http://tools.ietf.org/html/rfc4918. It provides the functionality of editing and managing documents over the web; it essentially makes the web readable and writable. To enable custom backend support, we first have to go to the familiar Apps section and enable the External Storage Support app. After this app is enabled, when we open the ownCloud admin panel, we will see an external storage section on the page. Just choose WebDAV from the drop-down menu and fill in the credentials. Choose the mount point as 0 and set the root as $user/. We do this so that for each user a directory is created on the WebDAV share with their username, and whenever users log in, they are sent to this directory. Just to verify, check out the config/mount.php file for ownCloud.

The web server

Assuming that we have taken care of the backend storage, let's now handle the frontend web server. A very obvious way is to do DNS-level load balancing by round robin or geographical distribution. In a round-robin DNS scheme, the resolution of a name returns a list of IP addresses instead of a single IP. These IP addresses may be returned in round-robin fashion, which means that the IP addresses are permuted in the list every time. This helps distribute the traffic, since usually the first IP is used. Another way to give out the list is to match the IP address of the client to the closest IP in the list, and then make that the first IP in the response to the DNS query. The biggest advantage of DNS-based load distribution is that it is application agnostic. It does not care if the request is for an Apache server running PHP or an IIS server running ASP. It just rotates the IPs, and the server is responsible for handling the request appropriately. So far it sounds all good, but then why don't we use it all the time? Is it sufficient to balance the entire load? Well, this strategy is great for load distribution, but what happens if one of the servers fails? We will run into a major problem then, because DNS servers usually do not do health checks. So if one of our servers fails, we either have to fix it very fast, which is not always easy, or we have to remove that IP from the DNS. But DNS answers are cached by several intermediate caching-only DNS servers, which will continue to serve the stale IPs, and our clients will keep visiting the bad server. Another way is to move the IP from the bad server to the good server. So now this good server will have two IP addresses.
That means that it has to handle twice the load, since DNS will keep sending traffic after permuting the IPs in round-robin fashion. Due to these and several other problems with DNS-level load balancing, we generally either avoid it or use it along with other load-balancing mechanisms.

Load balancing Apache

For the sake of this example, let's assume that we have ownCloud served by two Apache web servers at 192.168.10.10 and 192.168.10.11. Starting with Apache 2.1, a module known as mod_proxy_balancer was introduced. For CentOS, the default Apache package ships with this module, so installing it is not a problem; if we have Apache running from the yum repo, then we already have this module with us. Now, mod_proxy_balancer supports three algorithms for load distribution, which are as follows:

Request Counting

With this algorithm, incoming requests are distributed among backend workers in such a way that each backend gets a proportional number of requests, defined in the configuration by the loadfactor variable. For example, consider this Apache config snippet:

<Proxy balancer://ownCloud>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=byrequests
</Proxy>

In this example, one request out of every four will be sent to 192.168.10.11, and three will be sent to 192.168.10.10. This might be an appropriate configuration for a site with two servers, one of which is more powerful than the other.

Weighted Traffic Counting

The Weighted Traffic Counting algorithm is similar to the Request Counting algorithm, with the minor difference that Weighted Traffic Counting considers the number of bytes instead of the number of requests. In the following configuration example, the number of bytes processed by 192.168.10.10 will be three times that of 192.168.10.11:

<Proxy balancer://ownCloud>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=bytraffic
</Proxy>

Pending Request Counting

The Pending Request Counting algorithm is the latest and most sophisticated algorithm provided by Apache for load balancing. It is available from Apache 2.2.10 onward. In this algorithm, the scheduler keeps track of the number of requests assigned to each backend worker at any given time. Each new incoming request is sent to the backend that has the least number of pending requests, in other words, to the backend worker that is relatively least loaded. This helps keep the request queues even among the backend workers, and each request generally goes to the worker that can process it the fastest. If two workers are equally lightly loaded, the scheduler uses the Request Counting algorithm to break the tie, which is as follows:

<Proxy balancer://ownCloud>
BalancerMember http://192.168.10.11/ # Balancer member 1
BalancerMember http://192.168.10.10/ # Balancer member 2
ProxySet lbmethod=bybusyness
</Proxy>

Enable the Balancer Manager

Sometimes we may need to change our load-balancing configuration, but that may not be easy to do without affecting the running servers. For such situations, the Balancer Manager module provides a web interface to change the status of backend workers on the fly. We can use Balancer Manager to put a worker in offline mode or change its loadfactor, but we must have mod_status installed in order to use Balancer Manager.
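Before adding that configuration, it can be worth confirming that the proxy and status modules are actually available to Apache. The following is only a quick sanity check for a stock CentOS httpd package (module names may differ if Apache was compiled by hand); the first command lists the loaded modules, and a graceful restart can then be used later to apply balancer changes without dropping connections that are already being served:

# httpd -M | grep -E 'proxy_module|proxy_balancer_module|proxy_http_module|status_module'
# apachectl graceful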
A sample config, which should be defined in /etc/httpd/httpd.conf, might look similar to the following code:

<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from .owncloudbook.com
</Location>

Once we add directives similar to the preceding ones to httpd.conf and then restart Apache, we can open the Balancer Manager by pointing a browser at http://owncloudbook.com/balancer-manager.

Load balancing IIS

Load balancing IIS is quite easy using the Windows GUI. Windows Server editions come with a nifty tool for this known as Network Load Balancing (NLB). It balances the load by distributing incoming requests among a cluster of servers. Each server in a cluster emits a heartbeat, a kind of "I am operational" message. NLB ensures that no request goes to a server which is not sending this heartbeat, thereby ensuring that all the requests are processed by operational servers. Let's now configure NLB by performing the following steps: We need to turn it on first. We can do so by following the given steps: Go to Server Manager. Click on the Features section in the left-side bar. Then click on Add Features. Select Network Load Balancing from the list. Once we have chosen Network Load Balancing, we will click on Next >, and then click on Install to get this feature on the servers. Once we are done here, we will open Network Load Balancing Manager from the Administrative Tools section in the Start menu. In the manager window, we need to right-click on the Network Load Balancing Clusters option to create a new cluster, as shown in the following screenshot: Now we need to give the address of the server which is actually running the web server, and then connect to it, as shown in the following screenshot: Choose the appropriate interface. In this example, we have only one; then click on the Next > button. On the next window, we will be shown host parameters, where we have to assign a priority to this host, as shown in the following screenshot: Now click on the Add button, and a dialog will open where we have to assign an IP, which will be shared by all the hosts, as shown in the following screenshot. (Network Load Balancing Manager will configure this IP on all the machines.) On the next dialog, choose a cluster IP, as shown in the following screenshot. This will be the IP used by the users to log in to ownCloud. Now that we have given it an IP, we will define the cluster parameters to use unicast. Multicast and broadcast can be used, but they are not supported by all vendors and require more effort. Now everything is done. We are ready to use the Network Load Balancing feature. These steps are to be repeated on all the machines which are going to be a part of this cluster. So there! We have also load balanced IIS.

The MySQL database

MySQL Cluster is a separate component of MySQL, which is not shipped with the standard MySQL server but can be downloaded freely from http://dev.mysql.com/downloads/cluster/. MySQL Cluster helps with better scalability and ensures high uptime. It is write scalable and ACID compliant, and has no single point of failure because of the way it is designed, with multiple masters and high distribution of data. This is perfect for our requirements, so let's start with its installation.

Basic terminologies

Management node: This node performs the basic management functions. It starts and stops other nodes and performs backups.
It is always a good idea to start this node before starting anything else in the cluster. Data node: This node stores the cluster data. There should always be more than one data node to provide redundancy. SQL node: This node accesses the cluster data. It uses the NDBCLUSTER storage engine. The default MySQL server does not ship with the NDBCLUSTER storage engine and other required features, so it is mandatory to download a server binary that supports the MySQL Cluster feature. We have to download the appropriate source for MySQL Cluster from http://dev.mysql.com/downloads/cluster/ if Linux is the host OS, or the binary if Windows is in consideration. For the purpose of this demonstration, we will assume one Management node, one SQL node, and two Data nodes. We will also note that "node" is a logical word here. It need not be a physical machine; in fact, the nodes can reside on the same machine as separate processes, but then the whole purpose of high availability would be defeated. Let's start by installing the MySQL Cluster nodes.

Data node

Setting up a Data node is fairly simple. Just copy the ndbd and ndbmtd binaries from the bin directory of the archive to /usr/local/bin/ and make them executable as follows:

cp bin/ndbd /usr/local/bin/ndbd
cp bin/ndbmtd /usr/local/bin/ndbmtd
chmod +x /usr/local/bin/ndbd
chmod +x /usr/local/bin/ndbmtd

Management node

The Management node needs only two binaries, ndb_mgmd and ndb_mgm:

cp bin/ndb_mgm* /usr/local/bin
chmod +x /usr/local/bin/ndb_mgm*

SQL node

First of all, we need to create a user for MySQL as follows:

useradd mysql

Now extract the tar.gz archive file downloaded before. Conventionally, the MySQL documentation uses the /usr/local/ directory to unpack the archive, but it can be done anywhere. We'll follow the MySQL conventions here and also create a symbolic link for easier access and better manageability as follows:

tar -C /usr/local -xzvf mysql-cluster-gpl-7.2.12-linux2.6.tar.gz
ln -s /usr/local/mysql-cluster-gpl-7.2.12-linux2.6-i686 /usr/local/mysql

We need to set write permissions for the MySQL user, which we created before, as follows:

chown -R root /usr/local/mysql
chown -R mysql /usr/local/mysql/data
chgrp -R mysql /usr/local/mysql

The preceding commands ensure that the permission to start and stop the MySQL instance remains with the root user, but the mysql user can write data to the data directory. Now, change to the scripts directory and create the system databases as follows:

scripts/mysql_install_db --user=mysql

Configuring the Data node and SQL node

We can configure the Data node and SQL node as follows:

vim /etc/my.cnf

[mysqld]
# Options for mysqld process:
ndbcluster                       # run NDB storage engine

[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=192.168.20.10  # location of management server

Configuring the Management node

We can configure the Management node as follows:

vim /var/lib/mysql-cluster/config.ini

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2     # Number of replicas
DataMemory=200M    # How much memory to allocate for data storage
IndexMemory=50M    # How much memory to allocate for index storage
                   # For DataMemory and IndexMemory, we have used the
                   # default values. Since the "world" database takes up
                   # only about 500KB, this should be more than enough for
                   # this example Cluster setup.

[tcp default]
# TCP/IP options:
portnumber=2202

[ndb_mgmd]
# Management process options:
hostname=192.168.20.10          # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

[ndbd]
# Options for data node "A":
# (one [ndbd] section per data node)
hostname=192.168.20.12          # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
hostname=192.168.0.40           # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
hostname=192.168.20.11          # Hostname or IP address

Summary

Now we have an idea of how to ensure high availability of the ownCloud server components. We have looked at load balancing for the backend data store as well as the frontend web server and the database. We have seen some common approaches and can now provide a reliable ownCloud service to our users.

Resources for Article: Further resources on this subject: Introduction to Cloud Computing with Microsoft Azure [Article] Cross-premise Connectivity [Article] Cloud-enabling Your Apps [Article]

Ceph Instant Deployment

Packt
09 Feb 2015
14 min read
In this article by Karan Singh, author of the book Learning Ceph, we will cover the following topics: Creating a sandbox environment with VirtualBox From zero to Ceph – deploying your first Ceph cluster Scaling up your Ceph cluster – monitor and OSD addition (For more resources related to this topic, see here.)

Creating a sandbox environment with VirtualBox

We can test deploy Ceph in a sandbox environment using Oracle VirtualBox virtual machines. This virtual setup can help us discover and perform experiments with Ceph storage clusters as if we were working in a real environment. Since Ceph is open source software-defined storage deployed on top of commodity hardware in a production environment, we can imitate a fully functioning Ceph environment on virtual machines, instead of real commodity hardware, for our testing purposes. Oracle VirtualBox is free software available at http://www.virtualbox.org for Windows, Mac OS X, and Linux. We must fulfil the system requirements for the VirtualBox software so that it can function properly during our testing. We assume that your host operating system is a Unix variant; on Microsoft Windows host machines, use an absolute path to run the VBoxManage command, which is by default C:\Program Files\Oracle\VirtualBox\VBoxManage.exe. The system requirements for VirtualBox depend upon the number and configuration of virtual machines running on top of it. Your VirtualBox host should have an x86-type processor (Intel or AMD), a few gigabytes of memory (to run three Ceph virtual machines), and a couple of gigabytes of hard drive space. To begin with, we must download VirtualBox from http://www.virtualbox.org/ and then follow the installation procedure once it has been downloaded. We will also need to download the CentOS 6.4 Server ISO image from http://vault.centos.org/6.4/isos/. To set up our sandbox environment, we will create a minimum of three virtual machines; you can create even more machines for your Ceph cluster based on the hardware configuration of your host machine. We will first create a single VM and install the OS on it; after this, we will clone this VM twice. This will save us a lot of time and increase our productivity. Let's begin by performing the following steps to create the first virtual machine: The VirtualBox host machine used throughout this demonstration is a Mac OS X machine, which is a UNIX-type host. If you are performing these steps on a non-UNIX machine, that is, on a Windows-based host, keep in mind that the VirtualBox host-only adapter name will be something like VirtualBox Host-Only Ethernet Adapter #<adapter number>. Please run these commands with the correct adapter names. On Windows-based hosts, you can check the VirtualBox networking options in Oracle VM VirtualBox Manager by navigating to File | VirtualBox Settings | Network | Host-only Networks. After the installation of the VirtualBox software, a network adapter is created that you can use, or you can create a new adapter with a custom IP:

For UNIX-based VirtualBox hosts:

# VBoxManage hostonlyif remove vboxnet1
# VBoxManage hostonlyif create
# VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.1 --netmask 255.255.255.0

For Windows-based VirtualBox hosts:

# VBoxManage.exe hostonlyif remove "VirtualBox Host-Only Ethernet Adapter"
# VBoxManage.exe hostonlyif create
# VBoxManage.exe hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.57.1 --netmask 255.255.255.0

VirtualBox comes with a GUI manager.
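If you want to confirm that the host-only interface was created with the expected address before building any virtual machines, a quick optional check such as the following can be run on UNIX-type hosts (Windows hosts would use VBoxManage.exe); the exact output labels can vary slightly between VirtualBox versions:

# VBoxManage list hostonlyifs | grep -E 'Name|IPAddress'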
If your host is running Linux OS, it should have the X-desktop environment (Gnome or KDE) installed. Open Oracle VM VirtualBox Manager and create a new virtual machine with the following specifications using GUI-based New Virtual Machine Wizard, or use the CLI commands mentioned at the end of every step: 1 CPU 1024 MB memory 10 GB X 4 hard disks (one drive for OS and three drives for Ceph OSD) 2 network adapters CentOS 6.4 ISO attached to VM The following is the step-by-step process to create virtual machines using CLI commands: Create your first virtual machine: # VBoxManage createvm --name ceph-node1 --ostype RedHat_64 --register # VBoxManage modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1 For Windows VirtualBox hosts: # VBoxManage.exe modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter" Create CD-Drive and attach CentOS ISO image to first virtual machine: # VBoxManage storagectl ceph-node1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on # VBoxManage storageattach ceph-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium CentOS-6.4-x86_64-bin-DVD1.iso Make sure you execute the preceding command from the same directory where you have saved CentOS ISO image or you can specify the location where you saved it. Create SATA interface, OS hard drive and attach them to VM; make sure the VirtualBox host has enough free space for creating vm disks. If not, select the host drive which have free space: # VBoxManage storagectl ceph-node1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on # VBoxManage createhd --filename OS-ceph-node1.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-ceph-node1.vdi Create SATA interface, first ceph disk and attach them to VM: # VBoxManage createhd --filename ceph-node1-osd1.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium ceph-node1-osd1.vdi Create SATA interface, second ceph disk and attach them to VM: # VBoxManage createhd --filename ceph-node1-osd2.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium ceph-node1-osd2.vdi Create SATA interface, third ceph disk and attach them to VM: # VBoxManage createhd --filename ceph-node1-osd3.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium ceph-node1-osd3.vdi Now, at this point, we are ready to power on our ceph-node1 VM. You can do this by selecting the ceph-node1 VM from Oracle VM VirtualBox Manager, and then clicking on the Start button, or you can run the following command: # VBoxManage startvm ceph-node1 --type gui As soon as you start your VM, it should boot from the ISO image. After this, you should install CentOS on VM. If you are not already familiar with Linux OS installation, you can follow the documentation at https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/index.html. 
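Before editing the guest's network settings, you can optionally review the VM definition from the host to confirm that the memory, the two network adapters, and the four attached disks all look as intended. This is just a convenience check; the grep patterns assume the output labels used by recent VirtualBox releases:

# VBoxManage showvminfo ceph-node1 | grep -E 'Memory size|NIC 1|NIC 2|SATA Controller'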
Once you have successfully installed the operating system, edit the network configuration of the machine: Edit /etc/sysconfig/network and change the hostname parameter HOSTNAME=ceph-node1 Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and add: ONBOOT=yes BOOTPROTO=dhcp Edit the /etc/sysconfig/network-scripts/ifcfg-eth1 file and add:< ONBOOT=yes BOOTPROTO=static IPADDR=192.168.57.101 NETMASK=255.255.255.0 Edit the /etc/hosts file and add: 192.168.57.101 ceph-node1 192.168.57.102 ceph-node2 192.168.57.103 ceph-node3 Once the network settings have been configured, restart VM and log in via SSH from your host machine. Also, test the Internet connectivity on this machine, which is required to download Ceph packages: # ssh root@192.168.57.101 Once the network setup has been configured correctly, you should shut down your first VM so that we can make two clones of your first VM. If you do not shut down your first VM, the cloning operation might fail. Create clone of ceph-node1 as ceph-node2: # VBoxManage clonevm --name ceph-node2 ceph-node1 --register Create clone of ceph-node1 as ceph-node3: # VBoxManage clonevm --name ceph-node3 ceph-node1 --register After the cloning operation is complete, you can start all three VMs: # VBoxManage startvm ceph-node1 # VBoxManage startvm ceph-node2 # VBoxManage startvm ceph-node3 Set up VM ceph-node2 with the correct hostname and network configuration: Edit /etc/sysconfig/network and change the hostname parameter: HOSTNAME=ceph-node2 Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add: DEVICE=<correct device name of your first network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=dhcp HWADDR= <correct MAC address of your first network interface, check ifconfig -a > Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add: DEVICE=<correct device name of your second network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=static IPADDR=192.168.57.102 NETMASK=255.255.255.0 HWADDR= <correct MAC address of your second network interface, check ifconfig -a > Edit the /etc/hosts file and add: 192.168.57.101 ceph-node1 192.168.57.102 ceph-node2 192.168.57.103 ceph-node3 After performing these changes, you should restart your virtual machine to bring the new hostname into effect. The restart will also update your network configurations. Set up VM ceph-node3 with the correct hostname and network configuration: Edit /etc/sysconfig/network and change the hostname parameter:HOSTNAME=ceph-node3 Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add: DEVICE=<correct device name of your first network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=dhcp HWADDR= <correct MAC address of your first network interface, check ifconfig -a > Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add: DEVICE=<correct device name of your second network interface, check ifconfig -a> ONBOOT=yes BOOTPROTO=static IPADDR=192.168.57.103 NETMASK=255.255.255.0 HWADDR= <correct MAC address of your second network interface, check ifconfig -a > Edit the /etc/hosts file and add: 192.168.57.101 ceph-node1 192.168.57.102 ceph-node2 192.168.57.103 ceph-node3 After performing these changes, you should restart your virtual machine to bring a new hostname into effect; the restart will also update your network configurations. At this point, we prepare three virtual machines and make sure each VM communicates with each other. 
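A simple way to verify that the three guests can reach one another over the host-only network is a short ping loop run from any one of the nodes. This is only a convenience sketch; the addresses are the static IPs assigned above:

for ip in 192.168.57.101 192.168.57.102 192.168.57.103; do
    # two echo requests per node are enough to confirm basic reachability
    ping -c 2 $ip
done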
They should also have access to the Internet to install Ceph packages.

From zero to Ceph – deploying your first Ceph cluster

To deploy our first Ceph cluster, we will use the ceph-deploy tool to install and configure Ceph on all three virtual machines. The ceph-deploy tool is a part of the Ceph software-defined storage, and is used for easier deployment and management of your Ceph storage cluster. Since we created three virtual machines that run CentOS 6.4 and have connectivity with the Internet as well as private network connections, we will configure these machines as a Ceph storage cluster as mentioned in the following diagram: Configure ceph-node1 for an SSH passwordless login to the other nodes. Execute the following commands from ceph-node1: While configuring SSH, leave the passphrase empty and proceed with the default settings: # ssh-keygen Copy the SSH key IDs to ceph-node2 and ceph-node3 by providing their root passwords. After this, you should be able to log in on these nodes without a password: # ssh-copy-id ceph-node2 Installing and configuring EPEL on all Ceph nodes: Install EPEL, which is the repository for installing extra packages for your Linux system, by executing the following command on all Ceph nodes: # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm Make sure the baseurl parameter is enabled under the /etc/yum.repos.d/epel.repo file. The baseurl parameter defines the URL for extra Linux packages. Also make sure the mirrorlist parameter is disabled (commented) in this file. Problems have been observed during installation if the mirrorlist parameter is enabled in the epel.repo file. Perform this step on all three nodes. Install ceph-deploy on the ceph-node1 machine by executing the following command from ceph-node1: # yum install ceph-deploy Next, we will create a Ceph cluster using ceph-deploy by executing the following command from ceph-node1: # ceph-deploy new ceph-node1 ## Create a directory for ceph # mkdir /etc/ceph # cd /etc/ceph The new subcommand of ceph-deploy deploys a new cluster with ceph as the cluster name, which is the default; it generates the cluster configuration and keyring files. List the present working directory; you will find the ceph.conf and ceph.mon.keyring files. In this testing, we will intentionally install the Emperor release (v0.72) of the Ceph software, which is not the latest release. Later in this book, we will demonstrate the upgrade from the Emperor to the Firefly release of Ceph. To install the Ceph software binaries on all the machines using ceph-deploy, execute the following command from ceph-node1: # ceph-deploy install --release emperor ceph-node1 ceph-node2 ceph-node3 The ceph-deploy tool will first install all the dependencies followed by the Ceph Emperor binaries. Once the command completes successfully, check the Ceph version and Ceph health on all the nodes, as follows: # ceph -v Create your first monitor on ceph-node1: # ceph-deploy mon create-initial Once monitor creation is successful, check your cluster status. Your cluster will not be healthy at this stage: # ceph status Create an object storage device (OSD) on the ceph-node1 machine, and add it to the Ceph cluster by executing the following steps: List the disks on the VM: # ceph-deploy disk list ceph-node1 From the output, carefully identify the disks (other than the OS-partition disks) on which we should create Ceph OSDs. In our case, the disk names will ideally be sdb, sdc, and sdd.
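If you want to be certain about the device names before destroying anything, it also helps to look at the partition tables from inside ceph-node1. The following is a generic check that assumes the OS disk is sda while the three empty 10 GB disks appear as sdb, sdc, and sdd:

# cat /proc/partitions
# fdisk -l /dev/sdb /dev/sdc /dev/sdd

The OS disk will already carry partitions, while the OSD candidates should report no valid partition table.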
The disk zap subcommand will destroy the existing partition table and content from the disk. Before running the following command, make sure you use the correct disk device name. # ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd The osd create subcommand will first prepare the disk, that is, erase the disk with a filesystem, which is xfs by default. Then, it will activate the disk's first partition as data partition and second partition as journal: # ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd Check the cluster status for new OSD entries: # ceph status At this stage, your cluster will not be healthy. We need to add a few more nodes to the Ceph cluster so that it can set up a distributed, replicated object storage, and hence become healthy. Scaling up your Ceph cluster – monitor and OSD addition Now we have a single-node Ceph cluster. We should scale it up to make it a distributed, reliable storage cluster. To scale up a cluster, we should add more monitor nodes and OSD. As per our plan, we will now configure ceph-node2 and ceph-node3 machines as monitor as well as OSD nodes. Adding the Ceph monitor A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on an odd number of monitors that's more than one, for example, 3 or 5, to form a quorum. It uses the Paxos algorithm to maintain quorum majority. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster: The firewall rules should not block communication between Ceph monitor nodes. If they do, you need to adjust the firewall rules in order to let monitors form a quorum. Since this is our test setup, let's disable firewall on all three nodes. We will run these commands from the ceph-node1 machine, unless otherwise specified: # service iptables stop # chkconfig iptables off # ssh ceph-node2 service iptables stop # ssh ceph-node2 chkconfig iptables off # ssh ceph-node3 service iptables stop # ssh ceph-node3 chkconfig iptables off Deploy a monitor on ceph-node2 and ceph-node3: # ceph-deploy mon create ceph-node2 # ceph-deploy mon create ceph-node3 The deploy operation should be successful; you can then check your newly added monitors in the Ceph status: You might encounter warning messages related to clock skew on new monitor nodes. To resolve this, we need to set up Network Time Protocol (NTP) on new monitor nodes: # chkconfig ntpd on # ssh ceph-node2 chkconfig ntpd on # ssh ceph-node3 chkconfig ntpd on # ntpdate pool.ntp.org # ssh ceph-node2 ntpdate pool.ntp.org # ssh ceph-node3 ntpdate pool.ntp.org # /etc/init.d/ntpd start # ssh ceph-node2 /etc/init.d/ntpd start # ssh ceph-node3 /etc/init.d/ntpd start Adding the Ceph OSD At this point, we have a running Ceph cluster with three monitors OSDs. Now we will scale our cluster and add more OSDs. To accomplish this, we will run the following commands from the ceph-node1 machine, unless otherwise specified. We will follow the same method for OSD addition: # ceph-deploy disk list ceph-node2 ceph-node3 # ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd # ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd # ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd # ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd # ceph status Check the cluster status for a new OSD. 
At this stage, your cluster will be healthy with nine OSDs in and up: Summary The software-defined nature of Ceph provides a great deal of flexibility to its adopters. Unlike other proprietary storage systems, which are hardware dependent, Ceph can be easily deployed and tested on almost any computer system available today. Moreover, if getting physical machines is a challenge, you can use virtual machines to install Ceph, as mentioned in this article, but keep in mind that such a setup should only be used for testing purposes. In this article, we learned how to create a set of virtual machines using the VirtualBox software, followed by Ceph deployment as a three-node cluster using the ceph-deploy tool. We also added a couple of OSDs and monitor machines to our cluster in order to demonstrate its dynamic scalability. We recommend you deploy a Ceph cluster of your own using the instructions mentioned in this article. Resources for Article: Further resources on this subject: Linux Shell Scripting - various recipes to help you [article] GNU Octave: Data Analysis Examples [article] What is Kali Linux [article]

Troubleshooting Storage Contention

Packt
14 Nov 2013
6 min read
(For more resources related to this topic, see here.) Now that we have learned about the various tools we can use to troubleshoot vSphere Storage and tackled the common issues that appear when we are trying to connect to our datastores, it's time for us to look at another type of issue with storage: contention. Storage contention is one of the most common causes of problems inside a virtual environment and is almost always the cause of slowness and performance issues. One of the biggest benefits of virtualization is consolidation: the ability to take multiple workloads and run them on a smaller number of systems, clustered with shared resources, and with one management interface. That said, as soon as we begin to share these resources, contention is sure to occur. This article will help with some of the common issues we face pertaining to storage contention and performance. Identifying storage contention and performance issues One of the biggest causes of poor storage performance is quite often the result of high I/O latency values. Latency in its simplest definition is a measure of how long it takes for a single I/O request to occur from the standpoint of your virtualized applications. As we will find out later, vSphere further breaks the latency values down into more detailed and precise values based on individual components of the stack in order to aid us with troubleshooting. But is storage latency always a bad thing? The answer to that is "it depends". Obviously, a high latency value is one of the least desirable metrics in terms of storage devices, but in terms of applications, it really depends on the type of workload we are running. Heavily utilized databases, for instance, are usually very sensitive when it comes to latency, often requiring very low latency values before exhibiting timeouts and degradation of performance. There are however other applications, usually requiring throughput, which will not be as sensitive to latency and have a higher latency threshold. In all cases, we as vSphere administrators will always want to do our best to minimize storage latency and should be able to quickly identify issues related to latency. As a vSphere administrator, we need to be able to monitor latency in our vSphere environment. This is where esxtop can be our number one tool. We will focus on three counters: DAVG/cmd, KAVG/cmd, and GAVG/cmd, all of which are explained in the following table: When looking at the thresholds outlined in the preceding table, we have to understand that these are developed as more of a recommendation rather than a hard rule. Certainly, 25 ms of device latency isn't good, but it will affect our applications in different ways, sometimes bad, sometimes not at all. The following sections will outline how we can view latency statistics as they pertain to disk adapters, disk devices, and virtual machines. Disk adapter latency statistics By activating the disk adapter display in esxtop, we are able to view our latency statistics as they relate to our HBAs and paths. This is helpful in terms of troubleshooting as it allows us to determine if the issue resides only on a single HBA or a single path to our storage array, as shown in the following screenshot: Use the following steps to activate the disk adapter latency display: Start esxtop by executing the esxtop command. Press d to switch to the disk adapter display. Press f to select which columns you would like to display. Toggle the fields by pressing their corresponding letters. 
In order to view latency statistics effectively, we need to ensure that we have turned on Adapter Name (A), Path Name (B), and Overall Latency Stats (G) at the very least. esxtop counters are also available for read and write latency specifically along with the overall latency statistics. This can be useful when troubleshooting storage latency as you may be experiencing quite a bit more write latency than read latency which can help you isolate the problems to different storage components. Disk device latency statistics The disk device display is crucial when troubleshooting storage contention and latency issues as it allows us to segregate any issues that may be occurring on a LUN by LUN basis. Use the following steps to activate the disk device latency display: Start esxtop by executing the esxtop command. Press u to switch to the disk device display. Press f to select which columns you would like to display. Toggle the fields by pressing their corresponding letters. In order to view latency statistics effectively, we need to ensure that we have turned on Device Name (A) and Overall Latency Stats (I) at the very least. By default, the Device column is not long enough to display the full ID of each device. For troubleshooting, we will need the complete device ID. We can enlarge this column by pressing L and entering the length as an integer that we want. Virtual machine latency statistics The latency statistics displayed inside the virtual machine display are not displayed using the same column headers as the previous two views. Instead, they are displayed as LAT/rd and LAT/wr. These counters are measured in milliseconds and represent the amount of time it takes to issue an I/O request from the virtual machine. This is a great view that can be used to determine a couple of things. One, is it just one virtual machine that is experiencing latency? And two, is the latency observed on mostly reads or writes? Use the following steps to activate the virtual machine latency display: Start esxtop by executing the esxtop command. Press v to switch to the virtual machine disk display. Press f to select which columns you would like to display. Toggle the fields by pressing their corresponding letters. In order to view latency statistics effectively, we need to ensure that we have turned on VM Name (B), Read Latency Stats (G), and Write Latency Stats (H). Summary Storage contention and performance issues are one of the most common causes of slowness and outages within vSphere. Due to the number of software and hardware components involved in the vSphere storage stack, it's hard for us to pinpoint exactly where the root cause of a storage contention issue is occurring. Using some of the tools, examples, features, and common causes explained in this article, we should be able to isolate issues, making it easier for us to troubleshoot and resolve problems. Resources for Article : Further resources on this subject: Network Virtualization and vSphere [Article] Networking Performance Design [Article] vCloud Networks [Article]

How Storage Works on Amazon

Packt
22 Jul 2011
9 min read
Amazon Web Services: Migrating your .NET Enterprise Application. Evaluate your Cloud requirements and successfully migrate your .NET Enterprise Application to the Amazon Web Services Platform.

Creating an S3 bucket with logging

Logging provides detailed information on who accessed what data in your bucket and when. However, to turn on logging for a bucket, an existing bucket must already have been created to hold the logging information, as this is where AWS stores it. To create a bucket with logging, click on the Create Bucket button in the Buckets sidebar. This time, however, click on the Set Up Logging button. You will be presented with a dialog that allows you to choose the location for the logging information, as well as the prefix for your logging data. You will note that we have pointed the logging information back at the original bucket, migrate_to_aws_01. Logging information will not appear immediately; however, a file will be created every few minutes depending on activity. The following screenshot shows an example of the files that are created: Before jumping right into the command-line tools, it should be noted that the AWS Console includes a Java-based multi-file upload utility that allows a maximum size of 300 MB for each file.

Using the S3 command-line tools

Unfortunately, Amazon does not provide official command-line tools for S3 similar to the tools they have provided for EC2. However, there is an excellent, simple, free utility provided at http://s3.codeplex.com, called S3.exe, that requires no installation and runs without the requirement of third-party packages. To install the program, just download it from the website and copy it to your C:\AWS folder.

Setting up your credentials with S3.exe

Before we can run S3.exe, we first need to set up our credentials. To do that you will need to get your S3 Access Key and your S3 Secret Access Key from the credentials page of your AWS account. Browse to the following location in your browser, https://aws-portal.amazon.com/gp/aws/developer/account/index.html?ie=UTF8&action=access-key, and scroll down to the Access Credentials section. The Access Key is displayed in this screen; however, to get your Secret Access Key you will need to click on the Show link under the Secret Access Key heading. Run the following command to set up S3.exe:

C:\AWS>s3 auth AKIAIIJXIP5XC6NW3KTQ 9UpktBlqDroY5C4Q7OnlF1pNXtK332TslYFsWy9R

To check that the tool has been installed correctly, run the s3 list command:

C:\AWS>s3 list

You should get the following result:

Copying files to S3 using S3.exe

First, create a file called myfile.txt in the C:\AWS directory. To copy this file to an S3 bucket that you own, use the following command:

C:\AWS>s3 put migrate_to_aws_02 myfile.txt

This command copies the file to the migrate_to_aws_02 bucket with the default permissions of full control for the owner. You will need to refresh the AWS Console to see the file listed. Uploading larger files to AWS can be problematic, as any network connectivity issues during the upload will terminate the upload. To upload larger files, use the following syntax:

C:\AWS>s3 put migrate_to_aws_02/mybigfile/ mybigfile.txt /big

This breaks the upload into small chunks, which can be reversed when getting the file back again. If you run the same command again, you will note that no chunks are uploaded. This is because S3.exe does not upload a chunk again if the checksum matches.
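If there are many files to move, the same put syntax can be wrapped in an ordinary command-prompt loop. This is only a sketch that reuses the s3 put form shown above; the *.log file mask is just a placeholder for whatever you need to upload:

C:\AWS>for %f in (*.log) do s3 put migrate_to_aws_02 "%f"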
Retrieving files from S3 using S3.exe

Retrieving files from S3 is the reverse of copying files up to S3. To get a single file back, use:

C:\AWS>s3 get migrate_to_aws_02/myfile.txt

To get our big file back again, use:

C:\AWS>s3 get migrate_to_aws_02/mybigfile/mybigfile.txt /big

The S3.exe command automatically recombines our large file chunks back into a single file.

Importing and exporting large amounts of data in and out of S3

Because S3 lives in the cloud within Amazon's data centers, it may be costly and time consuming to transfer large amounts of data between Amazon's data center and your own data center. An example of a large file transfer may be a large database backup file that you wish to migrate from your own data center to AWS. Luckily for us, Amazon provides the AWS Import/Export Service for the US Standard and EU (Ireland) regions; however, this service is not supported for the other two regions at this time. The AWS Import service allows you to place your data on a portable hard drive and physically mail your hard disk to Amazon for uploading/downloading of your data from within Amazon's data center. Amazon provides the following recommendations for when to use this service: if your connection is 1.55 Mbps and your data is 100 GB or more; if your connection is 10 Mbps and your data is 600 GB or more; if your connection is 44.736 Mbps and your data is 2 TB or more; if your connection is 100 Mbps and your data is 5 TB or more. Make sure if you choose either the US West (California) or Asia Pacific (Singapore) regions that you do not need access to the AWS Import/Export service, as it is not available in these regions.

Setting up the Import/Export service

To begin using this service, you will once again need to sign up for it separately from your other services. Click on the Sign Up for AWS Import/Export button located on the product page http://aws.amazon.com/importexport, confirm the pricing, and click on the Complete Sign Up button. Once again, you will need to wait for the service to become active. Current costs are:

Cost Type | US East | US West | EU | APAC
Device handling | $80.00 | $80.00 | $80.00 | $99.00
Data loading time | $2.49 per data loading hour | $2.49 per data loading hour | $2.49 per data loading hour | $2.99 per data loading hour

Using the Import/Export service

To use the Import/Export service, first make sure that your external disk device conforms to Amazon's specifications.

Confirming your device specifications

The details are specified at http://aws.amazon.com/importexport/#supported_devices, but essentially as long as it is a standard external USB 2.0 hard drive or a rack-mountable device less than 8U supporting eSATA, you will have no problems. Remember to supply a US power plug adapter if you are not located in the United States.

Downloading and installing the command-line service tool

Once you have confirmed that your device meets Amazon's specifications, download the command-line tools for the Import/Export service. At this time, it is not possible to use this service from the AWS Console. The tools are located at http://awsimportexport.s3.amazonaws.com/importexport-webservice-tool.zip. Copy the .zip file to the C:\AWS directory and unzip it; the files will most likely end up in the directory C:\AWS\importexport-webservice-tool.
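Since the Import/Export tool is run through java -jar (as shown in the next section), it is worth confirming that a Java runtime is on the PATH and that the archive unpacked where you expect before creating a job:

C:\AWS\importexport-webservice-tool>java -version
C:\AWS\importexport-webservice-tool>dir lib

The dir listing should show the AWSImportExportWebServiceTool JAR file that the commands in the next section reference.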
Creating a job

To create a job, change directory to the C:\AWS\importexport-webservice-tool directory, open Notepad, and paste the following text into a new file:

manifestVersion: 2.0
bucket: migrate_to_aws_01
accessKeyId: AKIAIIJXIP5XC6NW3KTQ
deviceId: 12345678
eraseDevice: no
returnAddress:
    name: Rob Linton
    street1: Level 1, Migrate St
    city: Amazon City
    stateOrProvince: Amazon
    postalCode: 1000
    phoneNumber: 12345678
    country: Amazonia
customs:
    dataDescription: Test Data
    encryptedData: yes
    encryptionClassification: 5D992
    exportCertifierName: Rob Linton
    requiresExportLicense: no
    deviceValue: 250.00
    deviceCountryOfOrigin: China
    deviceType: externalStorageDevice

Edit the text to reflect your own postal address, accessKeyId, and bucket name, and save the file as MyManifest.txt. For more information on the customs configuration items, refer to http://docs.amazonwebservices.com/AWSImportExport/latest/DG/index.html?ManifestFileRef_international.html. If you are located outside of the United States, a customs section in the manifest is a requirement. In the same folder, open the AWSCredentials.properties file in Notepad, and copy and paste in both your AWS Access Key ID and your AWS Secret Access Key. The file should look like this:

# Fill in your AWS Access Key ID and Secret Access Key
# http://aws.amazon.com/security-credentials
accessKeyId:AKIAIIJXIP5XC6NW3KTQ
secretKey:9UpktBlqDroY5C4Q7OnlF1pNXtK332TslYFsWy9R

Now that you have created the required files, run the following command in the same directory:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CreateJob Import MyManifest.txt .

Your job will be created along with a .SIGNATURE file in the same directory.

Copying the data to your disk device

Now you are ready to copy your data to your external disk device. However, before you start, it is mandatory to copy the .SIGNATURE file created in the previous step into the root directory of your disk device.

Sending your disk device

Once your data and the .SIGNATURE file have been copied to your disk device, print out the packing slip and fill out the details. The JOBID can be obtained in the output from your earlier create job request; in our example the JOBID is XHNHC. The DEVICE IDENTIFIER is the device serial number, which was entered into the manifest file; in our example it was 12345678. The packing slip must be enclosed in the package used to send your disk device. Each package can have only one storage device and one packing slip; multiple storage devices must be sent separately. Address the package with the address output in the create job request:

AWS Import/Export JOBID TTVRP
2646 Rainier Ave South Suite 1060
Seattle, WA 98144

Please note that this address may change depending on what region you are sending your data to. The correct address will always be returned from the Create Job command in the AWS Import/Export Tool.

Managing your Import/Export jobs

Once your job has been submitted, the only way to get the current status of your job or to modify your job is to run the AWS Import/Export command-line tool. Here is an example of how to list your jobs and how to cancel a job.
To get a list of your current jobs, you can run the following command:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar ListJobs

To cancel a job, you can run the following command:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CancelJob XHNHC

Creating an Application from Scratch

Packt
13 Nov 2013
5 min read
(For more resources related to this topic, see here.)

Creating an application

Using the Command Line Tool, we are going to create a new application. This application is also going to be a Sinatra application that displays some basic date and time information. First, navigate to a new directory that will be used to contain the code. Everything in this directory will be uploaded to AppFog when we create the new application.

$ mkdir insideaf4
$ cd insideaf4

Now, create a new file called insideaf4.rb. The contents of the file should look like the following:

require 'sinatra'

get '/' do
  erb :index
end

This tells Sinatra to listen for requests to the base URL of / and then render the index page that we will create next. If you are using Ruby 1.8.7, you may need to add the following line at the top:

require 'rubygems'

Next, create a new directory called views under the insideaf4 directory:

$ mkdir views
$ cd views

Now we are going to create a new file under the views directory called index.erb. This file will be the one that displays the date and time information for our example. The following are the contents of the index.erb file:

<html>
<head>
  <title>Current Time</title>
</head>
<body>
  <% time = Time.new %>
  <h1>Current Time</h1>
  <table border="1" cellpadding="5">
    <tr>
      <td>Name</td>
      <td>Value</td>
    </tr>
    <tr>
      <td>Date (M/D/Y)</td>
      <td><%= time.strftime('%m/%d/%Y') %></td>
    </tr>
    <tr>
      <td>Time</td>
      <td><%= time.strftime('%I:%M %p') %></td>
    </tr>
    <tr>
      <td>Month</td>
      <td><%= time.strftime('%B') %></td>
    </tr>
    <tr>
      <td>Day</td>
      <td><%= time.strftime('%A') %></td>
    </tr>
  </table>
</body>
</html>

This file will create a table that shows a number of different ways to format the date and time. Embedded in the HTML code are Ruby snippets that look like <%= %>. Inside these snippets we use Ruby's strftime method to display the current date and time in a number of different string formats. At the beginning of the file, we create a new instance of a Time object, which is automatically set to the current time. Then we use the strftime method to display different values in the table. For more information on using Ruby dates, please see the documentation available at http://www.ruby-doc.org/core-2.0.0/Time.html.

Testing the application

Before creating an application in AppFog, it is useful to test it out locally first. To do this you will again need the Sinatra gem installed. If you need to do that, refer to Appendix, Installing the AppFog Gem. The following is the command to run your small application:

$ ruby insideaf4.rb

You will see the Sinatra application start and then you can navigate to http://localhost:4567/ in a browser. You should see a page that has the current date and time information like the following screenshot: To terminate the application, return to the command line and press Control+C.

Publishing to AppFog

Now that you have a working application, you can publish it to AppFog and create the new AppFog application. Before you begin, make sure you are in the root directory of your project. For this example that was the insideaf4 directory. Next, you will need to log in to AppFog.

$ af login
Attempting login to [https://api.appfog.com]
Email: matt@somecompany.com
Password: ********
Successfully logged into [https://api.appfog.com]

You may be asked for your e-mail and password again, but the tool may remember your session if you logged in recently. Now you can push your application to AppFog, which will create a new application for you. Make sure you are in the correct directory and use the push command.
You will be prompted for a number of settings during the publishing process. In each case there will be a list of options along with a default. The default value will be listed with a capital letter or listed by itself in square brackets. For our purposes, you can just press Enter for each prompt to accept the default value. The only exception to that is the step that prompts you to choose an infrastructure. In that step you will need to make a selection. $ af push insideaf4 Would you like to deploy from the current directory? [Yn]: Detected a Sinatra Application, is this correct? [Yn]: 1: AWS US East - Virginia 2: AWS EU West - Ireland 3: AWS Asia SE - Singapore 4: HP AZ 2 - Las Vegas Select Infrastructure: Application Deployed URL [insideaf4.aws.af.cm]: Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]: How many instances? [1]: Bind existing services to insideaf4? [yN]: Create services to bind to insideaf4? [yN]: Would you like to save this configuration? [yN]: Creating Application: OK Uploading Application: Checking for available resources: OK Packing application: OK Uploading (1K): OK Push Status: OK Staging Application insideaf4: OK Starting Application insideaf4: OK Summary Hence we have learned how to create application using Appfog. Resources for Article : Further resources on this subject: AppFog Top Features You Need to Know [Article] SSo, what is Node.js? [Article] Learning to add dependencies [Article]

Creating and Managing VMFS Datastores

Packt
05 Mar 2015
5 min read
In this article by Abhilash G B, author of VMware vSphere 5.5 Cookbook, we will learn how to expand or grow a VMFS datastore with the help of two methods: using the Increase Datastore Capacity wizard and using the ESXi CLI tool vmkfstools. (For more resources related to this topic, see here.)

Expanding/growing a VMFS datastore

It is likely that you will run out of free space on a VMFS volume over time as you deploy more and more VMs on it, especially in a growing environment. Fortunately, adding free space to a VMFS volume is possible. However, this requires that the LUN either has free space left on it or has been expanded/resized in the storage array. Because the procedure to resize/expand the LUN in the storage array differs from vendor to vendor, we assume that the LUN either has free space on it or has already been expanded. The following flowchart provides a high-level overview of the procedure:

How to do it...

We can expand a VMFS datastore using two methods: using the Increase Datastore Capacity wizard, or using the ESXi CLI tool vmkfstools. Before attempting to grow the VMFS datastore, issue a rescan on the HBAs to ensure that the ESXi host sees the increased size of the LUN. Also, make a note of the NAA ID, LUN number, and size of the LUN backing the VMFS datastore that you are trying to expand/grow.

Using the Increase Datastore Capacity wizard

We will go through the following process to expand an existing VMFS datastore using the vSphere Web Client's GUI: Use the vSphere Web Client to connect to vCenter Server. Navigate to Home | Storage. With the data center object selected, navigate to Related Objects | Datastores. Right-click on the datastore you intend to expand and click on Increase Datastore Capacity.... Select the LUN backing the datastore and click on Next. Use the Partition Configuration drop-down menu to select the free space left in DS01 to expand the datastore. On the Ready to Complete screen, review the information and click on Finish to expand the datastore.

Using the ESXi CLI tool vmkfstools

A VMFS volume can also be expanded using the vmkfstools tool. As with any command-line tool, it can sometimes be difficult to remember the process if you are not doing it often enough to know it like the back of your hand. Hence, the following flowchart provides an overview of the command-line steps that need to be taken to expand a VMFS volume: Now that we know the order of the steps from the flowchart, let's delve right into the procedure: Identify the datastore you want to expand using the following command, and make a note of the corresponding NAA ID:

esxcli storage vmfs extent list

Here, the NAA ID corresponding to the DS01 datastore is naa.6000eb30adde4c1b0000000000000083. Verify that the ESXi host sees the new size of the LUN backing the datastore by issuing the following command:

esxcli storage core device list -d naa.6000eb30adde4c1b0000000000000083

Get the current partition table information using the following command:

Syntax: partedUtil getptbl "Devfs Path of the device"
Command: partedUtil getptbl /vmfs/devices/disks/naa.6000eb30adde4c1b0000000000000083

Calculate the new last sector value.
Moving the last sector value closer to the total sector value is necessary in order to use the additional space. The formula to calculate the last sector value is as follows:
(Total number of sectors) – (Start sector value) = Last sector value
So, the last sector value to be used is as follows:
(31457280 – 2048) = 31455232
Resize the VMFS partition by issuing the following command:
Syntax: partedUtil resize "Devfs Path" PartitionNumber NewStartingSector NewEndingSector
Command: partedUtil resize /vmfs/devices/disks/naa.6000eb30adde4c1b0000000000000083 1 2048 31455232
Issue the following command to grow the VMFS filesystem:
Syntax: vmkfstools --growfs <Devfs Path: Partition Number> <Same Devfs Path: Partition Number>
Command: vmkfstools --growfs /vmfs/devices/disks/naa.6000eb30adde4c1b0000000000000083:1 /vmfs/devices/disks/naa.6000eb30adde4c1b0000000000000083:1
Once the command executes successfully, it returns you to the root prompt; there is no on-screen output for this command.
How it works...
Expanding a VMFS datastore refers to the act of increasing its size within its own extent. This is possible only if there is free space available immediately after the extent. The maximum size of a LUN is 64 TB, so the maximum size of a VMFS volume is also 64 TB. The virtual machines hosted on this VMFS datastore can remain powered on while this task is being accomplished.
Summary
This article walks you through the process of creating and managing VMFS datastores.
Resources for Article:
Further resources on this subject:
Introduction to vSphere Distributed Switches [article]
Introduction to VMware Horizon Mirage [article]
Backups in the VMware View Infrastructure [article]
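As a quick footnote to this recipe, the only arithmetic in the command-line procedure is the last-sector calculation in step 4, and it is easy to script. The following minimal Python sketch applies the recipe's formula to the example values shown above and prints the resulting partedUtil and vmkfstools commands for review; the device path and sector counts are the sample values from this article, so substitute your own before using the output.

# Derive the resize arguments from the formula used in step 4 above.
# Device path and sector counts are the example values from this recipe.
DEVICE = "/vmfs/devices/disks/naa.6000eb30adde4c1b0000000000000083"
PARTITION = 1
TOTAL_SECTORS = 31457280   # total number of sectors reported by partedUtil getptbl
START_SECTOR = 2048        # start sector of the existing VMFS partition

# (Total number of sectors) - (Start sector value) = Last sector value
last_sector = TOTAL_SECTORS - START_SECTOR   # 31455232 for the example values

print(f"partedUtil resize {DEVICE} {PARTITION} {START_SECTOR} {last_sector}")
print(f"vmkfstools --growfs {DEVICE}:{PARTITION} {DEVICE}:{PARTITION}")

Printing the commands instead of running them keeps the sketch free of side effects; on a real host you would still execute them from the ESXi shell as shown in the recipe.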

Windows Azure Service Bus: Key Features

Packt
06 Dec 2012
13 min read
(For more resources related to this topic, see here.) Service Bus The Windows Azure Service Bus provides a hosted, secure, and widely available infrastructure for widespread communication, large-scale event distribution, naming, and service publishing. Service Bus provides connectivity options for Windows Communication Foundation (WCF) and other service endpoints, including REST endpoints, that would otherwise be difficult or impossible to reach. Endpoints can be located behind Network Address Translation (NAT) boundaries, or bound to frequently changing, dynamically assigned IP addresses, or both. Getting started To get started and use the features of Services Bus, you need to make sure you have the Windows Azure SDK installed. Queues Queues in the AppFabric feature (different from Table Storage queues) offer a FIFO message delivery capability. This can be an outcome for those applications that expect messages in a certain order. Just like with ordinary Azure Queues, Service Bus Queues enable the decoupling of your application components and can still function, even if some parts of the application are offline. Some differences between the two types of queues are (for example) that the Service Bus Queues can hold larger messages and can be used in conjunction with Access Control Service. Working with queues To create a queue, go to the Windows Azure portal and select the Service Bus, Access Control & Caching tab. Next, select Service Bus, select the namespace, and click on New Queue. The following screen will appear. If you did not set up a namespace earlier you need to create a namespace before you can create a queue: There are some properties that can be configured during the setup process of a queue. Obviously, the name uniquely identifies the queue in the namespace. Default Message Time To Live configures messages having this default TTL. This can also be set in code and is a TimeSpan value. Duplicate Detection History Time Window implicates how long the message ID (unique) of the received messages will be retained to check for duplicate messages. This property will be ignored if the Required Duplicate Detection option is not set. Keep in mind that a long detection history results in the persistency of message IDs during that period. If you process many messages, the queue size will grow and so does your bill. When a message expires or when the limit of the queue size is reached, it will be deadlettered . This means that they will end up in a different queue named $DeadLetterQueue. Imagine a scenario where a lot of traffic in your queue results in messages in the dead letter queue. Your application should be robust and process these messages as well. The lock duration property defines the duration of the lock when the PeekLock() method is called. The PeekLock() method hides a specific message from other consumers/processors until the lock duration expires. Typically, this value needs to be sufficient to process and delete the message. A sample scenario Remember the differences between the two queue types that Windows Azure offers, where the Service Bus queues are able to guarantee first-in first-out and to support transactions. The scenario is when a user posts a geotopic on the canvas containing text and also uploads a video by using the parallel upload functionality. What should happen next is for the WCF service CreateGeotopic() to post a message in the queue to enter the geotopic, but when the file finishes uploading, there is also a message sent to the queue. 
These two together should be in a single transaction. Geotopia.Processor processes this message but only if the media file is finished uploading. In this example, you can see how a transaction is handled and how a message can be abandoned and made available on the queue again. If the geotopic is validated as a whole (file is uploaded properly), the worker role will reroute the message to a designated audit trail queue to keep track of actions made by the system and also send to a topic (see next section) dedicated to keeping messages that need to be pushed to possible mobile devices. The messages in this topic will again be processed by a worker role. The reason for choosing a separate worker role is that it creates a role, a loosely-coupled solution, and possible to be fine-grained by only scaling the back-end worker role. See the following diagram for an overview of this scenario: In the previous section, we already created a queue named geotopicaqueue. In order to work with queues, you need the service identity (in this case we use a service identity with a symmetric issuer and the key credentials) of the service namespace. Preparing the project In order to make use of the Service Bus capabilities, you need to add a reference to Microsoft.ServiceBus.dll, located in <drive>:Program FilesMicrosoft SDKsWindows Azure.NET SDK2012-06ref. Next, add the following using statements to your file: using Microsoft.ServiceBus; using Microsoft.ServiceBus.Messaging; Your project is now ready to make use of Service Bus queues. In the configuration settings of the web role project hosting the WCF services, add a new configuration setting named ServiceBusQueue with the following value: "Endpoint=sb://<servicenamespace>.servicebus.windows. net/;SharedSecretIssuer=<issuerName>;SharedSecretValue=<yoursecret>" The properties of the queue you configured in the Windows Azure portal can also be set programmatically. Sending messages Messages that are sent to a Service Bus queue are instances of BrokeredMessage. This class contains standard properties such as TimeToLive and MessageId. An important property is Properties, which is of type IDictionary<string, object>, where you can add additional data. The body of the message can be set in the constructor of BrokerMessage, where the parameter must be of a type decorated with the [Serializable] attribute. The following code snippet shows how to send a message of type BrokerMessage: MessagingFactory factory = MessagingFactory.CreateFromConnectionString (connectionString); MessageSender sender = factory.CreateMessageSender("geotopiaqueue"); sender.Send(new BrokeredMessage( new Geotopic { id = id, subject = subject, text = text, PostToFacebook = PostToFacebook, accessToken = accessToken, MediaFile = MediaFile //Uri of uploaded mediafile })); As the scenario depicts a situation where two messages are expected to be sent in a certain order and to be treated as a single transaction, we need to add some more logic to the code snippet. Right before this message is sent, the media file is uploaded by using the BlobUtil class. Consider sending the media file together with BrokeredMessage if it is small enough. This might be a long-running operation, depending on the size of the file. The asynchronous upload process returns Uri, which is passed to BrokeredMessage. The situation is: A multimedia file is uploaded from the client to Windows Azure Blob storage using a parallel upload (or passed on in the message). 
A Parallel upload is breaking up the media file in several chunks and uploading them separately by using multithreading. A message is sent to geotopiaqueue, and Geotopia.Processor processes the messages in the queues in a single transaction. Receiving messages On the other side of the Service Bus queue resides our worker role, Geotopia. Processor, which performs the following tasks: It grabs the messages from the queue Sends the message straight to a table in Windows Azure Storage for auditing purposes Creates a geotopic that can be subscribed to The following code snippet shows how to perform these three tasks: MessagingFactory factory = MessagingFactory.CreateFromConnectionString (connectionString); MessageReceiver receiver = factory.CreateMessageReceiver("geotopiaqueue "); BrokeredMessage receivedMessage = receiver.Receive(); try { ProcessMessage(receivedMessage); receivedMessage.Complete(); } catch (Exception e) { receivedMessage.Abandon(); } Cross-domain communication We created a new web role in our Geotopia solution, hosting the WCF services we want to expose. As the client is a Silverlight one (and runs in the browser), we face cross-domain communication. To protect against security vulnerabilities and to prevent cross-site requests from a Silverlight client to some services (without the notice of the user), Silverlight by default allows only site-of-origin communication. A possible exploitation of a web application is cross-site forgery, exploits that can occur when cross-domain communication is allowed; for example, a Silverlight application sending commands to some service running on the Internet somewhere. As we want the Geotopia Silverlight client to access the WCF service running in another domain, we need to explicitly allow cross-domain operations. This can be achieved by adding a file named clientaccesspolicy.xml at the root of the domain where the WCF service is hosted and allowing this cross-domain access. Another option is to add a crossdomain.xml file at the root where the service is hosted. Please go to http://msdn.microsoft.com/en-us/library/cc197955(v=vs.95).aspx to find more details on the cross-domain communication issues. Comparison The following table shows the similarities and differences between Windows Azure and Service Bus queues: Criteria Windows Azure queue Service Bus queue Ordering guarantee No, but based on best effort first-in, first out First-in, first-out Delivery guarantee At least once At most once; use the PeekLock() method to ensure that no messages are missed. PeekLock() together with the Complete() method enable a two-stage receive operation. 
Transaction support No Yes, by using TransactionScope Receive Mode Peek & Lease Peek & Lock Receive & Delete Lease/Lock duration Between 30 seconds and 7 days Between 60 seconds and 5 minutes Lease/Lock granularity Message level Queue level Batched Receive Yes, by using GetMessages(count) Yes, by using the prefetch property or the use of transactions Scheduled Delivery Yes Yes Automatic dead lettering No Yes In-place update Yes No Duplicate detection No Yes WCF integration No Yes, through WCF bindings WF integration Not standard; needs a customized activity Yes, out-of-the-box activities Message Size Maximum 64 KB Maximum 256 KB Maximum queue size 100 TB, the limits of a storage account 1, 2, 3, 4, or 5 GB; configurable Message TTL Maximum 7 days Unlimited Number of queues Unlimited 10,000 per service namespace Mgmt protocol REST over HTTP(S) REST over HTTP(S) Runtime protocol REST over HTTP(S) REST over HTTP(S) Queue naming rules Maximum of 63 characters Maximum of 260 characters Queue length function Yes, value is approximate Yes, exact value Throughput Maximum of 2,000 messages/second Maximum of 2,000 messages/second Authentication Symmetric key ACS claims Role-based access control No Yes through ACS roles Identity provider federation No Yes Costs $0.01 per 10,000 transactions $ 0.01 per 10,000 transactions Billable operations Every call that touches "storage"' Only Send and Receive operations Storage costs $0.14 per GB per month None ACS transaction costs None, since ACS is not supported $1.99 per 100,000 token requests Background information There are some additional characteristics of Service Bus queues that need your attention: In order to guarantee the FIFO mechanism, you need to use messaging sessions. Using Receive & Delete in Service Bus queues reduces transaction costs, since it is counted as one. The maximum size of a Base64-encoded message on the Window Azure queue is 48 KB and for standard encoding it is 64 KB. Sending messages to a Service Bus queue that has reached its limit will throw an exception that needs to be caught. When the throughput has reached its limit, the HTTP 503 error response is returned from the Windows Azure queue service. Implement retrying logic to tackle this issue. Throttled requests (thus being rejected) are not billable. ACS transactions are based on instances of the message factory class. The received token will expire after 20 minutes, meaning that you will only need three tokens per hour of execution. Topics and subscriptions Topics and subscriptions can be useful in a scenario where (instead of a single consumer, in the case of queues) multiple consumers are part of the pattern. Imagine in our scenario where users want to be subscribed to topics posted by friends. In such a scenario, a subscription is created on a topic and the worker role processes it; for example, mobile clients can be push notified by the worker role. Sending messages to a topic works in a similar way as sending messages to a Service Bus queue. Preparing the project In the Windows Azure portal, go to the Service Bus, Access Control & Caching tab. Select Topics and create a new topic, as shown in the following screenshot: Next, click on OK and a new topic is created for you. The next thing you need to do is to create a subscription on this topic. 
To do this, select New Subscription and create a new subscription, as shown in the following screenshot: Using filters Topics and subscriptions, by default, it is a push/subscribe mechanism where messages are made available to registered subscriptions. To actively influence the subscription (and subscribe only to those messages that are of your interest), you can create subscription filters. SqlFilter can be passed as a parameter to the CreateSubscription method of the NamespaceManager class. SqlFilter operates on the properties of the messages so we need to extend the method. In our scenario, we are only interested in messages that are concerning a certain subject. The way to achieve this is shown in the following code snippet: BrokeredMessage message = new BrokeredMessage(new Geotopic { id = id, subject = subject, text = text, PostToFacebook = PostToFacebook, accessToken = accessToken, mediaFile = fileContent }); //used for topics & subscriptions message.Properties["subject"] = subject; The preceding piece of code extends BrokeredMessage with a subject property that can be used in SqlFilter. A filter can only be applied in code on the subscription and not in the Windows Azure portal. This is fine, because in Geotopia, users must be able to subscribe to interesting topics, and for every topic that does not exist yet, a new subscription is made and processed by the worker role, the processor. The worker role contains the following code snippet in one of its threads: Uri uri = ServiceBusEnvironment.CreateServiceUri ("sb", "<yournamespace>", string.Empty); string name = "owner"; string key = "<yourkey>"; //get some credentials TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(name, key); // Create namespace client NamespaceManager namespaceClient = new NamespaceManager(ServiceBusEnvironment.CreateServiceUri ("sb", "geotopiaservicebus", string.Empty), tokenProvider); MessagingFactory factory = MessagingFactory.Create(uri, tokenProvider); BrokeredMessage message = new BrokeredMessage(); message.Properties["subject"] = "interestingsubject"; MessageSender sender = factory.CreateMessageSender("dataqueue"); sender.Send(message); //message is send to topic SubscriptionDescription subDesc = namespaceClient.CreateSubscription("geotopiatopic", "SubscriptionOnMe", new SqlFilter("subject='interestingsubject'")); //the processing loop while(true) { MessageReceiver receiver = factory.CreateMessageReceiver ("geotopiatopic/subscriptions/SubscriptionOnMe"); //it now only gets messages containing the property 'subject' //with the value 'interestingsubject' BrokeredMessage receivedMessage = receiver.Receive(); try { ProcessMessage(receivedMessage); receivedMessage.Complete(); } catch (Exception e) { receivedMessage.Abandon(); } } Windows Azure Caching Windows Azure offers caching capabilities out of the box. Caching is fast, because it is built as an in-memory (fast), distributed (running on different servers) technology. Windows Azure Caching offers two types of cache: Caching deployed on a role Shared caching When you decide to host caching on your Windows Azure roles, you need to pick from two deployment alternatives. The first is dedicated caching, where a worker role is fully dedicated to run as a caching store and its memory is used for caching. The second option is to create a co-located topology, meaning that a certain percentage of available memory in your roles is assigned and reserved to be used for in-memory caching purposes. 
Keep in mind that the second option is the most cost-effective one, as you don't have a role running just for its memory. Shared caching is a central caching repository managed by the platform and accessible to your hosted services. You need to register the shared caching mechanism in the Service Bus, Access Control & Caching section of the portal, and configure a namespace and the size of the cache (remember, there is money involved). This caching facility is shared and runs inside a multitenant environment.
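Coming back to the queue receive pattern shown earlier, the Peek & Lock mode in the comparison table is easiest to understand as a two-stage settle cycle: receive (which locks and hides the message), then either Complete or Abandon. The following self-contained Python sketch mimics that cycle with an in-memory stand-in for the queue; it is purely illustrative, not the Service Bus client library, and every class and method name in it is invented for the example.

from collections import deque

class PeekLockQueue:
    # In-memory stand-in that mimics the peek-lock semantics described above.
    def __init__(self, messages):
        self._queue = deque(messages)
        self._locked = {}

    def receive(self):
        # Stage one: hide the message from other consumers until it is settled.
        if not self._queue:
            return None
        msg = self._queue.popleft()
        self._locked[id(msg)] = msg
        return msg

    def complete(self, msg):
        # Stage two: settle the message so it is removed for good.
        self._locked.pop(id(msg), None)

    def abandon(self, msg):
        # Processing failed: put the message back on the queue.
        self._queue.append(self._locked.pop(id(msg)))

queue = PeekLockQueue([{"subject": "interestingsubject", "text": "hello"}])
msg = queue.receive()
try:
    print("processing", msg["subject"])
    queue.complete(msg)   # analogous to receivedMessage.Complete() in the C# snippet
except Exception:
    queue.abandon(msg)    # analogous to receivedMessage.Abandon()

The point of the stand-in is only to show the shape of the two-stage receive: nothing is removed until Complete is called, and a failed handler can hand the message back with Abandon rather than losing it.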

Implementing OpenStack Networking and Security

Packt
05 Feb 2016
8 min read
In this article written by Omar Khedher, author of Mastering OpenStack, we will explore the various aspects of networking and security in OpenStack. A major part of the article is focused on presenting the different security layouts by using Neutron. In this article, we will discuss the following topics: Understanding how Neutron facilitates the network management in OpenStack Using security groups to enforce a security layer for instances The story of an API By analogy, the OpenStack compute service provides an API that provides a virtual server abstraction to imitate the compute resources. The network service and compute service perform in the same way, where we come to a new generation of virtualization in network resources such as network, subnet, and ports, and can be continued in the following schema: Network: As an abstraction for the layer 2 network segmentation that is similar to the VLANs Subnet: This is the associated abstraction layer for a block of IPv4/IPv6 addressing per network Port: This is the associated abstraction layer that is used to attach a virtual NIC of an instance to a network Router: This is an abstraction for layer 3 that is used to perform routing between the networks Floating IP: This is used to perform static public IP mapping from external to internal networks Security groups Imagine a scenario where you have to apply certain traffic management rules for a dozen compute node instances. Therefore, assigning a certain set of rules for a specific group of nodes will be much easier instead of going through each node at a time. Security groups enclose all the aspects of the rules that are applied to the ingoing and outgoing traffic to instances, which includes the following: The source and receiver, which will allow or deny traffic to instances from either the internal OpenStack IP addresses or from the rest of the world Protocols to which the rule will apply, such as TCP, UDP, and ICMP Egress/ingress traffic management to a neutron port In this way, OpenStack offers an additional security layer to the firewall rules that are available on the compute instance. The purpose is to manage traffic to several compute instances from one security group. You should bear in mind that the networking security groups are more granular-traffic-filtering-aware than the compute firewall rules since they are applied on the basis of the port instead of the instance. Eventually, the creation of the network security rules can be done in different ways. For more information on how iptables works on Linux, https://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-iptables.html is a very useful reference. Manage the security groups using Horizon From Horizon, in the Access and Security section, you can add a security group and name it, for example PacktPub_SG. Then, a simple click on Edit Rules will do the trick. The following example illustrates how this network security function can help you understand how traffic—both in ingress/egress—can be controlled: The previous security group contains four rules. The first and the second lines are rules to open all the outgoing traffic for IPv4 and IPv6 respectively. The third line allows the inbound traffic by opening the ICMP port, while the last one opens port 22 for SSH for the inbound interface. You might notice the presence of the CIDR fields, which is essential to know. Based on CIDR, you allow or restrict traffic over the specified port. 
For example, using CIDR of 0.0.0.0/0 will allow traffic for all the IP addresses over the port that was mentioned in your rule. For example, a CIDR with 32.32.15.5/32 will restrict traffic only to a single host with an IP of 32.32.15.5. If you would like to specify a range of IP in the same subnet, you can use the CIDR notation, 32.32.15.1/24, which will restrict traffic to the IP addresses starting from 32.32.15.*; the other IP addresses will not stick to the latter rule. The naming of the security group must be done with a unique name per project. Manage the security groups using the Neutron CLI The security groups also can be managed by using the Python Neutron command-line interface. Wherever you run the Neutron daemon, you can list, for example, all the present security groups from the command line in the following way: # neutron security-group-list The preceding command yields the following output: To demonstrate how the PacktPub_SG security group rules that were illustrated previously are implemented on the host, we can add a new rule that allows the ingress connections to ping (ICMP) and establish a secure shell connection (SSH) in the following way: # neutron security-group-rule-create --protocol icmp –-direction ingress PacktPub-SG The preceding command produces the following result: The following command line will add a new rule that allows ingress connections to establish a secure shell connection (SSH): # neutron security-group-rule-create --protocol tcp –-port-range-max 22 –-direction ingress PacktPub-SG The preceding command gives the following output: By default, if none of the security groups have been created, the port of instances will be associated within the default security group for any project where all the outbound traffic will be allowed and blocked in the inbound side. You may conclude from the output of the previous command line that it lists the rules that are associated with the current project ID and not by the security groups. Managing the security groups using the Nova CLI The nova command line also does the same trick if you intend to perform the basic security group's control, as follows: $ nova secgroup-list-rules default Since we are setting Neutron as our network service controller, we will proceed by using the networking security groups, which reveals additional traffic control features. If you are still using the compute API to manage the security groups, you can always refer to the nova.conf file for each compute node to set security_group_api = neutron. To associate the security groups to certain running instances, it might possible to use the nova client in the following way: # nova add-secgroup test-vm1 PacktPub_SG The following code illustrates the new association of the packtPub_SG security group with the test-vm1 instance: # nova show test-vm1   The following is the result of the preceding command: One of the best practices to troubleshoot connection issues for the running instances is to start checking the iptables running in the compute node. Eventually, any rule that was added to a security group will be applied to the iptables chains in the compute node. We can check the updated iptables chains in the compute host after applying the security group rules by using the following command: # iptables-save The preceding command yields the following output: The highlighted rules describe the direction of the packet and the rule that is matched. 
For example, the inbound traffic to the f7fabcce-f interface will be processed by the neutron-openvswi-if7fabcce-f chain. It is important to know how iptables rules work in Linux. Updating the security groups will also change the iptables chains. Remember that chains are sets of rules that determine how packets should be filtered. Network packets traverse the rules in a chain, and it is possible to jump to another chain. You can find different chains per compute host, depending on the network filter setup. If you have already created your own security groups, a series of iptables rules and chains are implemented on every compute node that hosts an instance associated with the corresponding security group. The following example demonstrates a sample update to the current iptables of a compute node that runs instances within the 10.10.10.0/24 subnet and assigns 10.10.10.2 as the default gateway for those instances: The last rule shown in the preceding screenshot dictates that traffic leaving the f7fabcce-f interface must be sourced from 10.10.10.2/32 and the FA:16:3E:7E:79:64 MAC address. This rule is useful when you wish to prevent an instance from performing MAC/IP address spoofing. It is possible to test ping and SSH to the instance via the router namespace in the following way:
# ip netns exec router qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ping 10.10.10.2
The preceding command provides the following output: The testing of SSH to the instance can be done by using the same router namespace, as follows:
# ip netns exec router qrouter-5abdeaf9-fbb6-4a3f-bed2-7f93e91bb904 ssh cirros@10.10.10.2
The preceding command produces the following output:
Web servers DMZ example
In the current example, we will show a simple setup of a security group that might be applied to a pool of web servers running on the Compute01, Compute02, and Compute03 nodes. We will allow inbound traffic from the Internet to access WebServer01, AppServer01, and DNSServer01 over HTTP and HTTPS. This is depicted in the following diagram: Let's see how we can restrict the ingress/egress traffic via the Neutron API:
$ neutron security-group-create DMZ_Zone --description "allow web traffic from the Internet"
$ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 DMZ_Zone --remote-ip-prefix 0.0.0.0/0
$ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 443 --port_range_max 443 DMZ_Zone --remote-ip-prefix 0.0.0.0/0
$ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 3306 --port_range_max 53 DMZ_Zone --remote-ip-prefix 0.0.0.0/0
From Horizon, we can see the following security group rules added: To conclude, we have looked at presenting different security layouts by using Neutron. At this point, you should be comfortable with security groups and their use cases. Further your OpenStack knowledge by designing, deploying, and managing a scalable OpenStack infrastructure with Mastering OpenStack
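Because the CIDR prefix on each rule decides which sources it matches, it is worth sanity-checking your prefixes before creating security group rules. The short Python sketch below uses only the standard library ipaddress module to test the example prefixes discussed in this article against a few sample source addresses; it does not talk to Neutron at all and only illustrates how CIDR matching behaves.

import ipaddress

# Example prefixes from this article: match everything, a single host, and a /24 range.
rules = ["0.0.0.0/0", "32.32.15.5/32", "32.32.15.1/24"]
sources = ["32.32.15.5", "32.32.15.77", "10.10.10.2"]

for cidr in rules:
    # strict=False accepts prefixes written with host bits set, such as 32.32.15.1/24.
    network = ipaddress.ip_network(cidr, strict=False)
    for src in sources:
        matched = ipaddress.ip_address(src) in network
        print(f"{src:>12} in {cidr:<14} -> {'matches' if matched else 'no match'}")

Running the sketch shows that 0.0.0.0/0 matches every source, the /32 matches only 32.32.15.5, and the /24 matches any 32.32.15.* address, which is exactly the behaviour described for the CIDR field above.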

Building Mobile Apps

Packt
21 Apr 2014
6 min read
(For more resources related to this topic, see here.) As mobile apps get closer to becoming the de-facto channel to do business on the move, more and more software vendors are providing easy to use mobile app development platforms for developers to build powerful HTML5, CSS, and JavaScript based apps. Most of these mobile app development platforms provide the ability to build native, web, and hybrid apps. There are several very feature rich and popular mobile app development toolkits available in the market today. Some of them worth mentioning are: Appery (http://appery.io) AppBuilder (http://www.telerik.com/appbuilder) Phonegap (http://phonegap.com/) Appmachine (http://www.appmachine.com/) AppMakr (http://www.appmakr.com/) (AppMakr is currently not starting new apps on their existing legacy platform. Any customers with existing apps still have full access to the editor and their apps.) Codiqa (https://codiqa.com) Conduit (http://mobile.conduit.com/) Apache Cordova (http://cordova.apache.org/) And there are more. The list is only a partial list of the amazing tools currently in the market for building and deploying mobile apps quickly. The Heroku platform integrates with the Appery.io (http://appery.io) mobile app development platform to provide a seamless app development experience. With the Appery.io mobile app development platform, the process of developing a mobile app is very straightforward. You build the user interface (UI) of your app using drag-and-drop from an available palette. The palette contains a rich set of user interface controls. Create the navigation flow between the different screens of the app, and link the actions to be taken when certain events such as clicking a button. Voila! You are done. You save the app and test it there using the Test menu option. Once you are done with testing the app, you can host the app using Appery's own hosting service or the Heroku hosting service. Mobile app development was never this easy. Introducing Appery.io Appery.io (http://www.appery.io) is a cloud-based mobile app development platform. With Appery.io, you can build powerful mobile apps leveraging the easy to use drag-and-drop tools combined with the ability to use client side JavaScript to provide custom functionality. Appery.io enables you to create real world mobile apps using built-in support for backend data stores, push notifications, server code besides plugins to consume third-party REST APIs and help you integrate with a plethora of external services. Appery.io is an innovative and intuitive way to develop mobile apps for any device, be it iOS, Windows or Android. Appery.io takes enterprise level data integration to the next level by exposing your enterprise data to mobile apps in a secure and straightforward way. It uses Exadel's (Appery.io's parent company) RESTExpress API to enable sharing your enterprise data with mobile apps through a REST interface. Appery.io's mobile UI development framework provides a rich toolkit to design the UI using many types of visual components required for the mobile apps including google maps, Vimeo and Youtube integration. You can build really powerful mobile apps and deploy them effortlessly using drag and drop functionality inside the Appery.io app builder. What is of particular interest to Heroku developers is Appery.io's integration with mobile backend services with option to deploy on the Heroku platform with the click of a button. 
This is a powerful feature where in you do not need to install any software on your local machine and can build and deploy real world mobile apps on cloud based platforms such as Appery.io and Heroku r. In this section, we create a simple mobile app and deploy it on Heroku. In doing so, we will also learn: How to create a mobile UI form How to configure your backend services (REST or database) How to integrate your UI with backend services How to deploy the app to Heroku How to test your mobile app We will also review the salient features of the Appery.io mobile app development platform and focus on the ease of development of these apps and how one could easily leverage web services to deploy apps and consume data from any system. Getting Appery.io The cool thing about Appery.io is that it is a cloud-based mobile app development toolkit and can be accessed from any popular web browser. To get started, create an account at http://appery.io and you are all set. You can sign up for a basic starter version which provides the ability to develop 1 app per account and go all the way up to the paid Premium and Enterprise grade subscriptions. Introducing the Appery.io app builder The Appery.io app builder is a cloud based mobile application development kit that can be used to build mobile apps for any platform. The Appery.io app builder consists of intuitive tooling and a rich controls palette to help developers drag and drop controls on to the device and design robust mobile apps. The Appery.io app builder has the following sections: Device layout section: This section contains the mock layout of the device onto which the developer can drag-and-drop visual controls to create a user interface. Palette: Contains a rich list of visual controls like buttons, text boxes, Google Map controls and more that developers can use to build the user experience. Project explorer: This section consists of many things including project files, application level settings/configuration, available themes for the device, custom components, available CSS and JavaScript code, templates, pop-ups and one of the key elements— the available backend services. Key menu options: Save and Test for the app being designed. Page properties: This section consists of the design level properties for the page being designed. Modifying these properties changes the user interface labels or the underlying properties of the page elements. Events: This is another very important section of the app builder that contains the event to action association for the various elements of the page. For example, it can contain the action to be taken when a click event happens on a button on this page. The following Appery.io app builder screenshot highlights the various sections of the rich toolkit available for mobile app developers to build apps quickly: Creating your first Appery.io mobile app Building a mobile app is quite straightforward using Appery.io's feature rich app development platform. To get started, create an Appery.io account at http://appery.io and login using valid credentials: Click on the Create new app link on the left section of your Appery.io account page: Enter the new app name for example, herokumobile app and click on Create: Enter the name of the first/launch page of your mobile app and click on Create page: This creates the new Appery.io app and points the user to the Appery.io app builder to design the start page of the new mobile app.

Disaster Recovery for Hyper-V

Packt
29 Jan 2013
9 min read
(For more resources related to this topic, see here.) Hyper-V and Windows Server 2012 come with tools and solutions to make sure that your virtual machines will be up, running, and highly available. Components such as Failover Cluster can ensure that your servers are accessible, even in case of failures. However, disasters can occur and bring all the servers and services offline. Natural disasters, viruses, data corruption, human errors, and many other factors can make your entire system unavailable. People think that High Available (HA) is a solution for Disaster Recovery (DR) and that they can use it to replace DR. Actually HA is a component of a DR plan, which consists of process, policies, procedures, backup and recovery plan, documentation, tests, Service Level Agreements (SLA), best practices, and so on. The objective of a DR is simply to have business continuity in case of any disaster. In a Hyper-V environment, we have options to utilize the core components, such as Hyper-V Replica, for a DR plan, which replicates your virtual machines to another host or cluster and makes them available if the first host is offline, or even backs up and restores to bring VMs back, in case you lose everything. This module will walk you through the most important processes for setting up disaster recovery for your virtual machines running on Hyper-V. Backing up Hyper-V and virtual machines using Windows Server Backup Previous versions of Hyper-V had complications and incompatibilities with the built-in backup tool, forcing the administrators to acquire other solutions for backing up and restoring. Windows Server 2012 comes with a tool known as Windows Server Backup (WSB), which has full Hyper-V integration, allowing you to back up and restore your server, applications, Hyper-V, and virtual machines. WSB is easy and provides for a low cost scenario for small and medium companies. This recipe will guide you through the steps to back up your virtual machines using the Windows Server Backup tool. Getting ready Windows Server Backup does not support tapes. Make sure that you have a disk, external storage, network share, and free space to back up your virtual machines before you start. How to do it... The following steps will show you how to install the Windows Server Backup feature and how to schedule a task to back up your Hyper-V settings and virtual machines: To install the Windows Server Backup feature, open Server Manager from the taskbar. In the Server Manager Dashboard, click on Manage and select Add Roles and Features. On the Before you begin page, click on Next four times. Under the Add Roles and Features Wizard, select Windows Server Backup from the Features section, as shown in the following screenshot: Click on Next and then click on Install. Wait for the installation to be completed. After the installation, open the Start menu and type wbadmin.msc to open the Windows Server Backup tool. To change the backup performance options, click on Configure Performance from the pane on the right-hand side in the Windows Server Backup console. In the Optimize Backup Performance window, we have three options to select from—Normal backup performance, Faster backup performance, and Custom, as shown in the following screenshot: In the Windows Server Backup console, in the pane on the right-hand side, select the backup that you want to perform. The two available options are Backup Schedule to schedule an automatic backup and Backup Once for a single backup. 
The next steps will show how to schedule an automatic backup. In the Backup Schedule Wizard, in the Getting Started page, click on Next. In the Select Backup Configuration page, select Full Server to back up all the server data or click on Custom to select specific items to back up. If you want to backup only Hyper-V and virtual machines, click on Custom and then Next. In Select Items for Backup, click on Add Items. In the Select Items window, select Hyper-V to back up all the virtual machines and the host component, as shown in the following screenshot. You can also expand Hyper-V and select the virtual machines that you want to back up. When finished, click on OK. Back to the Select Items for Backup, click on Advanced Settings to change Exclusions and VSS Settings. In the Advanced Settings window, in the Exclusions tab, click on Add Exclusion to add any necessary exclusions. Click on the VSS Settings tab to select either VSS full Backup or VSS copy Backup as shown in the following screenshot. Click on OK. In the Select Items for Backup window, confirm the items that will be backed up and click on Next. In the Specify Backup Time page, select Once a day and the time for a daily backup or select More than once a day and the time and click on Next. In the Specify Destination Type page, select the option Back up to a hard disk that is dedicated for backups (recommended), back up to a volume, or back up to a shared network folder, as shown in the following screenshot and click on Next. If you select the first option, the disk you choose will be formatted and dedicated to storing the backup data only. In Select Destination Disk, click on Show All Available Disks to list the disks, select the one you want to use to store your backup, and click on OK. Click on Next twice. If you have selected the Back up to a hard disk that is dedicated for backups (recommended) option, you will see a warning message saying that the disk will be formatted. Click on Yes to confirm. In the Confirmation window, double-check the options you selected and click on Finish, as shown in the following screenshot: After that, the schedule will be created. Wait until the scheduled time to begin and check whether the backup has been finished successfully. How it works... Many Windows administrators used to miss the NTBackup tool from the old Windows Server 2003 times because of its capabilities and features. The Windows Server Backup tool, introduced in Windows Server 2008, has many limitations such as no tape support, no advanced schedule options, fewer backup options, and so on. When we talk about Hyper-V in this regards, the problem is even worse. Windows Server 2008 has minimal support and features for it. In Windows Server 2012, the same tool is available with some limitations; however, it provides at least the core components to back up, schedule, and restore Hyper-V and your virtual machines. By default, WSB is not installed. The feature installation is made by Server Manager. After its installation, the tool can be accessed via console or command lines. Before you start the backup of your servers, it is good to configure the backup performance options you want to use. By default, all the backups are created as normal. It creates a full backup of all the selected data. This is an interesting option when low amounts of data are backed up. You can also select the Faster backup performance option. This backs up the changes between the last and the current backup, increasing the backup time and decreasing the stored data. 
This is a good option to save storage space and backup time for large amounts of data. A backup schedule can be created to automate your backup operations. In the Backup Schedule Wizard, you can back up your entire server or a custom selection of volumes, applications, or files. For backing up Hyper-V and its virtual machines, the best option is the customized backup, so that you don't have to back up the whole physical server. When Hyper-V is present on the host, the system shows Hyper-V, and you will be able to select all the virtual machines and the host component configuration to be backed up. During the wizard, you can also change the advanced options such as exclusions and Volume Shadow Copy Services (VSS) settings. WSB has two VSS backup options—VSS full backup and VSS copy backup. When you opt for VSS full backup, everything is backed up and after that, the application may truncate log files. If you are using other backup solutions that integrate with WSB, these logs are essential to be used in future backups such as incremental ones. To preserve the log files you can use VSS copy backup so that other applications will not have problems with the incremental backups. After selecting the items for backup, you have to select the backup time. This is another limitation from the previous version—only two schedule options, namely Once a day or More than once a day. If you prefer to create different backup schedule such as weekly backups, you can use the WSB commandlets in PowerShell. Moving forward, in the backup destination type, you can select between a dedicated hard disk, a volume, or a network folder to save your backups in. When confirming all the items, the backup schedule will be ready to back up your system. You can also use the option Backup once to create a single backup of your system. There's more... To check whether previous backups were successful or not, you can use the details option in the WSB console. These details can be used as logs to get more information about the last (previous), next, and all the backups. To access these logs, open Windows Server Backup, under Status select View details. The following screenshot shows an example of the Last backup. To see which files where backed up, click on the View list of all backed up files link. Checking the Windows Server Backup commandlets Some options such as advanced schedule, policies, jobs, and other configurations can only be created through commandlets on PowerShell. To see all the available Windows Server Backup commandlets, type the following command: Get-Command –Module WindowsServerBackup See also The Restoring Hyper-V and virtual machines using Windows Server Backup recipe in this article

How to Run Hadoop on Google Cloud – Part 2

Robi Sen
15 Dec 2014
7 min read
Setting up and working with Hadoop can sometimes be difficult. Furthermore, most people with limited resources develop on Hadoop instances on Virtual Machines locally or on minimal hardware. The problem with this is that Hadoop is really designed to run on many machines in order to realize its full capabilities. In this two part series of posts (read part 1 here), we will show you how you can quickly get started with Hadoop in the cloud with Google services. In the last part in this series, we installed our Google developer account. Now it is time to install the Google Cloud SDK. Installing the Google Cloud SDK To work with the Google Cloud SDK, we need a Cygwin 32-bit version. Get it here, even if you have a 64-bit processor. The reason for this is that the Python 64-bit version for Windows has issues that make it incompatible with many common Python tools. So you should stick with the 32-bit version. Next, when you install Cygwin, you need to make sure you select Python (note that if you do not install the Cygwin version of Python, your installation will fail), openssh, and curl. You can do this when you get to the package screen by typing openssh or curl in the search bar at top and selecting the package under "net," then by selecting the check box under "Bin" for openssh. Do the same for curl. You should see something like what is shown in Figures 1 and 2 respectively. Figure 1: Adding openssh   Figure 2: Adding curl to Cygwin Now go ahead and start Cygwin by going to Start -> All Programs -> Cygwin -> Cygwin Terminal. Now use curl to install the Google Cloud SDK by typing the following command “$ curl https://sdk.cloud.google.com | bash,” which will install the Google Cloud SDK from the Internet. Follow the prompts to complete the setup. When prompted, if you would like to update your system path, select "y" and when complete, restart Cygwin. After you restart Cygwin, you need to authenticate with the Google Cloud SDK. To do this type "gcloud auth login –no-launch-browser" like in Figure 3.   Figure 3: Authenticating with Google Cloud SDK tools Cloud SDK will then give you a URL that you should copy and paste in your browser. You will then be asked to log in with your Google account and accept the permissions requested by the SDK as in Figure 4.   Figure 4: Google Cloud authorization via OAuth Google will provide you with a verification code that you can cut and paste into the command line and if everything works, you should be logged in. Next, set your project ID for this session by using the command "$ gcloud config set project YOURPROJECTID" as in Figure 5.   Figure 5: Setting your project ID Now you need to download the set of scripts that will help you set up Hadoop in Google Cloud Storage.[1] Make sure you do not close this command-line window because we are going to use it again. Download the Big Data utilities scripts to set up Hadoop in the Cloud here. Once you have downloaded the zip, unpack it and place it in the directory wherever you want. Now, in the command line, type "gsutil mb -p YOURPROJECTID gs://SOMEBUCKETNAME." If everything goes well, you should see something like Figure 6. Figure 6: Creating your Google Cloud Storage bucker YOURPROJECTID is the project ID you created or were assigned earlier and SOMEBUCKETNAME is whatever you want your bucket to be called. Unfortunately, bucket names must be unique. Read more here, so using something like your company domain name and some other unique identifier might be a good idea. 
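Since bucket names are global to all of Google Cloud Storage, a tiny helper that combines your domain with a random suffix saves a round of trial and error. The Python sketch below is one way to generate such a name; the character and length rules in the comments are a summary of the commonly documented restrictions (lowercase, dot-free names limited to 63 characters), so double-check them against the current documentation before relying on this.

import re
import uuid

def make_bucket_name(domain: str, prefix: str = "hadoop") -> str:
    # Bucket names must be lowercase; replace anything else with a dash.
    base = re.sub(r"[^a-z0-9-]+", "-", domain.lower()).strip("-")
    name = f"{base}-{prefix}-{uuid.uuid4().hex[:8]}"
    # Dot-free names are limited to 63 characters.
    return name[:63]

print(make_bucket_name("example.com"))
# Prints something like example-com-hadoop-3f9c2a1b, which you would then pass to:
#   gsutil mb -p YOURPROJECTID gs://example-com-hadoop-3f9c2a1b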
If you do not pick a unique name, you will get an error. Now go to the directory where you stored your Big Data Utility Scripts and open bdutil_env.sh in a text editor as in Figure 7.   Figure 7: Editing the bdutil_env.sh file Now add your bucket name for the CONFIGBUCKET  value in the file and your project ID for the PROJECT value like in Figure 8. Now save the file. Figure 8: Editing the bdutil_env.sh file Once you have the bdutil_env.sh file, you need to test that you can reach your compute instances via gcutil and ssh. Let’s walk through that now to set it up so you can do it in the future. In Cygwin, create a test instance to play with and set up gcutil by typing the command "gcutil addinstance mytest," then hit Enter. You will be asked to select a time zone (I selected 6), a number of processors, and the like. Go ahead and select the items you want since after we create this instance and connect to it, we will delete it. After you walk through the setup steps, Google will create your instance. During the creation, you will be asked for a passphrase. Make sure you use a passphrase you can remember. Now, in the command line, type "gcutil ssh mytest." This will now try to connect to your "mytest" instance via SSH, and if it’s the first time you have done this, you will be asked to type in a passphrase. Do not type a passphrase; just leave it blank and select Enter. This will then create a public and private ssh key. If everything works, you should now connect to the instance and you will know gcutil ssh is working correct. Go ahead and type "exit" and then "gcutil deleteinstance mytest" and select "y" for all questions. This will trigger the Google Cloud to destroy your test instance. Now in Cygwin, navigate to where you placed the dbutils download. If you are not familiar with Cygwin, you can navigate to any directory on the c drive by using the "cygdrive/c" and then set the Unix style path to your directory. So, for example, on my computer it would look like Figure 9. Figure 9: Navigating to the dbutils folder in Cygwin Now we can attempt a deployment of Haddop by typing "./bdutil deploy" like in Figure 10. Figure 10: Deploying Hadoop The system will now try to deploy your Hadoop instance to the Cloud. You might be prompted to create a staging directory as well while the script is running. Go ahead and type "y" to accept. You should now see a message saying "Deployment complete." It might take several minutes for your job to complete, so be patient. When it is finished, check to see whether your cluster is up by typing in "gcutil listinstances", where you will see something like what is shown in Figure 11. Figure 11: A list of Hadoop instances running From here, you need to test your deployment, which you do via the command "gcutil ssh –project=YOURPROJECTID hs-ghfs-nn < Hadoop-validate-setup.sh" like in Figure 12. Figure 12: Validating Hadoop deployment If the script runs successfully, you should see an output like "teragen, terasort, teravalidate passed." From there, go ahead and delete the project by typing "./bdutil delete." This will delete the deployed virtual machines (VMs) and associated artifacts. When it’s done, you should see message "Done deleting VMs!" Summary In this two part blog post series, you learned how to use the Google Cloud SDK to set up Hadoop via Windows and Cygwin. Now you have Cygwin set up and configured to build, connect to the Google Cloud, set up instances, and deploy Hadoop. If you want even more Hadoop content, visit our Hadoop page. 
Featuring our latest releases and our top free Hadoop content, it's the centre of Packt's Big Data coverage. About the author Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

Using OpenStack Swift

Packt
13 May 2014
4 min read
(For more resources related to this topic, see here.) Installing the clients This section talks about installing the cURL command line tool. cURL – It is a command line tool which can be used to transfer data using various protocols. We install cURL using the following command $ apt-get install curl OpenStack Swift Client CLI – This tool is installed by the following command. $ apt-get install python-swiftclient REST API Client – To access OpenStack Swift services via REST API, we can use third party tools like Fiddler web debugger which supports REST architecture. Creating Token by using Authentication The first step in order to access containers or objects is to authenticate the user by sending a request to the authentication service and get a valid token that can then be used in subsequent commands to perform various operations as follows: curl -X POST -i https://auth.lts2.evault.com/v2.0/Tokens -H 'Content-type: application/json' -d '{"auth":{"passwordCredentials":{"username":"user","password":"password"},"tenantName":"admin"}}' The token that is generated is given below. It has been truncated for better readability. token = MIIGIwYJKoZIhvcNAQcCoIIGFDCCBhACAQExCTAHBgUrDgMCGjC CBHkGCSqGSIb3DQEHAaCCBGoEggRme…yJhY2Nlc3MiOiB7InRva2VuIjoge yJpc3N1ZWRfYXQiOiAiMjAxMy0xMS0yNlQwNjoxODo0Mi4zNTA0NTciLCU+ KNYN20G7KJO05bXbbpSAWw+5Vfl8zl6JqAKKWENTrlKBvsFzO-peLBwcKZX TpfJkJxqK7Vpzc-NIygSwPWjODs--0WTes+CyoRD EVault LTS2 authentication The EVault LTS2 OpenStack Swift cluster provides its own private authentication service which returns back the token. This generated token will be passed as the token parameter in subsequent commands. Displaying meta-data information for Account, Container, Object This section describes how we can obtain information about the account, container or object. Using OpenStack Swift Client CLI The OpenStack Swift client CLI stat command is used to get information about the account, container or object. The name of the container should be provided after the stat command for getting container information. The name of the container and object should be provided after the stat command for getting object information. Make the following request to display the account status. # swift --os-auth-token=token --os-storage-url=https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b stat Where token is the generated token as described in the previous section and 26cef4782cca4e5aabbb9497b8c1ee1b is the account name. The response shows the information about the account. Account: 26cef4782cca4e5aabbb9497b8c1ee1b Containers: 2 Objects: 6 Bytes: 17 Accept-Ranges: bytes Server: nginx/1.4.1 Using cURL The following shows how to obtain the same information using cURL. It shows that the account contains 2 containers and 1243 objects. Make the following request: curl -X HEAD -ihttps://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b -H 'X-Auth-Token: token' -H 'Content-type: application/json' The response is as follows: HTTP/1.1 204 No Content Server: nginx/1.4.1 Date: Wed, 04 Dec 2013 06:53:13 GMT Content-Type: text/html; charset=UTF-8 Content-Length: 0 X-Account-Bytes-Used: 3439364822 X-Account-Container-Count: 2 X-Account-Object-Count: 6 Using REST API The same information can be obtained using the following REST API method. 
Using the REST API

The same information can be obtained using the following REST API method. Make the following request:

Method: HEAD
URL: https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b
Header: X-Auth-Token: token
Data: No data

The response is as follows:

HTTP/1.1 204 No Content
Server: nginx/1.4.1
Date: Wed, 04 Dec 2013 06:47:17 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
X-Account-Bytes-Used: 3439364822
X-Account-Container-Count: 2
X-Account-Object-Count: 6

Listing containers

This section describes how to obtain information about the containers present in an account.

Using the OpenStack Swift Client CLI

Make the following request:

swift --os-auth-token=token --os-storage-url=https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b list

The response is as follows:

cities
countries

Using cURL

The following shows how to obtain the same information using cURL. It shows that the account contains 2 containers and 6 objects. Make the following request:

curl -X GET -i https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b -H 'X-Auth-Token: token'

The response is as follows:

HTTP/1.1 200 OK
X-Account-Container-Count: 2
X-Account-Object-Count: 6
cities
countries

Using the REST API

Make the following request:

Method: GET
URL: https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b
Header: X-Auth-Token: token
Data: No data

The response is as follows:

X-Account-Container-Count: 2
X-Account-Object-Count: 6
cities
countries
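The python-swiftclient package installed at the beginning of this article exposes the same operations from Python. The following minimal sketch is not part of the original recipe; it assumes you already hold a valid token (for example, from the earlier authentication sketch) and reuses the example storage URL for the account.

from swiftclient.client import Connection

# Placeholder values: the token would come from the authentication service,
# and the storage URL is the example account used throughout this article.
token = "MIIGIwYJKoZIhvcNAQcC..."  # truncated example token
storage_url = "https://storage.lts2.evault.com/v1/26cef4782cca4e5aabbb9497b8c1ee1b"

# preauthurl/preauthtoken let the client talk to the storage API directly,
# skipping the authentication step.
connection = Connection(preauthurl=storage_url, preauthtoken=token)

# get_account() returns the account headers plus a listing of its containers.
headers, containers = connection.get_account()
print("Container count:", headers.get("x-account-container-count"))
for container in containers:
    print(container["name"], container["count"], container["bytes"])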
Summary

This article has explained the various mechanisms that are available to access OpenStack Swift and how, by using these mechanisms, we are able to authenticate accounts and list containers.

Resources for Article: Further resources on this subject: Securing vCloud Using the vCloud Networking and Security App Firewall [Article] Introduction to Cloud Computing with Microsoft Azure [Article] Troubleshooting in OpenStack Cloud Computing [Article]

Custom Components in Visualforce

Packt
08 Oct 2013
14 min read
(For more resources related to this topic, see here.) Custom components allow custom Visualforce functionality to be encapsulated as discrete modules, which provides two main benefits: Functional decomposition, where a lengthy page is broken down into custom components to make it easier to develop and maintain Code re-use, where a custom component provides common functionality that can be re-used across a number of pages A custom component may have a controller, but unlike Visualforce pages, only custom controllers may be used. A custom component can also take attributes, which can influence the generated markup or set property values in the component's controller. Custom components do not have any associated security settings; a user with access to a Visualforce page has access to all custom components referenced by the page. Passing attributes to components Visualforce pages can pass parameters to components via attributes. A component declares the attributes that it is able to accept, including information about the type and whether the attribute is mandatory or optional. Attributes can be used directly in the component or assigned to properties in the component's controller. In this recipe we will create a Visualforce page that provides contact edit capability. The page utilizes a custom component that allows the name fields of the contact, Salutation, First Name , and Last Name, to be edited in a three-column page block section. The contact record is passed from the page to the component as an attribute, allowing the component to be re-used in any page that allows editing of contacts. How to do it… This recipe does not require any Apex controllers, so we can start with the custom component. Navigate to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter ContactNameEdit in the Label field. Accept the default ContactNameEdit that is automatically generated for the Name field. Paste the contents of the ContactNameEdit.component file from the code download into the Visualforce Markup area and click on the Save button. Once a custom component is saved, it is available in your organization's component library, which can be accessed from the development footer of any Visualforce page. For more information visit http://www.salesforce.com/us/developer/docs/pages/Content/pages_quick_start_component_library.htm. Next, create the Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter ContactEdit in the Label field. Accept the default Contact Edit that is automatically generated for the Name field. Paste the contents of the ContactEdit.page file from the code download into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the Contact Edit page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the ContactEdit page: https://<instance>/apex/ContactEdit. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The custom component that renders the input fields in the Name section defines a single, required attribute of type Contact. 
<apex:attribute name="Contact" type="Contact" description="The contact to edit" required="true" /> The description of the attribute must always be provided, as this is included in the component reference. The type of the attribute must be a primitive, sObject, one-dimensional list, map, or custom Apex class. The Contact attribute can then be used in merge syntax inside the component. <apex:inputField value="{!Contact.Salutation}"/> <apex:inputField value="{!Contact.FirstName}"/> <apex:inputField value="{!Contact.LastName}"/> The page passes the contact record being managed by the standard controller to the component via the Contact attribute. <c:ContactNameEdit contact="{!Contact}"/> See also The Updating attributes in component controllers recipe in this article shows how a custom component can update an attribute that is a property of the enclosing page controller. Updating attributes in component controllers Updating fields of sObjects passed as attributes to custom components is straightforward, and can be achieved through simple merge syntax statements. This is not so simple when the attribute is a primitive and will be updated by the component controller, as parameters are passed by value, and thus, any changes are made to a copy of the primitive. For example, passing the name field of a contact sObject, rather than the contact sObject itself, would mean that any changes made in the component would not be visible to the containing page. In this situation, the primitive must be encapsulated inside a containing class. The class instance attribute is still passed by value, so it cannot be updated to point to a different instance, but the properties of the instance can be updated. In this recipe, we will create a containing class that encapsulates a Date primitive and a Visualforce component that allows the user to enter the date via day/month/year picklists. A simple Visualforce page and controller will also be created to demonstrate how this component can be used to enter a contact's date of birth. Getting ready This recipe requires a custom Apex class to encapsulate the Date primitive. To do so, perform the following steps: First, create the class that encapsulates the Date primitive by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the DateContainer.cls Apex class from the code download into the Apex Class area. Click on the Save button. How to do it… First, create the custom component controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the DateEditController.cls Apex class from the code download into the Apex Class area. Click on the Save button. Next, create the custom component by navigating to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter DateEdit in the Label field. Accept the default DateEdit that is automatically generated for the Name field. Paste the contents of the DateEdit.component file from the code download into the Visualforce Markup area and click on the Save button. Next, create the Visualforce page controller extension by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the ContactDateEditExt.cls Apex class from the code download into the Apex Class area. Click on the Save button. 
Finally, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter ContactDateEdit in the Label field. Accept the default ContactDateEdit that is automatically generated for the Name field. Paste the contents of the ContactDateEdit.page file from the code download into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the ContactDateEdit.page file and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the ContactDateEdit page: https://<instance>/apex/ContactDateEdit?id=<contact_id>. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com, and <contact_id> is the ID of any contact in your Salesforce instance. The Visualforce page controller declares a DateContainer property that will be used to capture the contact's date of birth. public DateContainer dob {get; set;} private Contact cont; private ApexPages.StandardController stdCtrl {get; set;} public ContactDateEditExt(ApexPages.StandardController std) { stdCtrl=std; cont=(Contact) std.getRecord(); dob=new DateContainer(cont.BirthDate); } Note that as DateContainer is a class, it must be instantiated when the controller is constructed. The custom component that manages the Date of Birth section defines the following two attributes: A required attribute of type DateContainer, which is assigned to the dateContainer property of the controller The title of for the page block section that will house the picklists; as this is a reusable component, the page supplies an appropriate title Note that this component is not tightly coupled with a contact date of birth field; it may be used to manage a date field for any sObject. <apex:attribute type="DateContainer" name="dateContainerAtt" description="The date" assignTo="{!dateContainer}" required="true" /> <apex:attribute type="String" description="Page block section title" name="title" /> The component controller defines properties for each of the day, month, and year elements of the date. Each setter for these properties attempts to construct the date if all of the other elements are present. This is required as there is no guarantee of the order in which the setters will be called when the Save button is clicked and the postback takes place. public Integer year {get; set { year=value; updateContainer(); } } private void updateContainer() { if ( (null!=year) && (null!=month) && (null!=day) ) { Date theDate=Date.newInstance(year, month, day); dateContainer.value=theDate; } } When the contained date primitive is changed in the updateContainer method, this is reflected in the page controller property, which can then be used to update a field in the contact record. public PageReference save() { cont.BirthDate=dob.value; return stdCtrl.save(); } See also The Passing attributes to components recipe in this article shows how an sObject may be passed as an attribute to a custom component. Passing action methods to components A controller action method is usually invoked from the Visualforce page that it is providing the logic for. However, there are times when it is useful to be able to execute a page controller action method directly from a custom component contained within the page. 
One example is for styling reasons, in order to locate the command button that executes the action method inside the markup generated by the component. In this recipe we will create a custom component that provides contact edit functionality, including command buttons to save or cancel the edit, and a Visualforce page to contain the component and supply the action methods that are executed when the buttons are clicked. How to do it… This recipe does not require any Apex controllers, so we can start with the custom component. Navigate to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter ContactEdit in the Label field. Accept the default ContactEdit that is automatically generated for the Name field. Paste the contents of the ContactEdit.component file from the code download into the Visualforce Markup area and click on the Save button. Next, create the Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter ContactEditActions in the Label field. Accept the default ContactEditActions that is automatically generated for the Name field. Paste the contents of the ContactEditActions.page file from the code download into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the ContactEditActions page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the ContactEditActions page: https://<instance>/apex/ContactEditActions?id=<contact_id>. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com, and <contact_id> is the ID of any contact in your Salesforce instance. The Visualforce page simply includes the custom component, and passes the Save and Cancel methods from the standard controller as attributes. <apex:page standardController="Contact"> <apex:pageMessages /> <apex:form > <c:ContactEdit contact="{!contact}" saveAction="{!save}" cancelAction="{!cancel}" /> </apex:form> </apex:page> The ContactEdit custom component declares attributes for the action methods of type ApexPages.Action. <apex:attribute name="SaveAction" description="The save action method from the page controller" type="ApexPages.Action" required="true"/> <apex:attribute name="CancelAction" description="The cancel action method from the page controller" type="ApexPages.Action" required="true"/> These attributes can then be bound to the command buttons in the component in the same way as if they were supplied by the component's controller. <apex:commandButton value="Save" action="{!SaveAction}" /> <apex:commandButton value="Cancel" action="{!CancelAction}" immediate="true" /> There's more… While this example has used action methods from a standard controller, any action method can be passed to a component using this mechanism, including methods from a custom controller or controller extension. See also The Updating attributes in component controllers recipe in this article shows how a custom component can update an attribute that is a property of the enclosing page controller. Data-driven decimal places Attributes passed to custom components from Visualforce pages can be used wherever the merge syntax is legal. 
The <apex:outputText /> standard component can be used to format numeric and date values, but the formatting is limited to literal values rather than merge fields. In this scenario, an attribute indicating the number of decimal places to display for a numeric value cannot be used directly in the <apex:outputText /> component. In this recipe we will create a custom component that accepts attributes for a numeric value and the number of decimal places to display for the value. The decimal places attribute determines which optional component is rendered to ensure that the correct number of decimal places is displayed, and the component will also bracket negative values. A Visualforce page will also be created to demonstrate how the component can be used. How to do it… This recipe does not require any Apex controllers, so we can start with the custom component. Navigate to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter DecimalPlaces in the Label field. Accept the default DecimalPlaces that is automatically generated for the Name field. Paste the contents of the DecimalPlaces.component file from the code download into the Visualforce Markup area and click on the Save button. Next, create the Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter DecimalPlacesDemo in the Label field. Accept the default DecimalPlacesDemo that is automatically generated for the Name field. Paste the contents of the DecimalPlacesDemo.page file from the code download into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the DecimalPlacesDemo page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the DecimalPlacesDemo page: https://<instance>/apex/DecimalPlacesDemo. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Visualforce page iterates a number of opportunity records and delegates to the component to output the opportunity amount, deriving the number of decimal places from the amount. <c:DecimalPlaces dp="{!TEXT(MOD(opp.Amount/10000, 5))}" value="{!opp.Amount}" /> The component conditionally renders the appropriate output panel, which contains two conditionally rendered <apex:outputText /> components, one to display a positive value to the correct number of decimal places and another to display a bracketed negative value. <apex:outputPanel rendered="{!dp=='1'}"> <apex:outputText rendered="{!AND(NOT(ISNULL(VALUE)), value>=0)}" value="{0, number, #,##0.0}"> <apex:param value="{!value}"/> </apex:outputText> <apex:outputText rendered="{!AND(NOT(ISNULL(VALUE)), value<0)}" value="({0, number, #,##0.0})"> <apex:param value="{!ABS(value)}"/> </apex:outputText> </apex:outputPanel>

High Availability, Protection, and Recovery using Microsoft Azure

Packt
02 Apr 2015
23 min read
Microsoft Azure can be used to protect your on-premise assets such as virtual machines, applications, and data. In this article by Marcel van den Berg, the author of Managing Microsoft Hybrid Clouds, you will learn how to use Microsoft Azure to store backup data, replicate data, and even for orchestration of a failover and failback of a complete data center. We will focus on the following topics: High Availability in Microsoft Azure Introduction to geo-replication Disaster recovery using Azure Site Recovery (For more resources related to this topic, see here.) High availability in Microsoft Azure One of the most important limitations of Microsoft Azure is the lack of an SLA for single-instance virtual machines. If a virtual machine is not part of an availability set, that instance is not covered by any kind of SLA. The reason for this is that when Microsoft needs to perform maintenance on Azure hosts, in many cases, a reboot is required. Reboot means the virtual machines on that host will be unavailable for a while. So, in order to accomplish High Availability for your application, you should have at least two instances of the application running at any point in time. Microsoft is working on some sort of hot patching which enables virtual machines to remain active on hosts being patched. Details are not available at the moment of writing. High Availability is a crucial feature that must be an integral part of an architectural design, rather than something that can be "bolted on" to an application afterwards. Designing for High Availability involves leveraging both the development platform as well as available infrastructure in order to ensure an application's responsiveness and overall reliability. The Microsoft Azure Cloud platform offers software developers PaaS extensibility features and network administrators IaaS computing resources that enable availability to be built into an application's design from the beginning. The good news is that organizations with mission-critical applications can now leverage core features within the Microsoft Azure platform in order to deploy highly available, scalable, and fault-tolerant cloud services that have been shown to be more cost-effective than traditional approaches that leverage on-premises systems. Microsoft Failover Clustering support Windows Server Failover Clustering (WSFC) is not supported on Azure. However, Microsoft does support SQL Server AlwaysOn Availability Groups. For AlwaysOn Availability Groups, there is currently no support for availability group listeners in Azure. Also, you must work around a DHCP limitation in Azure when creating WSFC clusters in Azure. After you create a WSFC cluster using two Azure virtual machines, the cluster name cannot start because it cannot acquire a unique virtual IP address from the DHCP service. Instead, the IP address assigned to the cluster name is a duplicate address of one of the nodes. This has a cascading effect that ultimately causes the cluster quorum to fail, because the nodes cannot properly connect to one another. So if your application uses Failover Clustering, it is likely that you will not move it over to Azure. It might run, but Microsoft will not assist you when you encounter issues. Load balancing Besides clustering, we can also create highly available nodes using load balancing. Load balancing is useful for stateless servers. These are servers that are identical to each other and do not have a unique configuration or data. 
When two or more virtual machines deliver the same application logic, you will need a mechanism that is able to redirect network traffic to those virtual machines. The Windows Network Load Balancing (NLB) feature in Windows Server is not supported on Microsoft Azure; instead, an Azure load balancer does exactly this. It analyzes incoming network traffic, determines the type of traffic, and reroutes it to a service.

The Azure load balancer is provided as a cloud service. In fact, this cloud service runs on virtual appliances managed by Microsoft, and these are completely software-defined. The moment an administrator adds an endpoint, a set of load balancers is instructed to pass incoming network traffic on a certain port to a port on a virtual machine. If a load balancer fails, another one will take over. Azure load balancing is performed at layer 4 of the OSI model. This means the load balancer is not aware of the application content of the network packages; it just distributes packets based on network ports.

To load balance over multiple virtual machines, you can create a load-balanced set by performing the following steps:

1. In Azure Management Portal, select the virtual machine whose service should be load balanced.
2. Select Endpoints in the upper menu.
3. Click on Add.
4. Select Add a stand-alone endpoint and click on the right arrow.
5. Select a name or a protocol and set the public and private port.
6. Enable Create a load-balanced set and click on the right arrow.
7. Next, fill in a name for the load-balanced set.
8. Fill in the probe port, the probe interval, and the number of probes. This information is used by the load balancer to check whether the service is available. It will connect to the probe port at the specified interval; if the specified number of probes all fail to connect, the load balancer will no longer distribute traffic to this virtual machine.
9. Click on the check mark.

The load balancing mechanism available is based on a hash. Microsoft Azure Load Balancer uses a five tuple (source IP, source port, destination IP, destination port, and protocol type) to calculate the hash that is used to map traffic to the available servers. A second load balancing mode was introduced in October 2014, called Source IP Affinity (also known as session affinity or client IP affinity). When Source IP Affinity is used, connections initiated from the same client computer go to the same DIP endpoint.
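To make the five-tuple hashing and Source IP Affinity modes easier to picture, here is a small, purely illustrative Python sketch. It is not Azure's actual algorithm, and the backend names are invented; it only shows how hashing the full five tuple, or just the client address, maps flows onto a fixed set of backend instances.

import hashlib

# Hypothetical pool of backend virtual machines behind the load balancer.
BACKENDS = ["vm-a", "vm-b", "vm-c"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, source_ip_affinity=False):
    """Map a flow to a backend by hashing either the full five tuple or,
    when source IP affinity is enabled, only the client address."""
    if source_ip_affinity:
        key = src_ip
    else:
        key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}"
    digest = hashlib.sha256(key.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# With the five-tuple hash, connections from the same client but different source
# ports may land on different VMs; with source IP affinity, the client always maps
# to the same VM regardless of the source port.
print(pick_backend("203.0.113.10", 50001, "10.0.0.5", 443, "TCP"))
print(pick_backend("203.0.113.10", 50002, "10.0.0.5", 443, "TCP"))
print(pick_backend("203.0.113.10", 50001, "10.0.0.5", 443, "TCP", source_ip_affinity=True))
print(pick_backend("203.0.113.10", 50002, "10.0.0.5", 443, "TCP", source_ip_affinity=True))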
These load balancers provide high availability inside a single data center. If a virtual machine that is part of a cluster of instances fails, the load balancer will notice this and remove that virtual machine's IP address from its table. However, load balancers will not protect against the failure of a complete data center. The domains that are used to direct clients to an application will route to a particular virtual IP that is bound to an Azure data center. To keep an application accessible even if an Azure region has failed, you can use Azure Traffic Manager. This service can be used for several purposes:

- To fail over to a different Azure region if a disaster occurs
- To provide the best user experience by directing network traffic to the Azure region closest to the location of the user
- To reroute traffic to another Azure region whenever there's planned maintenance

The main task of Traffic Manager is to map a DNS query to an IP address that is the access point of a service. This job can be compared, for example, with that of someone working at the X-ray machines at an airport. I'm guessing that you have all seen those multiple rows of X-ray machines. Each queue at an X-ray machine has a different length at any moment. An officer standing at the entry of the area distributes people over the available X-ray machines such that all queues remain roughly equal in length.

Traffic Manager provides you with a choice of load-balancing methods, including performance, failover, and round-robin. Performance load balancing measures the latency between the client and the cloud service endpoint. Traffic Manager is not aware of the actual load on the virtual machines servicing applications. As Traffic Manager resolves endpoints of Azure cloud services only, it cannot be used for load balancing between an Azure region and a non-Azure region (for example, Amazon EC2) or between on-premises and Azure services. It will perform health checks on a regular basis by querying the endpoints of the services. If an endpoint does not respond, Traffic Manager will stop distributing network traffic to that endpoint for as long as the state of the endpoint is unavailable. Traffic Manager is available in all Azure regions. Microsoft charges for using this service based on the number of DNS queries that are received by Traffic Manager. As the service is attached to an Azure subscription, you will be required to contact Azure support to transfer Traffic Manager to a different subscription.

The following table shows the differences between Azure's built-in load balancer and Traffic Manager:

                        Load balancer                   Traffic Manager
Distribution targets    Must reside in same region      Can be across regions
Load balancing          5 tuple, Source IP Affinity     Performance, failover, and round-robin
Level                   OSI layer 4, TCP/UDP ports      OSI layer 4, DNS queries

Third-party load balancers

In certain configurations, the default Azure load balancer might not be sufficient. Several vendors support or are starting to support Azure. One of them is Kemp Technologies, which offers a free load balancer for Microsoft Azure. The Virtual LoadMaster (VLM) provides layer 7 application delivery. The virtual appliance has some limitations compared to the commercially available unit: the maximum bandwidth is limited to 100 Mbps and High Availability is not offered, which means the Kemp LoadMaster for Azure free edition is a single point of failure. Also, the number of SSL transactions per second is limited.

One of the use cases in which a third-party load balancer is required is when we use Microsoft Remote Desktop Gateway. As you might know, Citrix has supported running Citrix XenApp and Citrix XenDesktop on Azure since 2013. This means service providers can offer cloud-based desktops and applications using these Citrix solutions. To make this a working configuration, session affinity is required. Session affinity makes sure that network traffic is always routed over the same server. Windows Server 2012 Remote Desktop Gateway uses two HTTP channels, one for input and one for output, which must be routed over the same Remote Desktop Gateway. The Azure load balancer is only able to do round-robin load balancing, which does not guarantee that both channels use the same server. However, hardware and software load balancers that support IP affinity, cookie-based affinity, or SSL ID-based affinity (and thus ensure that both HTTP connections are routed to the same server) can be used with Remote Desktop Gateway. Another use case is load balancing of Active Directory Federation Services (ADFS).
Microsoft Azure can be used as a backup for on-premises Active Directory (AD). Suppose your organization is using Office 365. To provide single sign-on, a federation has been set up between Office 365 directory and your on-premises AD. If your on-premises ADFS fails, external users would not be able to authenticate. By using Microsoft Azure for ADFS, you can provide high availability for authentication. Kemp LoadMaster for Azure can be used to load balance network traffic to ADFS and is able to do proper load balancing. To install Kemp LoadMaster, perform the following steps: Download the Publish Profile settings file from https://windows.azure.com/download/publishprofile.aspx. Use PowerShell for Azure with the Import-AzurePublishSettingsFile command. Upload the KEMP supplied VHD file to your Microsoft Azure storage account. Publish the VHD as an image. The VHD will be available as an image. The image can be used to create virtual machines. The complete steps are described in the documentation provided by Kemp. Geo-replication of data Microsoft Azure has geo-replication of Azure Storage enabled by default. This means all of your data is not only stored at three different locations in the primary region, but also replicated and stored at three different locations at the paired region. However, this data cannot be accessed by the customer. Microsoft has to declare a data center or storage stamp as lost before Microsoft will failover to the secondary location. In the rare circumstance where a failed storage stamp cannot be recovered, you will experience many hours of downtime. So, you have to make sure you have your own disaster recovery procedures in place. Zone Redundant Storage Microsoft offers a third option you can use to store data. Zone Redundant Storage (ZRS) is a mix of two options for data redundancy and allows data to be replicated to a secondary data center / facility located in the same region or to a paired region. Instead of storing six copies of data like geo-replicated storage does, only three copies of data are stored. So, ZRS is a mix of local redundant storage and geo-replicated storage. The cost for ZRS is about 66 percent of the cost for GRS. Snapshots of the Microsoft Azure disk Server virtualization solutions such as Hyper-V and VMware vSphere offer the ability to save the state of a running virtual machine. This can be useful when you're making changes to the virtual machine but want to have the ability to reverse those changes if something goes wrong. This feature is called a snapshot. Basically, a virtual disk is saved by marking it as read only. All writes to the disk after a snapshot has been initiated are stored on a temporary virtual disk. When a snapshot is deleted, those changes are committed from the delta disk to the initial disk. While the Microsoft Azure Management Portal does not have a feature to create snapshots, there is an ability to make point-in-time copies of virtual disks attached to virtual machines. Microsoft Azure Storage has the ability of versioning. Under the hood, this works differently than snapshots in Hyper-V. It creates a snapshot blob of the base blob. Snapshots are by no ways a replacement for a backup, but it is nice to know you can save the state as well as quickly reverse if required. Introduction to geo-replication By default, Microsoft replicates all data stored on Microsoft Azure Storage to the secondary location located in the paired region. Customers are able to enable or disable the replication. 
When enabled, customers are charged. When Geo Redundant Storage has been enabled on a storage account, all data is asynchronous replicated. At the secondary location, data is stored on three different storage nodes. So even when two nodes fail, the data is still accessible. However, before the read access Geo-Redundant feature was available, customers had no way to actually access replicated data. The replicated data could only be used by Microsoft when the primary storage could not be recovered again. Microsoft will try everything to restore data in the primary location and avoid a so-called geo-failover process. A geo-failover process means that a storage account's secondary location (the replicated data) will be configured as the new primary location. The problem is that a geo-failover process cannot be done per storage account, but needs to be done at the storage stamp level. A storage stamp has multiple racks of storage nodes. You can imagine how much data and how many customers are involved when a storage stamp needs to failover. Failover will have an effect on the availability of applications. Also, because of the asynchronous replication, some data will be lost when a failover is performed. Microsoft is working on an API that allows customers to failover a storage account themselves. When geo-redundant replication is enabled, you will only benefit from it when Microsoft has a major issue. Geo-redundant storage is neither a replacement for a backup nor for a disaster recovery solution. Microsoft states that the Recover Point Objective (RPO) for Geo Redundant Storage will be about 15 minutes. That means if a failover is required, customers can lose about 15 minutes of data. Microsoft does not provide a SLA on how long geo-replication will take. Microsoft does not give an indication for the Recovery Time Objective (RTO). The RTO indicates the time required by Microsoft to make data available again after a major failure that requires a failover. Microsoft once had to deal with a failure of storage stamps. They did not do a failover but it took many hours to restore the storage service to a normal level. In 2013, Microsoft introduced a new feature called Read Access Geo Redundant Storage (RA-GRS). This feature allows customers to perform reads on the replicated data. This increases the read availability from 99.9 percent when GRS is used to above 99.99 percent when RA-GRS is enabled. Microsoft charges more when RA-GRS is enabled. RA-GRS is an interesting addition for applications that are primarily meant for read-only purposes. When the primary location is not available and Microsoft has not done a failover, writes are not possible. The availability of the Azure Virtual Machine service is not increased by enabling RA-GRS. While the VHD data is replicated and can be read, the virtual machine itself is not replicated. Perhaps this will be a feature for the future. Disaster recovery using Azure Site Recovery Disaster recovery has always been on the top priorities for organizations. IT has become a very important, if not mission-critical factor for doing business. A failure of IT could result in loss of money, customers, orders, and brand value. 
There are many situations that can disrupt IT, such as:

- Hurricanes
- Floods
- Earthquakes
- Disasters such as a failure of a nuclear power plant
- Fire
- Human error
- Outbreak of a virus
- Hardware or software failure

While these threats are clear and the risk of being hit by such a threat can be calculated, many organizations do not have proper protection against them. Disaster recovery solutions can help an organization to continue doing business in three different situations:

- Avoiding a possible failure of IT infrastructure by moving servers to a different location.
- Avoiding a disaster situation, such as hurricanes or floods, since such situations are generally well known in advance due to weather forecasting capabilities.
- Recovering as quickly as possible when a disaster has hit the data center. Disaster recovery is performed when a disaster unexpectedly hits the data center, such as a fire, hardware error, or human error.

Some reasons for not having a proper disaster recovery plan are complexity, lack of time, and ignorance; however, in most cases, a lack of budget and the belief that disaster recovery is expensive are the main reasons. Almost all organizations that have been hit by a major disaster causing unacceptable periods of downtime started to implement a disaster recovery plan, including technology, immediately after they recovered. However, in many cases, this insight came too late. According to Gartner, 43 percent of companies experiencing disasters never reopen and 29 percent close within 2 years.

Server virtualization has made disaster recovery a lot easier and more cost effective. Verifying that your DR procedure actually works as designed and matches RTO and RPO is much easier using virtual machines. Since Windows Server 2012, Hyper-V has a feature for asynchronous replication of virtual machine virtual disks to another location. This feature, Hyper-V Replica, is very easy to enable and configure, and it does not cost extra. Hyper-V Replica is storage agnostic, which means the storage type at the primary site can be different from the storage type used at the secondary site. So, Hyper-V Replica works perfectly when your virtual machines are hosted on, for example, EMC storage while an HP solution is used at the secondary site.

While replication is a must for DR, another very useful feature in DR is automation. As an administrator, you really appreciate the option to click on a button after deciding to perform a failover and then sit back and relax. Recovery is mostly a stressful job when your primary location is flooded or burned, and lots of things can go wrong if recovery is done manually. This is why Microsoft designed Azure Site Recovery. Azure Site Recovery is able to assist in disaster recovery in several scenarios:

- A customer has two data centers, both running Hyper-V managed by System Center Virtual Machine Manager. Hyper-V Replica is used to replicate data at the virtual machine level.
- A customer has two data centers, both running Hyper-V managed by System Center Virtual Machine Manager. NetApp storage is used to replicate between the two sites at the storage level.
- A customer has a single data center running Hyper-V managed by System Center Virtual Machine Manager.
- A customer has two data centers, both running VMware vSphere. In this case, InMage Scout software is used to replicate between the two data centers. Azure is not used for orchestration.
- A customer has a single data center not managed by System Center Virtual Machine Manager.
In the second scenario, Microsoft Azure is used as a secondary data center if a disaster makes the primary data center unavailable. Microsoft has also announced support for a scenario where vSphere is used on-premises and Azure Site Recovery is used to replicate data to Azure; to enable this, InMage software will be used. Details were not available at the time this article was written. In the first two described scenarios, Site Recovery is used to orchestrate the failover and failback to the secondary location. The management is done using Azure Management Portal, which is available using any browser supporting HTML5, so a failover can be initiated even from a tablet or smartphone.

Using Azure as a secondary data center for disaster recovery

Azure Site Recovery went into preview in June 2014. For organizations using Hyper-V, there is no direct need to have a secondary data center, as Azure can be used as a target for Hyper-V Replica. Some of the characteristics of the service are as follows:

- Allows nondisruptive disaster recovery failover testing
- Automated reconfiguration of the network configuration of guests
- Storage agnostic: supports any type of on-premises storage supported by Hyper-V
- Support for VSS to enable application consistency
- Protects more than 1,000 virtual machines (Microsoft tested with 2,000 virtual machines and this went well)

To be able to use Site Recovery, customers do not have to use System Center Virtual Machine Manager; Site Recovery can be used without it installed. Site Recovery will use information such as virtual networks provided by SCVMM to map networks available in Microsoft Azure. Site Recovery does not support the ability to send a copy of the virtual hard disks on removable media to an Azure data center to avoid performing the initial replication over the WAN (seeding). Customers will need to transfer all the replication data over the network. ExpressRoute will help to get a much better throughput compared to a site-to-site VPN over the Internet.

Failover to Azure can be as simple as clicking on a single button. Site Recovery will then create new virtual machines in Azure and start them in the order defined in the recovery plan. A recovery plan is a workflow that defines the startup sequence of virtual machines. It is possible to pause the recovery plan to allow a manual check, for example; if all is okay, the recovery plan will continue doing its job. Multiple recovery plans can be created. Microsoft Volume Shadow Copy Services (VSS) is supported, which allows application consistency. Replication of data can be configured at intervals of 15 seconds, 5 minutes, or 15 minutes, and replication is performed asynchronously. For recovery, 24 recovery points are available. These are like snapshots or point-in-time copies. If the most recent replica cannot be used (for example, because of damaged data), another replica can be used for restore. You can also configure extended replication. In extended replication, your Replica server forwards changes that occur on the primary virtual machines to a third server (the extended Replica server). After a planned or unplanned failover from the primary server to the Replica server, the extended Replica server provides further business continuity protection. As with ordinary replication, you configure extended replication by using Hyper-V Manager, Windows PowerShell (using the -Extended option), or WMI. At the moment, only the VHD virtual disk format is supported.
Generation 2 virtual machines that can be created on Hyper-V are not supported by Site Recovery. Generation 2 virtual machines have a simplified virtual hardware model and support Unified Extensible Firmware Interface (UEFI) firmware instead of BIOS-based firmware. Also, boot from PXE, SCSI hard disk, SCSCI DVD, and Secure Boot are supported in Generation 2 virtual machines. However on March 19 Microsoft responded to numerous customer requests on support of Site Recovery for Generation 2 virtual machines. Site Recovery will soon support Gen 2 VM's. On failover, the VM will be converted to a Gen 1 VM. On failback, the VM will be converted to Gen 2. This conversion is done till the Azure platform natively supports Gen 2 VM's. Customers using Site Recovery are charged only for consumption of storage as long as they do not perform a failover or failover test. Failback is also supported. After running for a while in Microsoft Azure customers are likely to move their virtual machines back to the on-premises, primary data center. Site Recovery will replicate back only the changed data. Mind that customer data is not stored in Microsoft Azure when Hyper-V Recovery Manager is used. Azure is used to coordinate the failover and recovery. To be able to do this, it stores information on network mappings, runbooks, and names of virtual machines and virtual networks. All data sent to Azure is encrypted. By using Azure Site Recovery, we can perform service orchestration in terms of replication, planned failover, unplanned failover, and test failover. The entire engine is powered by Azure Site Recovery Manager. Let's have a closer look on the main features of Azure Site Recovery. It enables three main scenarios: Test Failover or DR Drills: Enable support for application testing by creating test virtual machines and networks as specified by the user. Without impacting production workloads or their protection, HRM can quickly enable periodic workload testing. Planned Failovers (PFO): For compliance or in the event of a planned outage, customers can use planned failovers, virtual machines are shutdown, final changes are replicated to ensure zero data loss, and then virtual machines are brought up in order on the recovery site as specified by the RP. More importantly, failback is a single-click gesture that executes a planned failover in the reverse direction. Unplanned Failovers (UFO): In the event of unplanned outage or a natural disaster, HRM opportunistically attempts to shut down the primary machines if some of the virtual machines are still running when the disaster strikes. It then automates their recovery on the secondary site as specified by the RP. If your secondary site uses a different IP subnet, Site Recovery is able to change the IP configuration of your virtual machines during the failover. Part of the Site Recovery installation is the installation of a VMM provider. This component communicates with Microsoft Azure. Site Recovery can be used even if you have a single VMM to manage both primary and secondary sites. Site Recovery does not rely on availability of any component in the primary site when performing a failover. So it doesn't matter if the complete site including link to Azure has been destroyed, as Site Recovery will be able to perform the coordinated failover. Azure Site Recovery to customer owned sites is billed per protected virtual machine per month. The costs are approximately €12 per month. Microsoft bills for the average consumption of virtual machines per month. 
So if you are protecting 20 virtual machines in the first half and 0 in the second half, you will be charged for 10 virtual machines for that month. When Azure is used as a target, Microsoft will only charge for consumption of storage during replication. The costs for this scenario are €40.22/month per instance protected. As soon as you perform a test failover or actual failover Microsoft will charge for the virtual machine CPU and memory consumption. Summary Thus this article has covered the concepts of High Availability in Microsoft Azure and disaster recovery using Azure Site Recovery, and also gives an introduction to the concept of geo-replication. Resources for Article: Further resources on this subject: Windows Azure Mobile Services - Implementing Push Notifications using [article] Configuring organization network services [article] Integration with System Center Operations Manager 2012 SP1 [article]

Troubleshooting in OpenStack Cloud Computing

Packt
01 Oct 2012
5 min read
Introduction

OpenStack is a complex suite of software that can make tracking down issues and faults quite daunting to beginners and experienced system administrators alike. While there is no single approach to troubleshooting systems, understanding where OpenStack logs vital information and what tools are available to help track down bugs will help resolve issues we may encounter.

Checking OpenStack Compute Services

OpenStack provides tools to check various parts of Compute Services, and we'll use common system commands to check whether our environment is running as expected.

Getting ready

To check our OpenStack Compute host we must log in to that server, so do this now before following the given steps.

How to do it...

To check that Nova is running the required services, we invoke the nova-manage tool and ask it various questions about the environment, as follows.

To check that the OpenStack Compute hosts are running OK:

sudo nova-manage service list

The output lists each service with a :-) icon, which indicates that everything is fine. If you see XXX where the :-) icon should be, then you have a problem. Troubleshooting is covered at the end of the book, but if you do see XXX, the answer will be in the logs at /var/log/nova/. If you get intermittent XXX and :-) icons for a service, first check whether the clocks are in sync.

Checking Glance: Glance doesn't have a tool to check, so we can use some system commands instead.

ps -ef | grep glance
netstat -ant | grep 9292.*LISTEN

These should return process information for Glance to show it is running, and 9292 is the default port that should be open in the LISTEN mode on your server, ready for use.

Other services that you should check:

rabbitmq:

sudo rabbitmqctl status

This prints the node's status report when everything is running OK.

ntp (Network Time Protocol, for keeping nodes in sync):

ntpq -p

It should return output regarding the NTP servers being contacted.

MySQL Database Server:

MYSQL_PASS=openstack
mysqladmin -uroot -p$MYSQL_PASS status

This will return some statistics about MySQL, if it is running.

How it works...

We have used some basic commands that communicate with OpenStack Compute and other services to show they are running. This elementary level of troubleshooting ensures you have the system running as expected.
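When you have several hosts to verify, the manual checks above are easy to wrap in a small script. The following Python sketch is not part of the original recipe; it simply shells out to the same commands used in this section and reports whether each one returned successfully. Adjust the command list to match the services running on your host.

import subprocess

# The same health checks performed manually above, keyed by a friendly name.
# The [g] in the grep pattern stops grep from matching its own process entry.
CHECKS = {
    "nova services": "sudo nova-manage service list",
    "glance process": "ps -ef | grep [g]lance",
    "glance api port": "netstat -ant | grep '9292.*LISTEN'",
    "rabbitmq": "sudo rabbitmqctl status",
    "ntp peers": "ntpq -p",
}

def run_checks():
    for name, command in CHECKS.items():
        # shell=True because the checks rely on pipes, wildcards, and sudo as typed above.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"{name:<16} {status}")

if __name__ == "__main__":
    run_checks()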
Understanding logging

Logging is important in all computer systems, but the more complex the system, the more you rely on being able to spot problems to cut down on troubleshooting time. Understanding logging in OpenStack is important to ensure your environment is healthy and that you are able to submit relevant log entries back to the community to help fix bugs.

Getting ready

Log in as the root user onto the appropriate servers where the OpenStack services are installed.

How to do it...

OpenStack produces a large number of logs that help troubleshoot our OpenStack installations. The following details outline where these services write their logs.

OpenStack Compute Services logs

Logs for the OpenStack Compute services are written to /var/log/nova/, which is owned by the nova user by default. To read these, log in as the root user. The following is a list of services and their corresponding logs; a short script for scanning these logs follows the list:

- nova-compute: /var/log/nova/nova-compute.log. Log entries regarding the spinning up and running of the instances.
- nova-network: /var/log/nova/nova-network.log. Log entries regarding network state, assignment, routing, and security groups.
- nova-manage: /var/log/nova/nova-manage.log. Log entries produced when running the nova-manage command.
- nova-scheduler: /var/log/nova/nova-scheduler.log. Log entries pertaining to the scheduler, its assignment of tasks to nodes, and messages from the queue.
- nova-objectstore: /var/log/nova/nova-objectstore.log. Log entries regarding the images.
- nova-api: /var/log/nova/nova-api.log. Log entries regarding user interaction with OpenStack as well as messages regarding interaction with other components of OpenStack.
- nova-cert: /var/log/nova/nova-cert.log. Entries regarding the nova-cert process.
- nova-console: /var/log/nova/nova-console.log. Details about the nova-console VNC service.
- nova-consoleauth: /var/log/nova/nova-consoleauth.log. Authentication details related to the nova-console service.
- nova-dhcpbridge: /var/log/nova/nova-dhcpbridge.log. Network information regarding the dhcpbridge service.
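As a companion to the log locations listed above, the following Python sketch (not part of the original article) does a quick first pass over the OpenStack Compute logs in /var/log/nova/, collecting the lines that contain ERROR. Run it as root, since the logs are readable only by privileged users.

import glob
import os

LOG_DIR = "/var/log/nova"

def find_errors(log_dir=LOG_DIR, level="ERROR"):
    """Return a mapping of each log file to the lines containing the given level."""
    hits = {}
    for path in sorted(glob.glob(os.path.join(log_dir, "*.log"))):
        with open(path, errors="replace") as handle:
            matching = [line.rstrip() for line in handle if level in line]
        if matching:
            hits[path] = matching
    return hits

if __name__ == "__main__":
    for log_file, lines in find_errors().items():
        print(f"{log_file}: {len(lines)} ERROR line(s), most recent shown below")
        for line in lines[-5:]:
            print("   ", line)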
OpenStack Dashboard logs

OpenStack Dashboard (Horizon) is a web application that runs through Apache by default, so any errors and access details will be in the Apache logs. These can be found in /var/log/apache2/*.log, which will help you understand who is accessing the service as well as report on any errors seen with the service.

OpenStack Storage logs

OpenStack Storage (Swift) writes logs to syslog by default. On an Ubuntu system, these can be viewed in /var/log/syslog. On other systems, these might be available at /var/log/messages. Logging can be adjusted so that these messages are filtered in syslog using the log_level, log_facility, and log_message options, which each service allows you to set. If you change any of these options, you will need to restart that service to pick up the change.

Log-level settings in OpenStack Compute services

Many OpenStack services allow you to control the chatter in the logs by setting different log output settings. Some services, though, tend to produce a lot of DEBUG noise by default. This is controlled within the configuration files for that service; the Glance Registry service, for example, exposes these settings in its configuration files, and many other services are adopting this facility. In production, you would set debug to False and optionally keep a fairly high level of INFO requests being produced, which may help with the general health reports of your OpenStack environment.

How it works...

Logging is an important activity in any software, and OpenStack is no different. It allows an administrator to track down problematic activity that can be used in conjunction with the community to help provide a solution. Understanding where the services log, and managing those logs so that someone can identify problems quickly and easily, are important.