
How-To Tutorials - Virtualization

115 Articles

How to Install VirtualBox Guest Additions

Packt
28 Apr 2010
6 min read
In this article by Alfonso V. Romero, author of VirtualBox 3.1: Beginner's Guide, you will learn what the Guest Additions are and how to install them on Windows, Linux, and OpenSolaris virtual machines.

Introducing Guest Additions

OK, you have been playing with a couple of virtual machines by now, and I know it feels great to be capable of running two different operating systems on the same machine. But as I said at the beginning of this article, what's the use if you can't share information between your host and guest systems, or if you can't maximize your guest screen? Well, that's what Guest Additions are for. With the Guest Additions installed in a virtual machine, you'll be able to enjoy all of these:

- Full keyboard and mouse integration between your host and your guest operating systems: you won't need to use the capture/uncapture feature anymore!
- Enhanced video support in your guest virtual machine: you will be able to use 3D and 2D video acceleration features, and if you resize your virtual machine's screen, its video resolution will adjust automatically. Say hello to full screen!
- Better time synchronization between host and guest: a virtual machine doesn't know it's running inside another computer, so it expects to have 100% of the CPU and all the other resources without any interference. Since the host computer needs to use those resources too, things can get messy, especially if both host and guest are running several applications at the same time, as would be the case in most situations. But don't worry about it! Guest Additions re-synchronize your virtual machine's time regularly to avoid any serious problems.
- Shared folders: this is one of my favorite Guest Additions features! You can designate one or more folders to share files easily between your host and your guest, as if they were network shares.
- Seamless windows: another amazing feature that lets you use any application in your guest as if you were running it directly from your host PC. For example, if you have a Linux host and a Windows virtual machine, you'll be able to use MS Word or MS Excel as if you were running them directly from your Linux machine!
- Shared clipboard: a feature I couldn't live without, because it lets you copy and paste information between your host and guest applications seamlessly.
- Automated Windows guest logons: the Guest Additions for Windows provide modules to automate the logon process in a Windows virtual machine.

Now that I've shown you that life isn't worth living without the Guest Additions installed in your virtual machines, let's see how to install them on Windows, Linux, and OpenSolaris guests.

Installing Guest Additions for Windows

"Hmmm… I still can't see how you plan to get everyone in this office using VirtualBox… you can't even use that darned virtual machine in full screen!" your boss says in a mocking tone of voice. "Well boss, if you stick with me through this article's exercises, you'll get your two cents' worth!" you respond, feeling like the new kid in town…

Time for action – installing Guest Additions on a Windows XP virtual machine

In this exercise, I'll show you how to install Guest Additions on your Windows XP virtual machine, so that your boss can stop whining about not being able to use the Windows VM as a real PC...

1. Open VirtualBox, and start your WinXP virtual machine.
2. Press F8 when Windows XP is booting to enter the Windows Advanced Options Menu, select the Safe Mode option, and press Enter to continue.
3. Wait for Windows XP to boot, and then log in with your administrator account. The "Windows is running in safe mode" dialog will show up. Click on Yes to continue, and wait for Windows to finish booting up.
4. Select the Devices option from VirtualBox's menu bar, and click on the Install Guest Additions option. VirtualBox will mount the Guest Additions ISO file on your virtual machine's CD/DVD-ROM drive, and the Guest Additions installer will start automatically.
5. Click on Next to continue. The License Agreement dialog will appear next; click on the I Agree button to accept the agreement and continue.
6. Leave the default destination folder on the Choose Install Location dialog, and click on Next to continue.
7. The Choose Components dialog will appear next. Select the Direct 3D Support option to enable it, and click on Install to continue. Guest Additions will begin to install in your virtual machine.
8. The next dialog will inform you that the Guest Additions setup program replaced some Windows system files. If you receive a warning dialog from the Windows File Protection mechanism, click on the Cancel button of that warning dialog to avoid restoring the original files, then click on OK to continue.
9. The setup program will ask if you want to reboot your virtual machine to complete the installation process. Make sure the Reboot now option is enabled, and click on Finish to continue.
10. Your WinXP virtual machine will reboot automatically, and once it has finished booting up, a VirtualBox - Information dialog will appear to tell you that your guest operating system now supports mouse pointer integration. Enable the Do not show this message again option, and click on the OK button to continue.

Now you'll be able to move your mouse freely between your virtual machine's area and your host machine's area! You'll also be able to resize your virtual machine's screen, and it will adjust automatically!

What just happened?

Hey, it was pretty simple and neat, huh? Now you can start to get the real juice out of your Windows virtual machines! And your boss will never complain again about your wonderful idea of virtualizing your office environment with VirtualBox! The Guest Additions installation process on Windows virtual machines is a little more complicated than its Linux or OpenSolaris counterparts, because Windows has a file protection mechanism that interferes with some system files VirtualBox needs to replace. By using Safe Mode, VirtualBox can override the file protection mechanism, and the Guest Additions software can be installed successfully. But if you don't want or need 3D guest support, you can install Guest Additions in Windows' normal mode.
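On a Linux guest, by contrast, the installation is driven from the guest's terminal rather than a setup wizard. The following is a minimal sketch for a Debian/Ubuntu-style guest; the installer's file name on the Guest Additions CD varies by VirtualBox version and guest architecture, so verify the names below against your own ISO:

# In the guest, after choosing Devices | Install Guest Additions from the VirtualBox menu:
sudo apt-get install build-essential linux-headers-`uname -r`  # build tools the installer needs
sudo mount /dev/cdrom /mnt                                     # mount the Guest Additions ISO
sudo sh /mnt/VBoxLinuxAdditions.run                            # name may differ, e.g. VBoxLinuxAdditions-amd64.run on 3.1
sudo reboot                                                    # load the freshly built kernel modules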


The Design Documentation

Packt
23 Jan 2014
19 min read
The design documentation provides written documentation of the design factors and the choices the architect has made in the design to satisfy the business and technical requirements. The design documentation also aids in the implementation of the design. In many cases, where the design architect is not responsible for the implementation, the design documents ensure the successful implementation of the design by the implementation engineer.

Once you have created the documentation for a few designs, you will be able to develop standard processes and templates to aid in the creation of design documentation. Documentation can vary from project to project, and many consulting companies and resellers have standard documentation templates that they use when designing solutions. A properly documented design should include the following information:

- Architecture design
- Implementation plan
- Installation guide
- Validation test plan
- Operational procedures

This information can be included in a single document or separated into different documents. VMware provides Service Delivery Kits to VMware partners through the VMware Partner University portal at http://www.vmware.com/go/partneruniversity; these kits include documentation templates that can be used as a foundation for creating design documents. If you do not have access to these templates, example outlines are provided in this article to assist you in developing your own design documentation templates. The final steps of the design process are gaining customer approval to begin implementation of the design, and the implementation itself.

Creating the architecture design document

The architecture design document is a technical document describing the components and specifications required to support the solution and ensure that the specific business and technical requirements of the design are satisfied. An excellent example of an architecture design document is the Cloud Infrastructure Architecture Case Study white paper, which can be found at http://www.vmware.com/files/pdf/techpaper/cloud-infrastructure-achitecture-case-study.pdf.

The architect creates the architecture design document to record the design factors and the specific choices that have been made to satisfy those factors. The document serves as a way for the architect to show his work when making design decisions. The architecture design document includes the conceptual, logical, and physical designs.

How to do it...

The architecture design document should include the following information:

- Purpose and overview
- Executive summary
- Design methodology
- Conceptual design
- Logical management, storage, compute, and network design
- Physical management, storage, compute, and network design

How it works...

The Purpose and Overview section of the architecture design includes the Executive Summary section, which provides a high-level overview of the design and the goals the design will accomplish, and defines the purpose and scope of the architecture design document. The following is an example executive summary from the Cloud Infrastructure Architecture Case Study white paper:

Executive Summary: This architecture design was developed to support a virtualization project to consolidate 100 existing physical servers on to a VMware vSphere 5.x virtual infrastructure. The primary goals this design will accomplish are to increase operational efficiency and to provide high availability of customer-facing applications. This document details the recommended implementation of a VMware virtualization architecture based on specific business requirements and VMware recommended practices. The document provides both logical and physical design considerations for all related infrastructure components, including servers, storage, networking, management, and virtual machines. The scope of this document is specific to the design of the virtual infrastructure and the supporting components.

The purpose and overview section should also include details of the design methodology the architect has used in creating the architecture design: the processes followed to determine the business and technical requirements, along with definitions of the infrastructure qualities that influenced the design decisions.

Design factors (requirements, constraints, and assumptions) are documented as part of the conceptual design. To document the design factors, use a table to organize them and associate each with an ID that can be easily referenced. The following table illustrates an example of how to document the design requirements:

ID   | Requirement
R001 | Consolidate the existing 100 physical application servers down to five servers
R002 | Provide capacity to support growth for 25 additional application servers over the next five years
R003 | Server hardware maintenance should not affect application uptime
R004 | Provide N+2 redundancy to support a hardware failure during normal and maintenance operations

The conceptual design should also include tables documenting any constraints and assumptions. A high-level diagram of the conceptual design can also be included.

Details of the logical design are documented in the architecture design document. The logical design of management, storage, network, and compute resources should be included. When documenting the logical design, include any recommended practices that were followed, along with references to the requirements, constraints, and assumptions that influenced the design decisions. Show your work to support your design decisions: include any formulas used for resource calculations and provide detailed explanations of why design decisions were made. An example table outlining the logical design of compute resource requirements is as follows (the arithmetic behind it is sketched after the table):

Parameter                               | Specification
Current CPU resources required          | 100 GHz
CPU growth*                             | 25 GHz
CPU required (75 percent utilization)   | 157 GHz
Current memory resources required       | 525 GB
Memory growth*                          | 131 GB
Memory required (75 percent utilization)| 821 GB
Memory required (25 percent TPS savings)| 616 GB
*CPU and memory growth for the 25 additional application servers (R002)

Similar tables will be created to document the logical design for storage, network, and management resources.

The physical design documents the details of the physical hardware chosen, along with the configurations of both the physical and virtual hardware. Details of the vendors and hardware models chosen, and the reasons for those decisions, should be included as part of the physical design. The configuration of the physical hardware is documented along with the details of why specific configuration options were chosen. The physical design should also include diagrams that document the configuration of physical resources, such as physical network connectivity and storage layout.
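To show your work on the compute numbers above, it helps to include the arithmetic itself. The table's figures are consistent with adding the projected growth to the current requirement and then adding 25 percent headroom, with a further 25 percent saving assumed for transparent page sharing (TPS) on memory; that reading of the table is an assumption, so state your own formula explicitly in the document. A quick sketch of the arithmetic:

# CPU: (current + growth) plus 25 percent headroom
echo "(100 + 25) * 1.25" | bc    # 156.25, listed as 157 GHz in the table
# Memory: same headroom...
echo "(525 + 131) * 1.25" | bc   # 820.00, listed as 821 GB
# ...then the assumed 25 percent TPS saving
echo "821 * 0.75" | bc           # 615.75, listed as 616 GB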
A sample outline of the architecture design document is as follows:

- Cover page: includes the customer and project names
- Document version log: contains the log of authors and changes made to the document
- Document contacts: includes the subject matter experts involved in the creation of the design
- Table of contents: the index of the document sections for quick reference
- List of tables: the index of tables included in the document for quick reference
- List of figures: the index of figures included in the document for quick reference
- Purpose and overview: an executive summary providing an overview of the design and the design methodology followed in creating the design
- Conceptual design: the documentation of the design factors: requirements, constraints, and assumptions
- Logical design: the details of the logical management, storage, network, and compute design
- Physical design: the details of the selected hardware and the configuration of the physical and virtual hardware

Writing an implementation plan

The implementation plan documents the requirements necessary to complete the implementation of the design. It defines the project roles, what is expected of the customer, and what the customer can expect during the implementation of the design. This document is sometimes referred to as the statement of work. It defines the key points of contact, the requirements that must be satisfied to start the implementation, any project documentation deliverables, and how changes to the design and implementation will be handled.

How to do it...

The implementation plan should include the following information:

- Purpose statement
- Project contacts
- Implementation requirements
- Overview of implementation steps
- Definition of project documentation deliverables
- Implementation change management

How it works...

The purpose statement defines the purpose and scope of the document: what is included and a brief overview of the goals of the project. It is simply an introduction, so that someone reading the document can gain a quick understanding of what the document contains. The following is an example purpose statement:

This document serves as the implementation plan and defines the scope of the virtualization project. This document identifies points of contact for the project, lists implementation requirements, provides a brief description of each of the document deliverables, and provides an overview of the implementation process for the data-center virtualization project. The scope of this document is specific to the implementation of the virtual data center and the supporting components as defined in the Architecture Design.

Key project contacts, their roles, and their contact information should be included as part of the implementation plan document. These contacts include customer stakeholders, project managers, project architects, and implementation engineers.
The following is a sample table that can be used to document project contacts for the implementation plan:

Role                        | Name | Contact information
Customer project sponsor    |      |
Customer technical resource |      |
Project manager             |      |
Design architect            |      |
Implementation engineer     |      |
QA engineer                 |      |

Support contacts for the hardware and software used in the implementation may also be included in the table, for example, contact numbers for VMware support or other vendor support.

Implementation requirements cover the implementation dependencies, including access and facility requirements. Any hardware, software, and licensing that must be available to implement the design is also documented here.

Access requirements include the following:

- Physical access to the site
- Credentials necessary for access to resources, including Active Directory credentials and VPN credentials (if remote access is required)

Facility requirements include the following:

- Power and cooling to support the equipment that will be deployed as part of the design
- Rack space requirements

Hardware, software, and licensing requirements include the following:

- vSphere licensing
- Windows or other operating system licensing
- Other third-party application licensing
- Software (ISO images, physical media, and so on)
- Physical hardware (hosts, array, network switches, cables, and so on)

A high-level overview of the steps required to complete the implementation is also documented. The details of each step are not part of this document; only the steps that need to be performed are included. For example:

1. Procurement of hardware, software, and licensing
2. Scheduling of engineering resources
3. Verification of access and facility requirements
4. Inventory check of the required hardware, software, and licensing
5. Installation and configuration of the storage array
6. Rack, cable, and burn-in of the physical server hardware
7. Installation of ESXi on the physical servers
8. Installation of vCenter Server
9. Configuration of ESXi and vCenter
10. Testing and verification of the implementation
11. Migration of physical workloads to virtual machines
12. Operational verification of the implementation

The implementation overview may also include an implementation timeline documenting the time required to complete each of the steps.

Project documentation deliverables are defined as part of the implementation plan. Any documentation that will be delivered to the customer once the implementation has been completed should be detailed here, including the name of each document and a brief description of its purpose. The following table provides example descriptions of the project documentation deliverables:

Document               | Description
Architecture design    | A technical document describing the vSphere components and specifications required to achieve a solution that addresses the specific business and technical requirements of the design.
Implementation plan    | Identifies implementation roles and requirements. It provides a high-level map of the implementation and deliverables detailed in the design, and documents change management procedures.
Installation guide     | Provides detailed, step-by-step instructions on how to install and configure the products specified in the architecture design document.
Validation test plan   | Provides an overview of the procedures to be executed post-installation to verify whether or not the infrastructure is installed correctly. It can also be used at any point after the installation to verify whether or not the infrastructure continues to function correctly.
Operational procedures | Provides detailed, step-by-step instructions on how to perform common operational tasks after the design is implemented.

How changes are made to the design, specifically changes made to the design factors, must be well documented. Even a simple change to a requirement, or to an assumption that cannot be verified, can have a tremendous effect on the design and implementation. The process for submitting a change, researching its impact, and approving it should be documented in detail.

The following is an example outline for an implementation plan:

- Cover page: includes the customer and project names
- Document version log: contains the log of authors and changes made to the document
- Document contacts: includes the subject matter experts involved in the creation of the design
- Table of contents: the index of document sections for quick reference
- List of tables: the index of tables included in the document for quick reference
- List of figures: the index of figures included in the document for quick reference
- Purpose statement: defines the purpose of the document
- Project contacts: the key project points of contact
- Implementation requirements: the access, facilities, hardware, software, and licensing required to complete the implementation
- Implementation overview: the overview of the steps required to complete the implementation
- Project deliverables: the documents that will be provided as deliverables once the implementation has been completed

Developing an installation guide

The installation guide provides step-by-step instructions for the implementation of the architecture design. This guide should include detailed information about how to implement and configure all the resources associated with the virtual datacenter project. In many projects, the person creating the design is not the person responsible for implementing it. The installation guide outlines the steps necessary to implement the physical design outlined in the architecture design document, and should cover the installation of all components, including the storage and network configurations required to support the design. In a complex design, multiple installation guides may be created to document the installation of the various components, for example, separate installation guides for the storage, network, and vSphere installation and configuration.

How to do it...

The installation guide should include the following information:

- Purpose statement
- Assumption statement
- Step-by-step instructions to implement the design

How it works...

The purpose statement simply states the purpose of the document. The assumption statement describes any assumptions the document's author has made; commonly, it simply states that the document has been written assuming the reader is familiar with virtualization concepts and the architecture design. The following is an example of a basic purpose and assumption statement that can be used for an installation guide:

Purpose: This document provides a guide for installing and configuring the virtual infrastructure design defined in the Architecture Design.
Assumptions: This guide is written for an implementation engineer or administrator who is familiar with vSphere concepts and terminology. The guide is not intended for administrators who have no prior knowledge of vSphere concepts and terminology.

The installation guide should include details on implementing all areas of the design: configuration of the storage array, physical servers, physical network components, and vSphere components. The following are just a few examples of installation tasks to include instructions for:

- Storage array configuration
- Physical network configuration
- Physical host configuration
- ESXi installation
- vCenter Server installation and configuration
- Virtual network configuration
- Datastore configuration
- Installation and configuration of high availability, distributed resource scheduler, storage DRS, and other vSphere components

The installation guide should provide as much detail as possible. Along with the step-by-step procedures, screenshots can be used to provide installation guidance, for example, a screenshot detailing how to enable and configure the software iSCSI adapter.

The following is an example outline for an installation guide:

- Cover page: includes the customer and project names
- Document version log: contains the log of authors and changes made to the document
- Document contacts: includes the subject matter experts involved in the creation of the design
- Table of contents: the index of document sections for quick reference
- List of tables: the index of tables included in the document for quick reference
- List of figures: the index of figures included in the document for quick reference
- Purpose statement: defines the purpose of the document
- Assumption statement: defines any assumptions made in creating the document
- Installation guide: the step-by-step installation instructions to be followed when implementing the design

Creating a validation test plan

The validation test plan documents how the implementation will be verified: the criteria that must be met to determine the success of the implementation, and the test procedures that should be followed when validating the environment. The criteria and procedures defined in the validation test plan determine whether or not the design requirements have been successfully met.

How to do it...

The validation test plan should include the following information:

- Purpose statement
- Assumption statement
- Success criteria
- Test procedures

How it works...

The purpose statement defines the purpose of the validation test plan, and the assumption statement documents any assumptions the plan's author has made in developing it. Typically, the assumptions are that the testing and validation will be performed by someone who is familiar with the concepts and the design. The following is an example of a purpose and assumption statement for a validation test plan:

Purpose: This document contains testing procedures to verify that the implemented configurations specified in the Architecture Design document successfully address the customer requirements.

Assumptions: This document assumes that the person performing these tests has a basic understanding of VMware vSphere and is familiar with the accompanying design documentation. This document is not intended for administrators or testers who have no prior knowledge of vSphere concepts and terminology.
The success criteria determine whether or not the implemented design is operating as expected and, more importantly, whether or not the design requirements have been met. Success is measured based on whether or not the criteria satisfy the design requirements. The following table shows some examples of success criteria defined in a validation test plan:

Description                                                                                                | Measurement
Members of the vSphere administrators Active Directory group are able to access vCenter as administrators | Yes/No
Access is denied to users outside the vSphere administrators Active Directory group                       | Yes/No
Access to a host using the vSphere Client is permitted when lockdown mode is disabled                     | Yes/No
Access to a host using the vSphere Client is denied when lockdown mode is enabled                         | Yes/No
Cluster resource utilization is less than 75 percent                                                      | Yes/No

If the success criteria are not met, the design does not satisfy the design factors. This can be due to a misconfiguration or an error in the design; troubleshooting will be needed to identify the issue, or modifications to the design may need to be made.

Test procedures are performed to determine whether or not the success criteria have been met. They should cover usability, performance, and recoverability, and each should include the test description, the tasks to perform the test, and the expected results of the test. The following table provides some examples of usability testing procedures:

Test description | Tasks to perform test | Expected result
vCenter administrator access | Use the vSphere Web Client to access the vCenter Server. Log in as a user who is a member of the vSphere administrators AD group. | Administrator access to the inventory of the vCenter Server
vCenter access: no permissions | Use the vSphere Web Client to access the vCenter Server. Log in as a user who is not a member of the vSphere administrators AD group. | Access is denied
Host access: lockdown mode disabled | Disable lockdown mode through the DCUI. Use the vSphere Client to access the host and log in as root. | Direct access to the host using the vSphere Client is successful
Host access: lockdown mode enabled | Re-enable lockdown mode through the DCUI. Use the vSphere Client to access the host and log in as root. | Direct access to the host using the vSphere Client is denied

The following table provides some examples of reliability testing procedures:

Test description | Tasks to perform test | Expected result
Host storage path failure | Disconnect a vmnic providing IP storage connectivity from the host. | The disconnected path fails, but I/O continues to be processed on the surviving paths. A network connectivity alarm is triggered and an e-mail is sent to the configured e-mail address.
Host storage path restore | Reconnect the vmnic providing IP storage connectivity. | The failed path becomes active and begins processing I/O. Network connectivity alarms clear.
Array storage path failure | Disconnect one network connection from the active SP. | The disconnected paths fail on all hosts, but I/O continues to be processed on the surviving paths.
Management network redundancy | Disconnect the active management network vmnic. | The standby adapter becomes active. Management access to the host is not interrupted. A loss-of-network-redundancy alarm is triggered and an e-mail is sent to the configured e-mail address.

These are just a few examples of test procedures. The actual test procedures will depend on the requirements defined in the conceptual design.

The following is an example outline of a validation test plan:

- Cover page: includes the customer and project names
- Document version log: contains the log of authors and changes made to the document
- Document contacts: includes the subject matter experts involved in the creation of the design
- Table of contents: the index of document sections for quick reference
- List of tables: the index of tables included in the document for quick reference
- List of figures: the index of figures included in the document for quick reference
- Purpose statement: defines the purpose of the document
- Assumption statement: defines any assumptions made in creating the document
- Success criteria: the list of criteria that must be met to validate the successful implementation of the design
- Test procedures: the list of test procedures to follow, including the steps to follow and the expected results


KVM Networking with libvirt

Packt
05 Jul 2017
10 min read
In this article by Konstantin Ivanov, author of the book KVM Virtualization Cookbook, we are going to deploy three different network types, explore the network XML format, and see examples of how to define and manipulate virtual interfaces for the KVM instances.

To connect the virtual machines to the host OS, or to each other, we are going to use the Linux bridge and the Open vSwitch (OVS) daemons, userspace tools, and kernel modules. Both software bridging technologies are great at creating software-defined networks (SDNs) of various complexity, in a consistent and easy-to-manipulate manner. The Linux bridge and OVS both act as a bridge/switch that the virtual interfaces of the KVM guests can connect to. With all this in mind, let's start by learning more about software bridges in Linux.

The Linux bridge

The Linux bridge is a software Layer 2 device that provides some of the functionality of a physical bridge device. It can forward frames between KVM guests, the host OS, and virtual machines running on other servers or networks. The Linux bridge consists of two components: a userspace administration tool that we are going to use in this recipe, and a kernel module that performs all the work of connecting multiple Ethernet segments together. Each software bridge we create can have a number of ports attached to it, where network traffic is forwarded to and from. When creating KVM instances, we can attach their virtual interfaces to the bridge, which is similar to plugging a network cable from a physical server's Network Interface Card (NIC) into a bridge/switch device. Being a Layer 2 device, the Linux bridge works with MAC addresses and maintains a kernel structure, the Content Addressable Memory (CAM) table, to keep track of ports and associated MAC addresses. In this recipe, we are going to create a new Linux bridge and use the brctl utility to manipulate it.

Getting ready

For this recipe we are going to need the following:

- A recent Linux kernel with the 802.1d Ethernet Bridging options enabled. To check whether your kernel is compiled with those features, or exposes them as kernel modules, run:

root@kvm:~# cat /boot/config-`uname -r` | grep -i bridg
# PC-card bridges
CONFIG_BRIDGE_NETFILTER=y
CONFIG_NF_TABLES_BRIDGE=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
# CONFIG_BRIDGE_EBT_ULOG is not set
CONFIG_BRIDGE_EBT_NFLOG=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_SSB_B43_PCI_BRIDGE=y
CONFIG_DVB_DDBRIDGE=m
CONFIG_EDAC_SBRIDGE=m
# VME Bridge Drivers

- The bridge kernel module. To verify that the module is loaded and to obtain more information about its version and features, execute:

root@kvm:~# lsmod | grep bridge
bridge    110925    0
stp       12976     2 garp,bridge
llc       14552     3 stp,garp,bridge
root@kvm:~# modinfo bridge
filename:     /lib/modules/3.13.0-107-generic/kernel/net/bridge/bridge.ko
alias:        rtnl-link-bridge
version:      2.3
license:      GPL
srcversion:   49D4B615F0B11CA696D8623
depends:      stp,llc
intree:       Y
vermagic:     3.13.0-107-generic SMP mod_unload modversions
signer:       Magrathea: Glacier signing key
sig_key:      E1:07:B2:8D:F0:77:39:2F:D6:2D:FD:D7:92:BF:3B:1D:BD:57:0C:D8
sig_hashalgo: sha512

- The bridge-utils package, which provides the tool to create and manipulate the Linux bridge.
- The ability to create new KVM guests using libvirt, or the QEMU utilities.

How to do it...

1. Install the Linux bridge package, if it is not already present:

root@kvm:~# apt install bridge-utils

2. Build a new KVM instance using the raw image:

root@kvm:~# virt-install --name kvm1 --ram 1024 --disk path=/tmp/debian.img,format=raw --graphics vnc,listen=146.20.141.158 --noautoconsole --hvm --import
Starting install...
Creating domain... | 0 B 00:00
Domain creation completed.
You can restart your domain by running:
virsh --connect qemu:///system start kvm1

3. List all available bridge devices:

root@kvm:~# brctl show
bridge name    bridge id          STP enabled    interfaces
virbr0         8000.fe5400559bd6  yes            vnet0

4. Bring the virtual bridge down, delete it, and ensure it has been deleted:

root@kvm:~# ifconfig virbr0 down
root@kvm:~# brctl delbr virbr0
root@kvm:~# brctl show
bridge name    bridge id          STP enabled    interfaces

5. Create a new bridge and bring it up:

root@kvm:~# brctl addbr virbr0
root@kvm:~# brctl show
bridge name    bridge id          STP enabled    interfaces
virbr0         8000.000000000000  no
root@kvm:~# ifconfig virbr0 up

6. Assign an IP address to the bridge:

root@kvm:~# ip addr add 192.168.122.1 dev virbr0
root@kvm:~# ip addr show virbr0
39: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 32:7d:3f:80:d7:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/32 scope global virbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::307d:3fff:fe80:d7c6/64 scope link
       valid_lft forever preferred_lft forever

7. List the virtual interfaces on the host OS:

root@kvm:~# ip a s | grep vnet
38: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 500

8. Add the virtual interface vnet0 to the bridge:

root@kvm:~# brctl addif virbr0 vnet0
root@kvm:~# brctl show virbr0
bridge name    bridge id          STP enabled    interfaces
virbr0         8000.fe5400559bd6  no             vnet0

9. Enable the Spanning Tree Protocol (STP) on the bridge and obtain more information:

root@kvm:~# brctl stp virbr0 on
root@kvm:~# brctl showstp virbr0
virbr0
bridge id              8000.fe5400559bd6
designated root        8000.fe5400559bd6
root port                 0    path cost                 0
max age               20.00    bridge max age        20.00
hello time             2.00    bridge hello time      2.00
forward delay         15.00    bridge forward delay  15.00
ageing time          300.00
hello timer            0.26    tcn timer              0.00
topology change timer  0.00    gc timer              90.89
flags

vnet0 (1)
port id              8001    state             forwarding
designated root      8000.fe5400559bd6    path cost            100
designated bridge    8000.fe5400559bd6    message age timer   0.00
designated port      8001    forward delay timer 0.00
designated cost      0       hold timer          0.00
flags

10. Test connectivity from the KVM instance to the host OS:

root@kvm:~# virsh console kvm1
Connected to domain kvm1
Escape character is ^]
root@debian:~# ip a s eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:55:9b:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.92/24 brd 192.168.122.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe55:9bd6/64 scope link
       valid_lft forever preferred_lft forever
root@debian:~# ping 192.168.122.1 -c 3
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.276 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.226 ms
64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.259 ms
--- 192.168.122.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.226/0.253/0.276/0.027 ms

How it works...

When we first installed and started the libvirt daemon, a few things happened automatically:

- A new Linux bridge was created with the name and IP address defined in the /etc/libvirt/qemu/networks/default.xml configuration file.
- The dnsmasq service was started with the configuration specified in the /var/lib/libvirt/dnsmasq/default.conf file.

Let's examine the default libvirt bridge configuration:

root@kvm:~# cat /etc/libvirt/qemu/networks/default.xml
<network>
  <name>default</name>
  <bridge name="virbr0"/>
  <forward/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
    </dhcp>
  </ip>
</network>

This is the default network that libvirt created for us, specifying the bridge name, IP address, and the IP range used by the DHCP server that was started. We are going to talk about libvirt networking in much more detail later in this article; we show it here to help you understand where all the IP addresses and the bridge name came from. We can see that a DHCP server is running on the host OS, and examine its configuration file, by running:

root@kvm:~# pgrep -lfa dnsmasq
38983 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
root@kvm:~# cat /var/lib/libvirt/dnsmasq/default.conf
##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST. Changes to this configuration should be made using:
##    virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
user=libvirt-dnsmasq
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=192.168.122.2,192.168.122.254
dhcp-no-override
dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases
dhcp-lease-max=253
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts

From the preceding configuration file, notice how the IP address range for the DHCP service and the name of the virtual bridge match what is configured in the default libvirt network file we just saw. With all this in mind, let's step through all the actions we performed earlier:

- In step 1, we installed the userspace tool brctl, which we use to create, configure, and inspect the Linux bridge configuration in the Linux kernel.
- In step 2, we provisioned a new KVM instance using a custom raw image containing the guest OS.
- In step 3, we invoked the bridge utility to list all available bridge devices. From the output we can observe that there is currently one bridge, named virbr0, which libvirt created automatically. Notice that under the interfaces column we can see the vnet0 interface. This is the virtual NIC that was exposed to the host OS when we started the KVM instance, which means the virtual machine is connected to the host bridge.
- In step 4, we first brought the bridge down in order to delete it, then used the brctl command again to remove the bridge and ensure it is no longer present on the host OS.
- In step 5, we recreated the bridge and brought it back up, to demonstrate the steps required to create a new bridge.
- In step 6, we re-assigned the same IP address to the bridge and listed it.
- In steps 7 and 8, we listed all virtual interfaces on the host OS. Since we only have one KVM guest currently running on the server, we only see one virtual interface: vnet0. We then proceeded to add/connect the virtual NIC to the bridge.
- In step 9, we enabled the Spanning Tree Protocol (STP) on the bridge. STP is a Layer 2 protocol that helps prevent network loops when we have redundant network paths. This is especially useful in larger, more complex network topologies, where multiple bridges are connected together.
- Finally, in step 10, we connected to the KVM guest using the console, listed its interface configuration, and ensured we could ping the bridge on the host OS. Notice that the IP address of the guest OS was automatically assigned by the dnsmasq server running on the host, as it is part of the IP range defined in the configuration file we saw earlier.

Summary

We have covered the Linux bridge in detail here. We have deployed three different network types, explored the network XML format, and seen examples of how to define and manipulate virtual interfaces for the KVM instances.
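Building on the default.xml file we examined above, the following is a minimal sketch of defining an additional network from your own XML using the standard virsh net-* commands; the network name, bridge name, and address range are made-up values for illustration (note that omitting the forward element yields an isolated, host-only network):

# Hypothetical example: define and start a second, isolated libvirt network
cat > /tmp/isolated.xml << EOF
<network>
  <name>isolated</name>
  <bridge name="virbr1"/>
  <ip address="192.168.123.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.123.2" end="192.168.123.254"/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define /tmp/isolated.xml   # register the network with libvirt
virsh net-start isolated             # create the bridge and start dnsmasq for it
virsh net-autostart isolated         # start it automatically with the libvirt daemon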


6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them. Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and no one really understands, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems, with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you run tests on. What this all boils down to is that testing becomes much quicker and easier. In theory, then, the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks. If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict. That all being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?

Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents managed in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they will argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps: containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you wouldn't need to go into your application and make wholesale changes - that may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.
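To make the "kill one container and replace it" point concrete, here is a minimal sketch using the standard Docker CLI; the image and container names are made up for illustration:

# Hypothetical example: swap out a single service without touching the rest of the stack
docker build -t shop/payments:v2 ./payments       # build the updated service image
docker stop payments && docker rm payments        # kill the old container
docker run -d --name payments shop/payments:v2   # run the replacement; other services keep running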


VMware vSphere storage, datastores, snapshots

Packt
21 Feb 2018
9 min read
In this article, by Abhilash G B, author of the book VMware vSphere 6.5 Cookbook - Third Edition, we will cover the following:

- Managing VMFS volumes detected as snapshots
- Creating NFSv4.1 datastores with Kerberos authentication
- Enabling storage I/O control

Introduction

Storage is an integral part of any infrastructure. It is used to store the files backing your virtual machines. The most common way to refer to a type of storage presented to a VMware environment is based on the protocol used and the connection type. NFS storage solutions can leverage the existing TCP/IP network infrastructure, hence they are referred to as IP-based storage. Storage I/O Control (SIOC) is a mechanism to ensure a fair share of storage bandwidth allocation to all virtual machines running on shared storage, regardless of the ESXi host the virtual machines are running on.

Managing VMFS volumes detected as snapshots

Some environments maintain copies of the production LUNs as a backup, by replicating them. These replicas are exact copies of the LUNs that were already presented to the ESXi hosts. If for any reason a replicated LUN is presented to an ESXi host, the host will not mount the VMFS volume on the LUN. This is a precaution to prevent data corruption. ESXi identifies each VMFS volume using its signature, denoted by a Universally Unique Identifier (UUID). The UUID is generated when the volume is first created or resignatured, and is stored in the LVM header of the VMFS volume. When an ESXi host scans for new LUN devices and the VMFS volumes on them, it compares the physical device ID (NAA ID) of the LUN with the device ID (NAA ID) value stored in the VMFS volume's LVM header. If it finds a mismatch, it flags the volume as a snapshot volume.

Volumes detected as snapshots are not mounted by default. There are two ways to mount such volumes/datastores:

- Mount by keeping the existing signature intact: this is used when you are attempting to temporarily mount the snapshot volume on an ESXi host that doesn't see the original volume. If you were to attempt mounting the VMFS volume by keeping the existing signature while the host sees the original volume, you would not be allowed to mount the volume and would be warned about the presence of another VMFS volume with the same UUID.
- Mount by generating a new VMFS signature: this has to be used if you are mounting a clone or a snapshot of an existing VMFS datastore to the same host(s). The process of assigning a new signature will update the LVM header with the newly generated UUID and the physical device ID (NAA ID) of the snapshot LUN, and the VMFS volume/datastore will be renamed by prefixing the word "snap" followed by a random number and the name of the original datastore.

Getting ready

Make sure that the original datastore and its LUN are no longer seen by the ESXi host the snapshot is being mounted to.
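The wizard-based procedure follows below. As a command-line alternative, recent ESXi releases also expose snapshot volumes through esxcli; a hedged sketch, run from an ESXi shell, where the volume label is whatever your original datastore is called:

# List VMFS volumes the host has detected as snapshots
esxcli storage vmfs snapshot list
# Either mount one while keeping its existing signature...
esxcli storage vmfs snapshot mount -l "Datastore01"
# ...or write a new signature to it (the datastore gets the snap- prefix)
esxcli storage vmfs snapshot resignature -l "Datastore01"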
How to do it...

The following procedure will help mount a VMFS volume from a LUN detected as a snapshot:

1. Log in to the vCenter Server using the vSphere Web Client and use the key combination Ctrl+Alt+2 to switch to the Hosts and Clusters view.
2. Right-click on the ESXi host the snapshot LUN is mapped to and go to Storage | New Datastore.
3. On the New Datastore wizard, select VMFS as the filesystem type and click Next to continue.
4. On the Name and Device selection screen, select the LUN detected as a snapshot and click Next to continue.
5. On the Mount Option screen, choose to either mount by assigning a new signature or by keeping the existing signature, and click Next to continue.
6. On the Ready to Complete screen, review the settings and click Finish to initiate the operation.

Creating NFSv4.1 datastores with Kerberos authentication

VMware introduced support for NFS 4.1 with vSphere 6.0. vSphere 6.5 added several enhancements:

It now supports AES encryption
Support for IP version 6
Support for Kerberos's integrity checking mechanism

Here, we will learn how to create NFS 4.1 datastores. Although the procedure is similar to NFSv3, there are a few additional steps that need to be performed.

Getting ready

For Kerberos authentication to work, you need to make sure that the ESXi hosts and the NFS server are joined to the Active Directory domain
Create a new, or select an existing, AD user for NFS Kerberos authentication
Configure the NFS server/share to allow access to the AD user chosen for NFS Kerberos authentication

How to do it...

The following procedure will help you mount an NFS datastore using the NFSv4.1 client with Kerberos authentication enabled:

1. Log in to the vCenter Server using the vSphere Web Client and use the key combination Ctrl+Alt+2 to switch to the Hosts and Clusters view. Select the desired ESXi host, navigate to its Configure | System | Authentication Services section, and supply the credentials of the Active Directory user that was chosen for NFS Kerberos authentication.
2. Right-click on the desired ESXi host and go to Storage | New Datastore to bring up the Add Storage wizard.
3. On the New Datastore wizard, select the Type as NFS and click Next to continue.
4. On the Select NFS version screen, select NFS 4.1 and click Next to continue. Keep in mind that it is not recommended to mount an NFS export using both the NFS 3 and NFS 4.1 clients.
5. On the Name and Configuration screen, supply a name for the datastore, the NFS export's folder path, and the NFS server's IP address or FQDN. You can also choose to mount the share as read-only if desired.
6. On the Configure Kerberos Authentication screen, check the Enable Kerberos-based authentication box, choose the type of authentication required, and click Next to continue.
7. On the Ready to Complete screen, review the settings and click Finish to mount the NFS export.
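The same NFS 4.1 mount can also be performed from the ESXi shell. The following sketch assumes the host has already joined Active Directory and that the Kerberos credentials have been set; the server name, share path, and datastore name are placeholders, and the exact flag names can be confirmed with esxcli storage nfs41 add --help on your build:

esxcli storage nfs41 add --hosts=nfs01.example.com --share=/export/ds01 --volume-name=NFS41-DS01 --sec=SEC_KRB5
esxcli storage nfs41 list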
Enabling Storage I/O Control

The use of disk shares will work just fine as long as the datastore is seen by a single ESXi host. Unfortunately, that is not a common case. Datastores are often shared among multiple ESXi hosts. When datastores are shared, you bring more than one local host scheduler into the process of balancing the I/O among the virtual machines. However, these local host schedulers cannot talk to each other, and their visibility is limited to the ESXi hosts they are running on. This easily contributes to a serious problem called the noisy neighbor situation.

The job of SIOC is to enable some form of communication between local host schedulers so that I/O can be balanced between virtual machines running on separate hosts.

How to do it...

The following procedure will help you enable SIOC on a datastore:

1. Connect to the vCenter Server using the Web Client and switch to the Storage view using the key combination Ctrl+Alt+4.
2. Right-click on the desired datastore and go to Configure Storage I/O Control.
3. On the Configure Storage I/O Control window, select the checkbox Enable Storage I/O Control, set a custom congestion threshold (only if needed), and click OK to confirm the settings.
4. With the virtual machine selected from the inventory, navigate to its Configure | General tab and review its datastore capability settings to ensure that SIOC is enabled.

How it works...

As mentioned earlier, SIOC enables communication between these local host schedulers so that I/O can be balanced between virtual machines running on separate hosts. It does so by maintaining a shared file in the datastore that all hosts can read/write/update. When SIOC is enabled on a datastore, it starts monitoring the device latency on the LUN backing the datastore. If the latency crosses the threshold, it throttles the LUN's queue depth on each of the ESXi hosts in an attempt to distribute a fair share of access to the LUN for all the virtual machines issuing the I/O.

The local scheduler on each of the ESXi hosts maintains an iostats file to keep its companion hosts aware of the device I/O statistics observed on the LUN. The file is placed in a directory (naa.xxxxxxxxx) on the same datastore.

For example, say there are six virtual machines running on three different ESXi hosts, accessing a shared LUN. Among the six VMs, four have a normal share value of 1000 and the remaining two have a high (2000) disk share value set on them. These virtual machines have only a single VMDK attached to them. VM-C on host ESX-02 is issuing a large number of I/O operations. Since that is the only VM accessing the shared LUN from that host, it gets the entire queue's bandwidth. This can induce latency in the I/O operations performed by the VMs on the other hosts: ESX-01 and ESX-03. If SIOC detects a latency value greater than the dynamic threshold, it will start throttling the queue depth.

The throttled DQLEN for a VM is calculated as follows:

DQLEN for the VM = (VM's percent of shares) of (queue depth)
Example: 12.5% of 64 → (12.5 * 64)/100 = 8

The throttled DQLEN per host is calculated as follows:

DQLEN of the host = sum of the DQLEN of the VMs on it
Example: VM-A (8) + VM-B (16) = 24

The following diagram shows the effect of SIOC throttling the queue depth.

Summary

In this article we learned how to mount a VMFS volume from a LUN detected as a snapshot, how to mount an NFS datastore using the NFSv4.1 client with Kerberos authentication enabled, and how to enable SIOC on a datastore.

Resources for Article:

Further resources on this subject:

Essentials of VMware vSphere [article]
Working with VMware Infrastructure [article]
Network Virtualization and vSphere [article]

Installation of Oracle VM VirtualBox on Linux

Packt
09 Dec 2013
4 min read
(For more resources related to this topic, see here.)

Basic requirements

The following are the basic requirements for VirtualBox:

Processor: Any recent AMD/Intel processor is fine to run VirtualBox.
Memory: This depends on the size of the RAM required for the host OS, plus the amount of RAM needed by the guest OS. In my test environment, I am using 4 GB RAM and going to run Oracle Enterprise Linux 6.2 as the guest OS.
Hard disk space: VirtualBox itself doesn't need much space, but you need to plan out how much the host OS needs and how much you need for the guest OS.
Host OS: You need to make sure that the host OS is certified to run VirtualBox.

Host OS

At the time of writing this article, VirtualBox runs on the following host operating systems:

Windows

The following Microsoft Windows operating systems are compatible to run as host OS for VirtualBox:

Windows XP, all service packs (32-bit)
Windows Server 2003 (32-bit)
Windows Vista (32-bit and 64-bit)
Windows Server 2008 (32-bit and 64-bit)
Windows 7 (32-bit and 64-bit)
Windows 8 (32-bit and 64-bit)
Windows Server 2012 (64-bit)

Mac OS X

The following Mac OS X operating systems are compatible to run as host OS for VirtualBox:

10.6 (Snow Leopard, 32-bit and 64-bit)
10.7 (Lion, 32-bit and 64-bit)
10.8 (Mountain Lion, 64-bit)

Linux

The following Linux operating systems are compatible to run as host OS for VirtualBox:

Debian GNU/Linux 5.0 (lenny) and 6.0 (squeeze)
Oracle Enterprise Linux 4 and 5, Oracle Linux 6
Red Hat Enterprise Linux 4, 5, and 6
Fedora Core 4 to 17
Gentoo Linux
openSUSE 11.0, 11.1, 11.2, 11.3, 11.4, 12.1, and 12.2
Mandriva 2010 and 2011

Solaris

Both 32-bit and 64-bit versions of Solaris are supported, with some limitations. Please refer to www.virtualbox.org for more information. The following host OS are supported:

Solaris 11 including Solaris 11 Express
Solaris 10 (update 8 and higher)

Guest OS

You need to make sure that the guest OS is certified to run on VirtualBox. VirtualBox supports the following guest operating systems:

Windows 3.x
Windows NT 4.0
Windows 2000
Windows XP
Windows Server 2003
Windows Vista
Windows 7
DOS
Linux (2.4, 2.6, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, and 3.7)
Solaris
OpenSolaris
OpenBSD

Installation

I have downloaded and used the VirtualBox repository to install VirtualBox. Please ensure the desktop or laptop you are installing VirtualBox on is connected to the Internet; this is convenient, though not mandatory. You can install VirtualBox using the RPM command, which generally adds the complexity of finding and installing the dependency packages, or you can have a central yum server to which you can add the VirtualBox repository. In my test lab, my laptop is connected to the Internet and I have used the wget command to download and add the VirtualBox repository file.

Installing dependency packages

Ensure that the required dependency packages are installed to make VirtualBox work.

Installing VirtualBox 4.2.16

Once the preceding dependencies are installed, we are ready to install VirtualBox 4.2.16 using the package manager.

Rebuilding kernel modules

The vboxdrv module is a special kernel module that helps to allocate the physical memory and to gain control of the processor for guest system execution. Without this kernel module, you can still use the VirtualBox Manager to configure virtual machines, but they will not start. This module gets installed on the system by default when you install VirtualBox.
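The screenshots from the original article are not reproduced here. For reference, a typical repository-based installation on an RPM-based host looked roughly like the following at the time of the 4.2 series; treat the URL and package name as examples and verify them against virtualbox.org:

cd /etc/yum.repos.d/
wget http://download.virtualbox.org/virtualbox/rpm/el/virtualbox.repo
yum install -y VirtualBox-4.2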
But to cope with future kernel updates, I suggest that you install Dynamic Kernel Module Support (DKMS). In most cases, this module installation is straightforward. You can use yum, apt-get, and so on, depending on the Linux variant you are using, but ensure that the GNU compiler (GCC), GNU make (make), and the packages containing the header files for your kernel are installed prior to installing DKMS. Also, ensure that all system updates are installed. If the kernel of your Linux host is updated while DKMS is not installed, the kernel module needs to be reinstalled by running the vboxdrv setup script as root. This not only rebuilds the kernel modules but also automatically creates the vboxusers group and the VirtualBox user. If you use Microsoft Windows on your desktop, then download the .exe file, install it, and start it from the desktop shortcut or from the Programs menu.

Start VirtualBox

Run the VirtualBox command on the Linux host. If everything is fine, the Oracle VM VirtualBox Manager screen appears.

Update VirtualBox

You can update VirtualBox through the same package manager used to install it.

Remove VirtualBox

To remove VirtualBox, uninstall the package through your package manager.

Summary

In this article we have covered the installation, update, and removal of Oracle VM VirtualBox in the Linux environment.

Resources for Article:

Further resources on this subject:

Oracle VM Management [Article]
Extending Oracle VM Management [Article]
Troubleshooting and Gotchas in Oracle VM Manager 2.1.2 [Article]
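The individual commands for these steps were shown as screenshots in the original article. On a 4.x-era RPM-based host they typically corresponded to the following; the package name again assumes the 4.2 series:

/etc/init.d/vboxdrv setup
VirtualBox
yum update VirtualBox-4.2
yum remove VirtualBox-4.2

The first rebuilds the kernel module as root after a kernel update, the second starts the Oracle VM VirtualBox Manager, and the last two update and remove the package, respectively.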
A Configuration Guide

Packt
08 Nov 2016
13 min read
In this article by Dimitri Aivaliotis, author of Mastering NGINX - Second Edition, we look at the NGINX configuration file, which follows a very logical format. Learning this format and how to use each section is one of the building blocks that will help you to create a configuration file by hand. Constructing a configuration involves specifying global parameters as well as directives for each individual section. These directives and how they fit into the overall configuration file are the main subject of this article. The goal is to understand how to create the right configuration file to meet your needs.

(For more resources related to this topic, see here.)

The basic configuration format

The basic NGINX configuration file is set up in a number of sections. Each section is delineated as shown:

<section> {
<directive> <parameters>;
}

It is important to note that each directive line ends with a semicolon (;). This marks the end of the line. The curly braces ({}) actually denote a new configuration context, but we will read these as sections for the most part.

The NGINX global configuration parameters

The global section is used to configure the parameters that affect the entire server and is an exception to the format shown in the preceding section. The global section may include configuration directives, such as user and worker_processes, as well as sections, such as events. There are no opening and closing braces ({}) surrounding the global section. The most important configuration directives in the global context are shown in the following list. These configuration directives will be the ones that you will be dealing with for the most part.

user: The user and group under which the worker processes run is configured using this parameter. If the group is omitted, a group name equal to that of the user is used.

worker_processes: This directive sets the number of worker processes that will be started. These processes will handle all the connections made by the clients. Choosing the right number depends on the server environment, the disk subsystem, and the network infrastructure. A good rule of thumb is to set this equal to the number of processor cores for CPU-bound loads and to multiply this number by 1.5 to 2 for I/O-bound loads.

error_log: This directive is where all the errors are written. If no other error_log is given in a separate context, this log file will be used for all errors, globally. A second parameter to this directive indicates the level at which (debug, info, notice, warn, error, crit, alert, and emerg) errors are written to the log. Note that debug-level errors are only available if the --with-debug configuration switch is given at compilation time.

pid: This directive is the file where the process ID of the main process is written, overwriting the compiled-in default.

use: This directive indicates the connection processing method that should be used. This will overwrite the compiled-in default and must be contained in an events context, if used. It will not normally need to be overridden, except when the compiled-in default is found to produce errors over time.

worker_connections: This directive configures the maximum number of simultaneous connections that a worker process may have open. This includes, but is not limited to, client connections and connections to upstream servers.
This is especially important on reverse proxy servers—some additional tuning may be required at the operating system level in order to reach this number of simultaneous connections.

Here is a small example using each of these directives:

# we want nginx to run as user 'www'
user www;
# the load is CPU-bound and we have 12 cores
worker_processes 12;
# explicitly specifying the path to the mandatory error log
error_log /var/log/nginx/error.log;
# also explicitly specifying the path to the pid file
pid /var/run/nginx.pid;
# sets up a new configuration context for the 'events' module
events {
# we're on a Solaris-based system and have determined that nginx
# will stop responding to new requests over time with the default
# connection-processing mechanism, so we switch to the second-best
use /dev/poll;
# the product of this number and the number of worker_processes
# indicates how many simultaneous connections per IP:port pair are
# accepted
worker_connections 2048;
}

This section will be placed at the top of the nginx.conf configuration file.

Using the include files

The include files can be used anywhere in your configuration file to help it be more readable and to enable you to reuse parts of your configuration. To use them, make sure that the files themselves contain syntactically correct NGINX configuration directives and blocks, then specify a path to those files:

include /opt/local/etc/nginx/mime.types;

A wildcard may appear in the path to match multiple files:

include /opt/local/etc/nginx/vhost/*.conf;

If the full path is not given, NGINX will search relative to its main configuration file. A configuration file can easily be tested by calling NGINX as follows:

nginx -t -c <path-to-nginx.conf>

This command will test the configuration, including all files separated out into the include files, for syntax errors.

Sample configuration

The following code is an example of an HTTP configuration section:

http {
include /opt/local/etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
server_names_hash_max_size 1024;
}

This context block would go after any global configuration directives in the nginx.conf file.

The virtual server section

Any context beginning with the keyword server is considered a virtual server section. It describes a logical separation of a set of resources that will be delivered under a different server_name directive. These virtual servers respond to HTTP requests, and are contained within the http section. A virtual server is defined by a combination of the listen and server_name directives. The listen directive defines an IP address/port combination or a path to a UNIX-domain socket:

listen address[:port];
listen port;
listen unix:path;

The listen directive uniquely identifies a socket binding under NGINX. There are a number of optional parameters that listen can take:

default_server: This parameter defines this address/port combination as being the default for the requests bound here.
setfib: This parameter sets the corresponding FIB for the listening socket. It is only supported on FreeBSD and not for UNIX-domain sockets.
backlog: This parameter sets the backlog parameter in the listen() call. It defaults to -1 on FreeBSD and 511 on all other platforms.
rcvbuf: This parameter sets the SO_RCVBUF parameter on the listening socket.
sndbuf: This parameter sets the SO_SNDBUF parameter on the listening socket.
accept_filter: This parameter sets the name of the accept filter to either dataready or httpready. It is only supported on FreeBSD.
deferred: This parameter sets the TCP_DEFER_ACCEPT option to use a deferred accept() call. It is only supported on Linux.
bind: This parameter makes a separate bind() call for this address/port pair. A separate bind() call will be made implicitly if any of the other socket-specific parameters are used.
ipv6only: This parameter sets the value of the IPV6_ONLY parameter. It can only be set on a fresh start and not for UNIX-domain sockets.
ssl: This parameter indicates that only HTTPS connections will be made on this port. It allows for a more compact configuration.
so_keepalive: This parameter configures the TCP keepalive connection for the listening socket.

The server_name directive is fairly straightforward, and it can be used to solve a number of configuration problems. Its default value is "", which means that a server section without a server_name directive will match a request that has no Host header field set. This can be used, for example, to drop requests that lack this header:

server {
listen 80;
return 444;
}

The nonstandard HTTP code 444 used in this example will cause NGINX to immediately close the connection. Besides a normal string, NGINX will accept a wildcard as a parameter to the server_name directive:

The wildcard can replace the subdomain part: *.example.com
The wildcard can replace the top-level domain part: www.example.*
A special form will match the subdomain or the domain itself: .example.com (matches *.example.com as well as example.com)

A regular expression can also be used as a parameter to server_name by prepending the name with a tilde (~):

server_name ~^www\.example\.com$;
server_name ~^www(\d+)\.example\.(com)$;

The latter form is an example using captures, which can later be referenced (as $1, $2, and so on) in further configuration directives. NGINX uses the following logic when determining which virtual server should serve a specific request:

1. Match the IP address and port to the listen directive.
2. Match the Host header field against the server_name directive as a string.
3. Match the Host header field against the server_name directive with a wildcard at the beginning of the string.
4. Match the Host header field against the server_name directive with a wildcard at the end of the string.
5. Match the Host header field against the server_name directive as a regular expression.
6. If all the Host header matches fail, direct to the listen directive marked as default_server.
7. If all the Host header matches fail and there is no default_server, direct to the first server with a listen directive that satisfies step 1.

(A flowchart illustrating this logic accompanies the original article.)

The default_server parameter can be used to handle requests that would otherwise go unhandled. It is therefore recommended to always set default_server explicitly, so that these unhandled requests will be handled in a defined manner; an example follows this paragraph. Besides this usage, default_server may also be helpful in configuring a number of virtual servers with the same listen directive. Any directives set here will be the same for all matching server blocks.

Locations – where, when, and how

The location directive may be used within a virtual server section and indicates a URI that comes either from the client or from an internal redirect. Locations may be nested, with a few exceptions. They are used for processing requests with as specific a configuration as possible.
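As a concrete illustration of that recommendation, a minimal catch-all server can combine default_server with the 444 technique shown earlier, so that any request not matched by another virtual server is silently dropped:

server {
# handle any request that no other virtual server matched
listen 80 default_server;
return 444;
}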
A location is defined as follows:

location [modifier] uri {...}

Or, it can be defined for a named location:

location @name {…}

A named location is only reachable from an internal redirect. It preserves the URI as it was before entering the location block. It may only be defined at the server context level. The modifiers affect the processing of a location in the following way:

=: This modifier uses an exact match and terminates the search.
~: This modifier uses case-sensitive regular expression matching.
~*: This modifier uses case-insensitive regular expression matching.
^~: This modifier stops processing before regular expressions are checked for a match of this location's string, if it's the most specific match. Note that this is not a regular expression match—its purpose is to preempt regular expression matching.

When a request comes in, the URI is checked against the most specific location as follows:

Locations without a regular expression are searched for the most-specific match, independent of the order in which they are defined.
Regular expressions are matched in the order in which they are found in the configuration file. The regular expression search is terminated on the first match. The most-specific location match is then used for request processing.

The comparison match described here is against decoded URIs; for example, a %20 instance in a URI will match against a ' ' (space) specified in a location. A named location may only be used by internally redirected requests. The following directives are found only within a location:

alias: This directive defines another name for the location, as found on the filesystem. If the location is specified with a regular expression, alias should reference captures defined in that regular expression. The alias directive replaces the part of the URI matched by the location, such that the rest of the URI not matched will be searched for in that filesystem location. Using the alias directive is fragile when moving bits of the configuration around, so using the root directive is preferred, unless the URI needs to be modified in order to find the file.
internal: This directive specifies a location that can only be used for internal requests (redirects defined in other directives, rewrite requests, error pages, and so on).
limit_except: This directive limits a location to the specified HTTP verb(s) (GET also includes HEAD).

Additionally, a number of directives found in the http section may also be specified in a location. Refer to Appendix A, Directive Reference, for a complete list. The try_files directive deserves special mention here. It may also be used in a server context, but will most often be found in a location. The try_files directive will do just that—try files in the order given as parameters; the first match wins. It is often used to match potential files from a variable and then pass processing to a named location, as shown in the following example:

location / {
try_files $uri $uri/ @mongrel;
}
location @mongrel {
proxy_pass http://appserver;
}

Here, an implicit directory index is tried if the given URI is not found as a file, and then processing is passed on to appserver via a proxy. We will explore how best to use location, try_files, and proxy_pass to solve specific problems throughout the rest of the article.
Locations may be nested except in the following situations:

When the prefix is =
When the location is a named location

Best practice dictates that regular expression locations be nested inside string-based locations. An example of this is as follows:

# first, we enter through the root
location / {
# then we find a most-specific substring
# note that this is not a regular expression
location ^~ /css {
# here is the regular expression that then gets matched
location ~* /css/.*\.css$ {
}
}
}

Summary

In this article, we saw how the NGINX configuration file is built. Its modular nature is a reflection, in part, of the modularity of NGINX itself. A global configuration block is responsible for all aspects that affect the running of NGINX as a whole. There is a separate configuration section for each protocol that NGINX is responsible for handling. We may further define how each request is to be handled by specifying servers within those protocol configuration contexts (either http or mail) so that requests are routed to a specific IP address/port. Within the http context, locations are then used to match the URI of the request. These locations may be nested, or otherwise ordered, to ensure that requests get routed to the right areas of the filesystem or application server.

Resources for Article:

Further resources on this subject:

Zabbix Configuration [article]
Configuring the essential networking services provided by pfSense [article]
Configuring a MySQL linked server on SQL Server 2008 [article]

Troubleshooting techniques in vRealize Operations components

Vijin Boricha
06 Jun 2018
11 min read
vRealize Operations Manager helps in managing the health, efficiency, and compliance of virtualized environments. In this tutorial, we cover troubleshooting techniques for vRealize Operations. Let's see what we can do to monitor and troubleshoot the key vRealize Operations components and services. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Troubleshooting services

Let's first take a look at some of the most important vRealize Operations services.

The Apache2 service

vRealize Operations internally uses the Apache2 web server to host the Admin UI, Product UI, and Suite API. A useful service log location:

access_log & error_log, in /storage/log/var/log/apache2: stores log information for the Apache service

The Watchdog service

Watchdog is a vRealize Operations service that maintains the necessary daemons/services and attempts to restart them as necessary should there be any failure. vcops-watchdog is a Python script that runs every 5 minutes by means of the vcops-watchdog-daemon, with the purpose of monitoring the various vRealize Operations services, including CaSA. The Watchdog service performs the following checks:

PID file of the service
Service status

A useful service log location:

vcops-watchdog.log, in /usr/lib/vmware-vcops/user/log/vcops-watchdog/: stores log information for the Watchdog service

The Collector service

The Collector service sends a heartbeat to the Controller every 30 seconds. By default, the Collector will wait 30 minutes for adapters to synchronize. The Collector properties, including enabling or disabling Self Protection, can be configured in the collector.properties file located in /usr/lib/vmware-vcops/user/conf/collector. A useful service log location:

collector.log, in /storage/log/vcops/log/: stores log information for the Collector service

The Controller service

The Controller service is part of the analytics engine. The Controller does the decision making on where new objects should reside, and it manages the storage and retrieval of the inventory of objects within the system. The Controller service determines the following:

It monitors the Collector status every minute
How long a deleted resource is available in the inventory
How long a non-existing resource is stored in the database

A useful service file location:

controller.properties, in /usr/lib/vmware-vcops/user/conf/controller/: stores properties information for the Controller service

Databases

As we learned, vRealize Operations contains quite a few databases, all of which are of great importance for the function of the product. Let's take a deeper look into those databases.

Cassandra DB

Currently, Cassandra DB stores the following information:

User preferences and configuration
Alert definitions
Customizations
Dashboards, policies, and views
Reports and licensing
Shard maps
Activities

Cassandra stores all the information which we see in the content folder; basically, any settings which are applied globally. You are able to log in to the Cassandra database from any analytics node; the information is the same across nodes. There are two ways to connect to the Cassandra database. The first is cqlsh, an inbuilt command-line tool used to query the data within Cassandra in a SQL-like fashion (it reads its settings from the cqlshrc file).
To connect to the DB, run the following from the command line (SSH):

$VMWARE_PYTHON_BIN $ALIVE_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $ALIVE_BASE/user/conf/cassandra/cqlshrc

Once you are connected to the DB, navigate to the globalpersistence keyspace using the following command:

vcops_user@cqlsh> use globalpersistence;

The second is the nodetool command-line tool, covered later in this section.

Once you are logged on to the Cassandra DB, you can run the following commands to see information:

describe tables: lists all the relations (tables) in the current database instance
describe <table_name>: lists the content of that particular table
exit: exits the Cassandra command line
select: selects column data from a table
delete: deletes column data from a table

Some of the important tables in Cassandra are:

activity_2_tbl: stores all the activities
tbl_2a8b303a3ed03a4ebae2700cbfae90bf: stores the shard mapping information of an object (the table name may differ in each environment)
supermetric: stores the defined super metrics
policy: stores all the defined policies
auth: stores all the user details in the cluster
global_settings: stores all the configured global settings
namespace_to_classtype: indicates what type of data is stored in which table under Cassandra
symptomproblemdefinition: stores all the defined symptoms
certificates: stores all the adapter and data source certificates

The Cassandra database has the following configuration files:

Cassandra.yaml: /usr/lib/vmware-vcops/user/conf/cassandra/
vcops_cassandra.properties: /usr/lib/vmware-vcops/user/conf/cassandra/
Cassandra conf scripts: /usr/lib/vmware_vcopssuite/utilities/vmware/vcops/cassandra/

The Cassandra.yaml file stores certain information, such as the default location to save data (/storage/db/vcops/cassandra/data). The file contains information about all the nodes. When a new node joins the cluster, it refers to this file to make sure it contacts the right node (the master node). It also holds all the SSL certificate information. Cassandra is started and stopped via the CaSA service, but just because CaSA is running does not mean that the Cassandra service is necessarily running. The service command to check the status of the Cassandra DB service from the command line (SSH) is:

service vmware-vcops status cassandra

The cassandraservice.sh script is located in:

$VCOPS_BASE/cassandra/bin/

Here are some useful Cassandra DB log locations, all under /usr/lib/vmware-vcops/user/log/cassandra:

System.log: stores all Cassandra-related activities
watchdog_monitor.log: stores Cassandra Watchdog logs
wrapper.log: stores Cassandra start- and stop-related logs
configure.log: stores logs related to the vROps Python scripts
initial_cluster_setup.log: stores logs related to the vROps Python scripts
validate_cluster.log: stores logs related to the vROps Python scripts

To enable debug logging for the Cassandra database, edit the logback.xml XML file located in /usr/lib/vmware-vcops/user/conf/cassandra/ and change <root level="INFO"> to <root level="DEBUG">. Note that System.log will not show the debug logs for Cassandra.
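As a quick illustration, once connected with cqlsh you can inspect any of the tables listed above with ordinary CQL; the table name here is taken from the list above, and the LIMIT clause simply keeps the output short:

use globalpersistence;
describe tables;
select * from supermetric limit 5;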
If you experience any of the following issues, this may be an indicator that the database has performance issues, and you may need to take the appropriate steps to resolve them:

Relationship changes are not reflected immediately
Deleted objects still show up in the tree
Views and Reports are slow to process and take a long time to open
Logging on to the Product UI takes a long time for any user
An alert that was supposed to trigger never fires

Cassandra can tolerate 5 ms of latency at any given point in time. You can validate the cluster by running the following command from the command line (SSH). First, navigate to the following folder:

/usr/lib/vmware-vcopssuite/utilities/

Then run the following command to validate the cluster:

$VMWARE_PYTHON_BIN -m vmware.vcops.cassandra.validate_cluster <IP_ADDRESS_1 IP_ADDRESS_2>

You can also use nodetool to perform a health check, and possibly resolve database load issues. The command to check the load status of the activity tables in vRealize Operations version 6.3 and older is as follows:

$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool --port 9008 status

For the 6.5 release and newer, VMware added the requirement of using a maintenanceAdmin user along with a password file. The new command is as follows:

$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool -p 9008 --ssl -u maintenanceAdmin --password-file /usr/lib/vmware-vcops/user/conf/jmxremote.password status

Regardless of which method you choose to perform the health check, if any of the nodes has over 600 MB of load, you should consult with VMware Global Support Services on the next steps to take and how to alleviate the load issues.

Central (Repl DB)

The Postgres database was introduced in 6.1 and has two instances as of version 6.6. The Central Postgres DB, also called repl, and the Alerts/HIS Postgres DB, also called data, are two separate database instances under the database called vcopsdb. The central DB exists only on the master and the master replica nodes when HA is enabled. It is accessible via port 5433 and is located in /storage/db/vcops/vpostgres/repl. Currently, the database stores the resources inventory.

You can connect to the central DB from the command line (SSH). Log in on the analytics node you wish to connect to and run:

su - postgres

The command should not prompt for a password if run as root. Once logged in, connect to the database instance by running:

/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5433

The service command to start the central DB from the command line (SSH) is:

service vpostgres-repl start

A useful log location:

postgresql-<xx>.log, in /storage/db/vcops/vpostgres/data/pg_log: provides information on Postgres database cleanup and other disk-related information

Alerts/HIS (Data) DB

The Alerts DB is called data on all the data nodes, including the master and master replica nodes. It was likewise introduced in 6.1. Starting from 6.2, the Historical Inventory Service xDB was merged with the Alerts DB. It is accessible via port 5432 and is located in /storage/db/vcops/vpostgres/data. Currently, the database stores:

Alert and alarm history
History of resource property data
History of resource relationships

You can connect to the Alerts DB from the command line (SSH). Log in on the analytics node you wish to connect to and run:

su - postgres

The command should not prompt for a password if run as root.
Once logged in, connect to the database instance by running:

/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5432

The service command to start the Alerts DB from the command line (SSH) is:

service vpostgres start

FSDB

The File System Database (FSDB) contains all raw time series metrics and super metrics data for the discovered resources. What is FSDB in vRealize Operations Manager?

FSDB is a GemFire server and runs inside the analytics JVM.
FSDB in vRealize Operations uses the Sharding Manager to distribute data between nodes (new objects). (We will discuss what vRealize Operations cluster nodes are later in this chapter.)
The File System Database is available on all the nodes of a vROps cluster deployment. It has its own properties file.
FSDB stores the data (time series data) collected by adapters, as well as data which is generated/calculated (system, super, badge, CIQ metrics, and so on) based on the analysis of that data.

If you are troubleshooting FSDB performance issues, you should start from the Self Performance Details dashboard; more precisely, the FSDB Data Processing widget. We covered both of these earlier in this chapter. You can also take a look at the metrics provided by the FSDB metric picker. You can access it by navigating to the Environment tab, then vRealize Operations Clusters, selecting a node, and selecting vRealize Operations Manager FSDB. Then, select the All Metrics tab.

You can check the synchronization state of the FSDB, and thereby the overall health of the cluster, by running the following command from the command line (SSH):

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getShardStateMappingInfo

By restarting the FSDB, you can trigger synchronization of all the data, fetching missing data from the other FSDBs. Synchronization takes place only when you have vRealize Operations HA configured.

Here are some useful FSDB log locations, all under /usr/lib/vmware-vcops/user/log:

Analytics-<UUID>.log: used to check the Sharding Module functionality; can trace which objects have synced
ShardingManager_<UUID>.log: can be used to get the total time the sync took
fsdb-accessor-<UUID>.log: provides information on FSDB database cleanup and other disk-related information

Platform-cli

platform-cli is a tool by which we can get information from various databases, including the GemFire cache, Cassandra, and the Alerts/HIS persistence databases. In order to run this Python script, you need to run the following command:

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py

The following example of using this command will list all the resources in ascending order and also show you which shard each is stored on:

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getResourceToShardMapping

We discussed how to troubleshoot some of the most important components, such as services and databases, along with troubleshooting failures in the upgrade process. To know more about self-monitoring dashboards and infrastructure compliance, check out the book Mastering vRealize Operations Manager - Second Edition.

What to expect from vSphere 6.7
Introduction to SDN – Transformation from legacy to SDN
Learning Basic PowerCLI Concepts

Proxmox VE Fundamentals

Packt
04 Apr 2016
12 min read
In this article, written by Rik Goldman, author of the book Learning Proxmox VE, we introduce Proxmox Virtual Environment (PVE), a mature, complete, well-supported, enterprise-class virtualization environment for servers. It is an open source tool—based on the Debian GNU/Linux distribution—that manages containers, virtual machines, storage, virtualized networks, and high-availability clustering through a well-designed, web-based interface or via the command-line interface.

(For more resources related to this topic, see here.)

Developers provided the first stable release of Proxmox VE in 2008; four years and eight point releases later, ZDNet's Ken Hess boldly, but quite sensibly, declared Proxmox VE "the ultimate hypervisor" (http://www.zdnet.com/article/proxmox-the-ultimate-hypervisor/). Four years after that, PVE is on version 4.1, in use on at least 90,000 hosts, with more than 500 commercial customers in 140 countries; the web-based administrative interface itself is translated into nineteen languages.

This article will explore the fundamental technologies underlying PVE's hypervisor features: LXC, KVM, and QEMU. To do so, we will develop a working understanding of virtual machines, containers, and their appropriate use. We will cover the following topics:

Proxmox VE in brief
Virtualization and containerization with PVE
Proxmox VE virtual machines, KVM, and QEMU
Containerization with PVE and LXC

Proxmox VE in brief

With Proxmox VE, Proxmox Server Solutions GmbH (https://www.proxmox.com/en/about) provides us with an enterprise-ready, open source type II hypervisor. Some of the features that make Proxmox VE such a strong enterprise candidate follow.

The license for Proxmox VE is very deliberately the GNU Affero General Public License (V3) (https://www.gnu.org/licenses/agpl-3.0.html). From among the many free and open source compatible licenses available, this is a significant choice because it is "specifically designed to ensure cooperation with the community in the case of network server software."

PVE is primarily administered from an integrated web interface, or from the command line locally or via SSH. Consequently, there is no need for a separate management server and the associated expenditure. In this way, Proxmox VE significantly contrasts with alternative enterprise virtualization solutions by vendors such as VMware. Proxmox VE instances/nodes can be incorporated into PVE clusters and centrally administered from a unified web interface.

Proxmox VE provides for live migration—the movement of a virtual machine or container from one cluster node to another without any disruption of services. This is a rather unique feature of PVE, not common to competing products.

Contrasting Proxmox VE and VMware vSphere features:

Hardware requirements: Proxmox VE is flexible; VMware vSphere requires strict compliance with its HCL.
Integrated management interface: Proxmox VE is web- and shell-based (browser and SSH); VMware vSphere has none and requires a dedicated management server at additional cost.
Simple subscription structure: Proxmox VE offers one, based on the number of premium support tickets per year and the CPU socket count; VMware vSphere does not.
High availability: provided by both.
VM live migration: provided by both.
Container support: Proxmox VE yes; VMware vSphere no.
Virtual machine OS support: Windows and Linux on Proxmox VE; Windows, Linux, and Unix on VMware vSphere.
Community support: Proxmox VE yes; VMware vSphere no.
Live VM snapshots: provided by both.

For a complete catalog of features, see the Proxmox VE datasheet at https://www.proxmox.com/images/download/pve/docs/Proxmox-VE-Datasheet.pdf.
Like its competitors, PVE is a hypervisor: a typical hypervisor is software that creates, runs, configures, and manages virtual machines based on an administrator's or engineer's choices. PVE is known as a type II hypervisor because the virtualization layer is built upon an operating system. By contrast, a type I hypervisor (such as VMware's ESXi) runs directly on bare metal, without the mediation of an operating system; it has no additional function beyond managing virtualization and the physical hardware.

As a type II hypervisor, Proxmox VE is built on the Debian project. Debian is a GNU/Linux distribution renowned for its reliability, its commitment to security, and its thriving and dedicated community of contributing developers. In Proxmox VE's case, since the release of PVE 4.0, the underlying operating system has been Debian "Jessie."

Debian-based GNU/Linux distributions are arguably the most popular GNU/Linux distributions for the desktop. One characteristic that distinguishes Debian from competing distributions is its release policy: Debian releases only when its development community can stand behind it for its stability, security, and usability. Debian does not distinguish between long-term support releases and regular releases, as some other distributions do. Instead, all Debian releases receive strong support and critical updates through the first year following the next release. (Since 2007, a major release of Debian has been made about every two years; Debian 8, Jessie, was released just about on schedule in 2015.) Proxmox VE's reliance on Debian is thus a testament to its commitment to these values: stability, security, and usability over scheduled releases that favor cutting-edge features.

PVE provides its virtualization functionality through three open technologies, and through the efficiency with which they're integrated by its administrative web interface:

LXC
KVM
QEMU

To understand how this foundation serves Proxmox VE, we must first be able to clearly understand the relationship between virtualization (or, specifically, hardware virtualization) and containerization (OS virtualization). As we proceed, their respective use cases should become clear.

Virtualization and containerization with Proxmox VE

It is correct to ultimately understand containerization as a type of virtualization. However, here we'll first conceptually distinguish a virtual machine from a container by focusing on contrasting characteristics. Simply put, virtualization is a technique through which we provide fully functional computing resources without a demand for the resources' physical organization, locations, or relative proximity. Briefly put, virtualization technology allows you to share and allocate the resources of a physical computer into multiple execution environments. Without context, virtualization is a vague term, encapsulating the abstraction of such resources as storage, networks, servers, desktop environments, and even applications from their concrete hardware requirements through software implementation solutions called hypervisors.
Virtualization thus affords us more flexibility, more functionality, and a significant positive impact on our budgets—often realized with merely the resources we have at hand. In terms of PVE, virtualization most commonly refers to the abstraction of all aspects of a discrete computing system from its hardware. In this context, virtualization is the creation, in other words, of a virtual machine or VM, with its own operating system and applications.

A VM may be initially understood as a computer that has the same functionality as a physical machine. Likewise, it may be incorporated into, and communicated with via, a network exactly as a machine with physical hardware would be. Put another way, from inside a VM we will experience no difference by which we could distinguish it from a physical computer. The virtual machine, moreover, doesn't have the physical footprint of its physical counterparts. The hardware it relies on is, in fact, provided by software that borrows hardware resources from a host installed on a physical machine (or bare metal). Nevertheless, the software components of the virtual machine, from the applications to the operating system, are distinctly separated from those of the host machine. This advantage is realized when it comes to allocating physical space for resources. For example, we may have a PVE server running a web server, database server, firewall, and log management system—all as discrete virtual machines. Rather than consuming the physical space, resources, and labor of maintaining four physical machines, we simply make physical room for the single Proxmox VE server and configure an appropriate virtual LAN as necessary.

In a white paper entitled Putting Server Virtualization to Work, AMD articulates well the benefits of virtualization to businesses and developers (https://www.amd.com/Documents/32951B_Virtual_WP.pdf):

Top 5 business benefits of virtualization:

Increases server utilization
Improves service levels
Streamlines manageability and security
Decreases hardware costs
Reduces facility costs

The benefits of virtualization for a development and test environment:

Lowers capital and space requirements
Lowers power and cooling costs
Increases efficiencies through shorter test cycles
Faster time-to-market

To these benefits, let's add portability and encapsulation: the unique ability to migrate a live VM from one PVE host to another—without suffering a service outage. Proxmox VE makes the creation and control of virtual machines possible through the combined use of two free and open source technologies: Kernel-based Virtual Machine (KVM) and Quick Emulator (QEMU). Used together, we refer to this integration of tools as KVM-QEMU.

KVM

KVM has been an integral part of the Linux kernel since February 2007. This kernel module allows GNU/Linux users and administrators to take advantage of an architecture's hardware virtualization extensions; for our purposes, these extensions are AMD's AMD-V and Intel's VT-x for the x86_64 architecture. To really make the most of Proxmox VE's feature set, you'll therefore very much want to install on an x86_64 machine with a CPU with integrated virtualization extensions. For a full list of AMD and Intel processors supported by KVM, visit Intel at http://ark.intel.com/Products/VirtualizationTechnology or AMD at http://support.amd.com/en-us/kb-articles/Pages/GPU120AMDRVICPUsHyperVWin8.aspx.

QEMU

QEMU provides an emulation and virtualization interface that can be scripted or otherwise controlled by a user.
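Before committing to hardware, it is worth confirming that the host CPU actually exposes these extensions. On any GNU/Linux system, a non-zero count from the following check indicates that Intel VT-x (vmx) or AMD-V (svm) is available:

egrep -c '(vmx|svm)' /proc/cpuinfo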
(A figure in the original article visualizes the relationship between KVM and QEMU.)

Without Proxmox VE, we could essentially define the hardware, create a virtual disk, and start and stop a virtualized server from the command line using QEMU. Alternatively, we could rely on any one of an array of GUI frontends for QEMU (a list of GUIs available for various platforms can be found at http://wiki.qemu.org/Links#GUI_Front_Ends). Of course, working with these solutions is productive only if you're interested in what goes on behind the scenes in PVE when virtual machines are defined. Proxmox VE's management of virtual machines is itself managing QEMU through its API.

Managing QEMU from the command line can be tedious. The following is a line from a script that launched Raspbian, a Debian remix intended for the architecture of the Raspberry Pi, on an x86 Intel machine running Ubuntu. When we see how easy it is to manage VMs from Proxmox VE's administrative interfaces, we'll sincerely appreciate that relative simplicity:

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda ./$raspbian_img -hdb swap

If you're familiar with QEMU's emulation features, it's perhaps important to note that we can't manage emulation through the tools and features Proxmox VE provides—despite its reliance on QEMU. From a bash shell provided by Debian, it's possible; however, the emulation can't be controlled through PVE's administration and management interfaces.

Containerization with Proxmox VE

Containers are a class of virtual machines (as containerization has enjoyed a renaissance since 2005, the term OS virtualization has become synonymous with containerization and is often used for clarity). However, by way of contrast with VMs, containers share operating system components, such as libraries and binaries, with the host operating system; a virtual machine does not.

(A figure in the original article visually contrasts virtual machines with containers.)

The container advantage

This arrangement potentially allows a container to run leaner and with fewer hardware resources borrowed from the host. For many authors, pundits, and users, containers also offer a demonstrable advantage in terms of speed and efficiency. (However, it should be noted here that as resources such as RAM and more powerful CPUs become cheaper, this advantage will diminish.)

The Proxmox VE container is made possible through LXC from version 4.0 onward (it was made possible through OpenVZ in previous PVE versions). LXC is the third fundamental technology serving Proxmox VE's ultimate interest. Like KVM and QEMU, LXC (or Linux Containers) is an open source technology. It allows a host to run, and an administrator to manage, multiple operating system instances as isolated containers on a single physical host. Conceptually, then, a container very clearly represents a class of virtualization rather than an opposing concept. Nevertheless, it's helpful to maintain a clear distinction between a virtual machine and a container as we come to terms with PVE. The ideal implementation of a Proxmox VE guest is contingent on our distinguishing and choosing between a virtual-machine solution and a container solution. Since Proxmox VE containers share components of the host operating system and can offer advantages in terms of efficiency, this text will guide you through the creation of containers whenever the intended guest can be fully realized with Debian Jessie as our hypervisor's operating system without sacrificing features.
When our intent is a guest running a Microsoft Windows operating system, for example, a Proxmox VE container ceases to be a solution. In such a case, we turn instead to creating a virtual machine. We must rely on a VM precisely because the operating system components that Debian can share with a Linux container are not components a Microsoft Windows operating system can make use of.

Summary

In this article, we have come to terms with the three open source technologies that provide Proxmox VE's foundational features: containerization and virtualization with LXC, KVM, and QEMU. Along the way, we've come to understand that containers, while being a type of virtualization, have characteristics that distinguish them from virtual machines. These differences will be crucial as we determine which technology to rely on for a virtual server solution with Proxmox VE.

Resources for Article:

Further resources on this subject:

Deploying App-V 5 in a Virtual Environment [article]
Setting Up a Spark Virtual Environment [article]
Basic Concepts of Proxmox Virtual Environment [article]

Cloning and Snapshots in VMware Workstation

Packt
26 Aug 2013
10 min read
In this article by Sander van Vugt, author of VMware Workstation – No Experience Necessary, you'll learn to work with cloning and snapshot tools. In a test environment, it is often necessary to deploy virtual machines rapidly and to revert to a previous state easily; VMware Workstation provides all the tools required for these tasks. (For more resources related to this topic, see here.)

Understanding when to apply which tools

A snapshot is a picture of the state of a virtual machine at a given moment. As a virtual machine often requires a lot of work before a desired state is reached, it is a good idea to take a picture of that exact state. If something goes wrong at a later stage, having a snapshot makes it possible to revert easily to the previous state of the virtual machine. So the base concept of working with snapshots is making it easier to revert to a previous state.

A clone is a copy of a virtual machine. Using clones is convenient if several virtual machines are needed with more or less the same configuration. By cloning a virtual machine, you make a copy of the actual state of the machine; after making the clone, you just have to modify the properties that need to be unique on each machine.

In some ways, clones and snapshots are closely related: you can create a clone of a snapshot of a virtual machine, but you can also clone the current state of a virtual machine, which in fact creates a snapshot of it. To understand this, you need to understand the difference between a linked clone and a full clone. In a linked clone, only modifications are stored. This means that if something happens to the original state of the virtual machine (for example, the VM files get corrupted), the linked clone gets corrupted as well. It is, however, a very efficient approach with regard to available disk space: as only modifications are stored, the disk space requirement is minimal. Creating a linked clone is also a very fast process. A full clone is a complete copy of a virtual machine. Creating a full clone is a much longer process, as the entire virtual machine disk has to be copied over. It requires more disk space as well, but the benefit is that a full clone is an independent virtual machine. Therefore, you're better off with full clones if you need maximum flexibility.

Working with snapshots

In this section, you'll learn how to create snapshots of virtual machines and how to use the Snapshot Manager to manage a setup where different snapshots are used.

Creating snapshots

To create a snapshot, you don't have to prepare the virtual machine in any way. You can create a snapshot irrespective of the actual state of the virtual machine, so it doesn't matter whether it is currently active. If the machine is powered on, the current state of the virtual machine memory is included in the snapshot as well. This is a useful feature, as it allows you to return to the exact state the machine was in while taking the snapshot. To create a snapshot of a virtual machine, select the virtual machine first. Then, from the VM menu, navigate to Snapshot | Take snapshot. Here you are presented with a small dialog box where you can enter a short description of the snapshot.
You should always enter a description: it may be clear now what the snapshot is being used for, but you probably won't remember anymore when you look at the virtual machine a few months later. A clear description also makes it easier to identify the right snapshot in the Snapshot Manager. To make identifying the snapshot easier at a later stage, enter a clear description of what it is being used for.

Once the snapshot process has started, it will take a while to complete. You will see a progress bar in the lower-left part of the virtual machine window if it has been activated. In theory, you can just continue working in the virtual machine; in practice, you'll notice that it is slow and sometimes even very unresponsive. It's better to wait a while and give the snapshot process a few minutes to complete. The actual snapshot files will be created in the directory where the VMDK files of the virtual machine are stored. For each VMDK file, you'll find a snapshot file as well. You'll notice that the snapshot file is smaller, as it only contains the modifications made since the last snapshot was taken; if this is the first snapshot you have taken, it contains the differences from the original virtual machine. For each VMDK file, a corresponding snapshot file is created.

Working with the Snapshot Manager

The Snapshot Manager allows you to work with snapshots in the most flexible way. You can use it to revert to any virtual machine state, start from there, and build a completely different configuration, so you can create two development branches based on a specific snapshot state and decide which solution fits you best. You can find the Snapshot Manager by opening the virtual machine you want to manage and navigating to VM | Snapshot | Snapshot Manager. You'll now see the Snapshot Manager with all the snapshots that have been created for this virtual machine. The Snapshot Manager allows you to revert to any state of a virtual machine. Working with snapshots from the Snapshot Manager is not hard at all: just select the snapshot that you want to start from and restore it (irrespective of whether other snapshots have already been created based on the selected snapshot). Once restored, you can continue working on the virtual machine from the selected snapshot state.

By default, the Snapshot Manager doesn't show autoprotect snapshots. Even though the Snapshot Manager has an option to display autoprotect snapshots as well, it is probably not a good idea to use it. You'll typically use the snapshots in the Snapshot Manager to walk back a clearly defined path through the snapshots on your system. With autoprotect there isn't really a plan and, even more importantly, the snapshots are removed automatically. Therefore, make sure you never create a snapshot that is based on an autoprotect snapshot. A command-line sketch of the snapshot workflow follows.
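The snapshot workflow sketched above can also be scripted. The following is a minimal, hedged example using the vmrun utility that ships with VMware Workstation; the .vmx path and snapshot name are illustrative assumptions, and exact behavior can vary between Workstation releases:

# Take a snapshot (works whether the VM is powered on or off):
vmrun -T ws snapshot "C:\VMs\testvm\testvm.vmx" "before-patching"

# List the snapshots that exist for this VM:
vmrun -T ws listSnapshots "C:\VMs\testvm\testvm.vmx"

# Revert to the snapshot taken above; depending on the snapshot's power
# state, you may need to start the VM again afterwards with 'vmrun start':
vmrun -T ws revertToSnapshot "C:\VMs\testvm\testvm.vmx" "before-patching"

This is handy when you want to take the same snapshot across many test machines without clicking through the GUI.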
Creating clones

A snapshot captures a virtual machine in a specific state. When working with snapshots, there is still only one virtual machine, which can easily be restored to a specific state. The major difference between a clone and a snapshot is that a clone is a new virtual machine, independent of the original. Even though there is some relation between a snapshot and a clone (for instance, you can create a clone based on a snapshot), a clone basically is a new virtual machine. This means that once you have created a clone of a virtual machine, you can even start creating new snapshots of that cloned virtual machine.

When creating clones, you really need to think about what you want to use them for. If you just want to use them for your own convenience, a linked clone is the best solution: it can be created very fast, and it takes the least possible amount of disk space while still offering full functionality. The major limitation, though, is that you'll never be able to copy it as an independent virtual machine to another computer. If you need to do that, you'll need a full clone.

There are different ways to create clones. No matter which method you use, you have to make sure that the virtual machine is shut down before you can make the clone. This requirement exists because the virtual machine files cannot be modified while the cloning is in progress. The most important reason for this is that VMware Workstation is used on a Linux or Windows host, where the filesystems do not allow virtual machine files to be modified by different processes at the same time. You'll need a VMFS filesystem in a VMware ESXi environment if you want to be able to clone a virtual machine without shutting it down first.

The most direct way of creating a clone is by using the Manage option in the VM menu. From this menu, select the Clone option to start the Clone wizard. The first step of the wizard asks what you want to create the clone from: either the actual state of the virtual machine or a snapshot, if one has been created. If no suitable snapshot exists, the wizard will show an error, indicating that it's not possible to create a clone based on a snapshot for the selected virtual machine. Selecting on the basis of what you want to clone. After selecting what you want to base the clone on, you'll need to choose between a linked clone and a full clone. Realize that a full clone is a complete copy of a virtual machine, so you'll need as much free disk space on the host computer as the virtual machine itself uses. So if the virtual machine uses 60 gigabytes of disk space, you'll need at least 60 gigabytes of free disk space on the host as well! After verifying that you have the required amount of free disk space, you can start the cloning process. In the last step of the wizard, specify the name that you want to assign to the clone, as well as the location on the host operating system where the clone has to be stored. Once created, the clone shows up in the VMware console as a new virtual machine, and that is also how it is to be considered. A cloned virtual machine appears as a completely new virtual machine in the VMware Workstation library.

Another way of creating clones is by using the Snapshot Manager. The advantage is that in the Snapshot Manager you can easily select the snapshot that you want to create a clone of: just select the snapshot state you want to use and click on the Clone button.

Summary

In this article, you learned how to work with cloning and snapshot tools. You learned how to create snapshots and clones, and how to work with the Snapshot Manager.

Resources for Article: Further resources on this subject: VMware View 5 Desktop Virtualization [Article] Creating an Image Profile by cloning an existing one [Article] Installing Zenoss [Article]

Introduction to SDN - Transformation from legacy to SDN

Packt
07 Jun 2017
23 min read
In this article by Reza Toghraee, the author of the book Learning OpenDaylight, we will cover:

- What is and what is not SDN
- The components of an SDN
- The difference between SDN and overlay
- The SDN controllers

(For more resources related to this topic, see here.)

You might have heard about Software-Defined Networking (SDN). If you are in the networking industry, this is probably a topic you studied when you first heard the term. To understand the importance of SDN and the SDN controller, let's look at Google. Google silently built its own networking switches and controller, called Jupiter: a home-grown project that is mostly software driven and supports Google's massive scale. The basis of SDN is that there is a controller that knows the whole network. OpenDaylight (ODL) is an SDN controller; in other words, it's the central brain of the network.

Why we are going towards SDN

Everyone who hears about SDN should ask: why are we talking about SDN? What problem is it trying to solve? If we look at traditional networking (layer 2 and layer 3, with routing protocols such as BGP and OSPF), we are completely dominated by so-called protocols. These protocols have in fact been very helpful to the industry. They are mostly standard, so products from different vendors can communicate with each other: a Cisco router can establish a BGP session with a Huawei switch, and an open source Quagga router can exchange OSPF routes with a Juniper firewall. A routing protocol is a constant standard with a solid base. If you need to override something in your network routing, you have to find a trick within the protocols, even if that means using a static route. SDN can help us escape the routing protocol cage and look at different ways to forward traffic: SDN can directly program each switch, or even override a route installed by a routing protocol. There are high-level benefits of using SDN, a few of which we explain as follows:

- An integrated network: In traditional networks we had a standalone concept: each switch was managed separately, ran its own routing protocol instance, and processed routing information messages from its neighbors. In SDN, we are migrating to a centralized model, where the SDN controller becomes the single point of configuration of the network and the place where you apply policies and configuration.
- Scalable layer 2 across layer 3: Having a layer 2 network span multiple layer 3 networks is something all network architects are interested in; to date we have been using proprietary methods such as OTV, or a service provider VPLS service. With SDN, we can create layer 2 networks across multiple switches or layer 3 domains (using VXLAN) and expand those layer 2 networks. In many cloud environments, where virtual machines are distributed across different hosts in different datacenters, this is a major requirement.
- Third-party application programmability: This is a very generic term, isn't it? What I'm referring to is letting other applications communicate with your network. For example, in many new distributed IP storage systems, the IP storage controller is able to talk to the network to provide the best, shortest path to the storage node. With SDN, we let other applications control the network. Of course, this control has limits, and SDN doesn't allow an application to scrap the whole network.
- Flexible application-based network: In SDN, everything is an application.
L2/L3, BGP, VMware integration, and so on are all applications running in the SDN controller.
- Service chaining: On the fly, you add a firewall or a load balancer into the path. This is service insertion.
- Unified wired and wireless: This is an ideal benefit: a controller that supports both wired and wireless networks. OpenDaylight is the only controller that supports the CAPWAP protocol, which allows integration with wireless access points.

Components of an SDN

A software-defined network infrastructure has two main key components:

- The SDN controller (only one, though it can be deployed as a highly available cluster)
- The SDN-enabled switches (multiple switches, mostly in a Clos topology in a datacenter)

The SDN controller is the single brain of the SDN domain. In fact, an SDN domain is very similar to a chassis-based switch: you can think of the supervisor or management module of a chassis-based switch as the SDN controller, and the line cards and I/O cards as the SDN switches. The main difference between an SDN network and a chassis-based switch is that you can scale out the SDN with multiple switches, whereas in a chassis-based switch you are limited to the number of slots in that chassis.

Controlling the fabric

It is very important that you understand the main technologies involved in SDN. These methods are used by SDN controllers to manage and control the SDN network. In general, there are two methods available for controlling the fabric:

- Direct fabric programming: In this method, the SDN controller communicates directly with the SDN-enabled switches via southbound protocols such as OpenFlow, NETCONF, and OVSDB. The SDN controller programs each member switch with the relevant information about the fabric and how to forward traffic. Direct fabric programming is the method used by OpenDaylight; a small sketch of what such programming looks like follows this list.
- Overlay: In the overlay method, the SDN controller doesn't rely on programming the network switches and routers. Instead, it builds a virtual overlay network on top of the existing underlay network. The underlay can be an L2 or L3 network with traditional switches and routers, just providing IP connectivity. The SDN controller uses this platform to build the overlay with encapsulation protocols such as VXLAN and NVGRE. VMware NSX uses overlay technology to build and control the virtual fabric.
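As referenced under direct fabric programming above, here is a minimal, hedged sketch of the kind of flow-table entries a controller pushes into a switch, expressed with the ovs-ofctl tool. It assumes Open vSwitch is installed and that a bridge named br0 with ports 1 and 2 already exists; the bridge name and port numbers are illustrative:

# Forward anything arriving on port 1 out of port 2, and vice versa:
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "in_port=2,actions=output:1"

# Inspect the flow table the switch is now forwarding with:
ovs-ofctl dump-flows br0

An SDN controller such as OpenDaylight does essentially the same thing programmatically over the OpenFlow channel, installing and removing flow entries on every switch in the fabric.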
SDN controllers

One of the key fundamentals of SDN is disaggregation: disaggregation of software and hardware in a network, and also disaggregation of the control and forwarding planes. The SDN controller is the main brain of an SDN environment. It's a strategic control point within the network, responsible for communicating information to:

- Routers, switches, and other network devices behind them. The SDN controller uses APIs or protocols (such as OpenFlow or NETCONF) to communicate with these devices; this communication is known as southbound.
- Upstream switches, routers, or applications and the aforementioned business logic (via APIs or protocols); this communication is known as northbound. An example of northbound communication is a BGP session between a legacy router and the SDN controller.

If you are familiar with chassis-based switches such as the Cisco Catalyst 6500 or the Nexus 7k chassis, you can imagine an SDN network as a chassis, with switches and routers as its I/O line cards and the SDN controller as its supervisor or management module. In fact, SDN is similar to a very scalable chassis with no limitation on the number of physical slots. The SDN controller plays a role similar to the management module of a chassis-based switch, controlling all switches via its southbound protocols and APIs. The following comparison sets the SDN controller against a chassis-based switch:

- Hardware support: an SDN controller supports any switch hardware; a chassis-based switch supports only specific switch line cards.
- Scale: an SDN controller can scale out to an unlimited number of switches; a chassis is limited to the number of physical slots.
- Redundancy: an SDN controller supports high redundancy through multiple controllers in a cluster; a chassis supports dual management redundancy, active/standby.
- Control channel: an SDN controller communicates with switches via southbound protocols such as OpenFlow, NETCONF, and BGP PCEP; a chassis uses proprietary protocols between the management module and the line cards.
- External communication: an SDN controller communicates with routers, switches, and applications outside the SDN domain via northbound protocols such as BGP and OSPF, or direct APIs; a chassis communicates with other routers and switches outside the chassis via standard protocols such as BGP, OSPF, or APIs.

The first protocol that popularized the concept behind SDN was OpenFlow. Conceptualized by networking researchers at Stanford back in 2008, it was meant to manipulate the data plane, altering switch forwarding tables to optimize traffic flows and make adjustments, so the network could quickly adapt to changing requirements. In other words, OpenFlow is a protocol designed to update the flow tables in a switch, allowing the SDN controller to access the forwarding table of each member switch and thereby connecting the control plane and the data plane in the SDN world. Version 1.0 of the OpenFlow specification was released in December of 2009; it continues to be enhanced under the management of the Open Networking Foundation, a user-led organization focused on advancing the development of open standards and the adoption of SDN technologies.

After the introduction of OpenFlow, NOX was introduced as the original OpenFlow controller (the concept of an SDN controller didn't exist yet). NOX provided a high-level API capable of managing the network and of supporting the development of network control applications; separate applications running on top of NOX were required to manage the network. NOX was initially developed by Nicira Networks (which was acquired by VMware and ultimately became part of VMware NSX) and was introduced along with OpenFlow in 2009. NOX was a closed source product, but it was ultimately donated to the SDN community, which led to multiple forks and subprojects of the original NOX. For example, POX is a subproject of NOX that provides Python support. Both NOX and POX were early controllers. NOX development appears inactive; POX, however, is still in use by the research community, as it is a Python-based project and can be easily deployed. POX is hosted at http://github.com/noxrepo/pox.

Apart from being the first OpenFlow (or SDN) controller, NOX also established a programming model inherited by subsequent controllers. The model was based on processing OpenFlow messages, with each incoming OpenFlow message triggering an event that had to be processed individually. This model was simple to implement, but it was neither efficient nor robust, and it couldn't scale. Nicira, along with NTT and Google, then started developing ONIX, which was meant to be a more abstract and scalable platform for large deployments.
ONIX became the base for Nicira (the core of VMware NSX, the network virtualization platform), and there are rumors that it is also the base for Google's WAN controller. ONIX was planned to become open source and be donated to the community, but for various reasons the main contributors decided not to do so, which forced the SDN community to focus on developing other platforms.

In 2010, a new controller was introduced: Beacon, which became one of the most popular controllers. It was born with contributions from developers at Stanford University. Beacon is a Java-based open source OpenFlow controller created in 2010. It has been widely used for teaching and research, and it served as the basis of Floodlight. Beacon had the first built-in web user interface, which was a huge step forward in the market of SDN controllers, and it was easier to deploy and run than NOX. Beacon influenced the design of later controllers; however, it supported only star topologies, which was one of its limitations.

Floodlight was a successful SDN controller built as a fork of Beacon; Big Switch Networks develops Floodlight along with other contributors. In 2013, Beacon's popularity started to shrink and Floodlight's started to grow. Floodlight fixed many issues of Beacon and added many additional features, which made it one of the most feature-rich controllers available. It also had a web interface and a Java-based GUI, and it could be integrated with OpenStack using the Quantum plugin. Integration with OpenStack was a big step forward, as it could be used to provide networking to a large pool of virtual machines, compute, and storage. Floodlight adoption increased with the evolution of OpenStack and its adopters, which gave Floodlight greater popularity and applicability than the controllers that came before it; most controllers that came after Floodlight also supported OpenStack integration. Floodlight is still supported and developed by the community and Big Switch Networks, and it is the base for Big Cloud Fabric (Big Switch's commercial SDN controller). Other open source SDN controllers have been introduced as well, such as Trema (Ruby-based, from NEC), Ryu (supported by NTT), FlowER, LOOM, and the recent OpenMUL. The current open source SDN controllers are:

- Active open source SDN controllers: Floodlight, OpenContrail, OpenDaylight, LOOM, OpenMUL, ONOS, POX, Ryu, Trema
- Inactive open source SDN controllers: Beacon, FlowER, NOX, NodeFlow

OpenDaylight

OpenDaylight started in early 2013 as a new collaborative open source project, originally led by IBM and Cisco. It is hosted under the Linux Foundation and has drawn support and interest from many developers and adopters. OpenDaylight is a platform providing common foundations and a robust array of services for SDN environments. OpenDaylight uses a controller model that supports OpenFlow as well as other southbound protocols; it was the first open source controller capable of employing non-OpenFlow, proprietary control protocols, which lets OpenDaylight integrate with modern, multi-vendor networks. The first release of OpenDaylight arrived in February 2014 under the code name Hydrogen, followed by Helium in September 2014. The Helium release was significant because it marked a change in direction for the platform that has influenced the way subsequent controllers have been architected.
The main change was in the service abstraction layer, the part of the controller platform that resides just above the southbound protocols, such as OpenFlow, isolating them from the northbound side, where the applications reside. Hydrogen used an API-driven Service Abstraction Layer (AD-SAL), which had limitations: specifically, it meant the controller needed to know about every type of device in the network and have an inventory of drivers to support them. Helium introduced a Model-Driven Service Abstraction Layer (MD-SAL), which meant the controller didn't have to account for all the types of equipment installed in the network, allowing it to manage a wide range of hardware and southbound protocols. The Helium release made the framework much more agile and adaptable to changes in the applications: an application could now request changes to the model, which would be received by the abstraction layer and forwarded to the network devices.

The OpenDaylight platform built on this advancement in its third release, Lithium, introduced in June of 2015. This release focused on broadening the programmability of the network, enabling organizations to create their own service architectures to deliver dynamic network services in a cloud environment and to craft intent-based policies. The Lithium release was worked on by more than 400 individuals, with contributions from Big Switch Networks, Cisco, Ericsson, HP, NEC, and others, making it one of the fastest growing open source projects ever. The fourth release, Beryllium, came out in February of 2016, and the most recent fifth release, Boron, was released in September 2016.

Many vendors have built and developed commercial SDN controller solutions based on OpenDaylight, each enhancing or adding features to OpenDaylight as a differentiating factor. Vendors use OpenDaylight in their products as:

- A base, selling a commercial version with additional proprietary functionality (for example, Brocade, Ericsson, and Ciena)
- Part of the infrastructure in their Network as a Service (or XaaS) offerings (for example, Telstra and IBM)
- Elements for use in their solution (for example, ConteXtream, now part of HP)

The Open Networking Operating System (ONOS), which was open sourced in December 2014, is focused on serving the needs of service providers. While it is not as widely adopted as OpenDaylight, ONOS has been finding success and gaining momentum around WAN use cases. ONOS is backed by numerous organizations including AT&T, Cisco, Fujitsu, Ericsson, Ciena, Huawei, NTT, SK Telecom, NEC, and Intel, many of whom are also participants in and supporters of OpenDaylight.

Apart from open source SDN controllers, there are many commercial, proprietary controllers available in the market; VMware NSX, Cisco APIC, BigSwitch Big Cloud Fabric, HP VAN, and NEC ProgrammableFlow are examples. The commercially available controllers relate to OpenDaylight as follows:

- ODL-based: Avaya, Brocade, Ciena, ConteXtream (HP), Coriant, Ericsson, Extreme, Huawei (also ships a non-ODL controller)
- ODL-friendly: Cyan (acquired by Ciena), HP, NEC, Nuage, PLUMgrid, Pluribus, Sonus
- Non-ODL based: BigSwitch, Juniper, Cisco, Plexxi, VMware NSX

Core features of SDN

Whether an SDN platform is open source or proprietary, there are core features and capabilities that it is required to support.
These capabilities include:

- Fabric programmability: Providing the ability to redirect traffic, apply filters to packets dynamically, and leverage templates to streamline the creation of custom applications, and ensuring that northbound APIs make the control information centralized in the controller available to be changed by SDN applications. This ensures the controller can dynamically adjust the underlying network to optimize traffic flows to use the least expensive path, take varying bandwidth constraints into consideration, and meet quality of service (QoS) requirements.
- Southbound protocol support: Enabling the controller to communicate with switches and routers and to manipulate and optimize how they manage the flow of traffic. Currently, OpenFlow is the most standard protocol used between different networking vendors, although other southbound protocols can be used. An SDN platform should support different versions of OpenFlow in order to provide compatibility with different switching equipment.
- External API support: Ensuring the controller can be used within varied orchestration and cloud environments, such as VMware vSphere, OpenStack, and so on. Using APIs, the orchestration platform can communicate with the SDN platform in order to publish network policies. For example, VMware vSphere can talk to the SDN platform to extend virtual distributed switches (VDS) from the virtual environment to the physical underlay network, without requiring a network engineer to configure the network.
- Centralized monitoring and visualization: Since the SDN controller has full visibility over the network, it can offer end-to-end visibility of the network and centralized management, to improve overall performance, simplify the identification of issues, and accelerate troubleshooting. The SDN controller is able to discover and present a logical abstraction of all the physical links in the network; it can also discover and present a map of connected devices (MAC addresses) related to the virtual or physical devices connected to the network. The SDN controller supports monitoring protocols, such as syslog and SNMP, and APIs, in order to integrate with third-party management and monitoring systems.
- Performance: Performance in an SDN environment mainly depends on how fast the SDN controller fills the flow tables of the SDN-enabled switches. Most SDN controllers pre-populate the flow tables on switches to minimize delay. When an SDN-enabled switch receives a packet that doesn't match any entry in its flow table, it sends the packet to the SDN controller to find out where the packet needs to be forwarded. A robust SDN solution should ensure that the number of such requests from switches is minimal and that the SDN controller doesn't become a bottleneck in the network.
- High availability and scalability: Controllers must support high availability clusters to ensure reliability and service continuity in case a controller fails. Clustering in an SDN controller extends to scalability: a modern SDN platform should be able to add more controller nodes, with load balancing, in order to increase performance and availability. Modern SDN controllers support clustering across multiple geographical locations.
- Security: Since all switches communicate with the SDN controller, the communication channel needs to be secured to ensure unauthorized devices don't compromise the network.
The SDN controller should secure the southbound channels, using encrypted messaging and mutual authentication to provide access control. Apart from that, the SDN controller must implement preventive mechanisms against denial-of-service attacks. Deploying authorization levels and access controls for multi-tenant SDN platforms is also a key requirement.

Beyond the aforementioned features, SDN controllers are likely to expand their function in the future. They may become a network operating system and change the way we used to build networks with hardware, switches, SFPs, and gigs of bandwidth. The future will look more software defined, as the silicon and hardware industry has already delivered on its promises of high-performance 40G and 100G networking chips; the industry needs more time to digest the new hardware and silicon and refresh its equipment with new gear supporting ten times the current performance.

Current SDN controllers

In this section, I'm listing the different SDN controllers. This will help you understand the current market players in SDN and how OpenDaylight relates to them. Each entry gives the vendor/product, whether it is based on OpenDaylight, whether it is commercial or open source, and a description:

- Brocade SDN Controller (based on OpenDaylight; commercial): A commercial version of OpenDaylight, fully supported and with extra reliability modules.
- Cisco APIC (not based on OpenDaylight; commercial): The Cisco Application Policy Infrastructure Controller (APIC) is the unifying automation and management point for the Application Centric Infrastructure (ACI) datacenter fabric. Cisco uses the APIC controller and Nexus 9k switches to build the fabric, with OpFlex as the main southbound protocol.
- Ericsson SDN controller (based on OpenDaylight; commercial): Ericsson's SDN controller is a commercial (hardened) version of the OpenDaylight SDN controller. Domain-specific control applications that use the SDN controller as a platform form the basis of the three commercial products in Ericsson's SDN controller portfolio.
- Juniper OpenContrail/Contrail (not based on OpenDaylight; both open source and commercial): OpenContrail is open source, and Contrail itself is a commercial product. Juniper Contrail Networking is an open SDN solution that consists of the Contrail controller, the Contrail vRouter, an analytics engine, and published northbound APIs for cloud and NFV. OpenContrail is also available for free from Juniper. Contrail promotes and uses MPLS in the datacenter.
- NEC ProgrammableFlow (not based on OpenDaylight; commercial): NEC provides its own SDN controller and switches. The NEC SDN platform is a popular choice for enterprises, with a lot of traction and some case studies.
- Avaya SDN Fx controller (based on OpenDaylight; commercial): Based on OpenDaylight, bundled as a solution package.
- Big Cloud Fabric (not based on OpenDaylight; commercial): Big Switch Networks' solution is based on the open source Floodlight project. Big Cloud Fabric is a robust, clean SDN controller that works with bare-metal whitebox switches. It includes Switch Light OS, a switch operating system that can be loaded on whitebox switches with Broadcom Trident 2 or Tomahawk silicon. The benefit of Big Cloud Fabric is that you are not bound to any hardware and can use bare-metal switches from any vendor.
- Ciena's Agility (based on OpenDaylight; commercial): Ciena's Agility multilayer WAN controller is built atop the open source baseline of the OpenDaylight Project, an open, modular framework created by a vendor-neutral ecosystem (rather than a vendor-centric ego-system) that will enable network operators to source network services and applications from both Ciena's Agility and others.
- HP VAN (Virtual Application Network) (not based on OpenDaylight; commercial): The building block of the HP open SDN ecosystem; the controller allows third-party developers to deliver innovative SDN solutions.
- Huawei Agile Controller (based on OpenDaylight in some editions; commercial): Huawei's SDN controller, which integrates as a solution with Huawei enterprise switches.
- Nuage (not based on OpenDaylight; commercial): Nuage Networks VSP provides SDN capabilities for clouds of all sizes. It is implemented as a non-disruptive overlay for all existing virtualized and non-virtualized server and network resources.
- Pluribus Netvisor (not based on OpenDaylight; commercial): Netvisor Premium and Open Netvisor Linux are distributed network operating systems. Open Netvisor integrates a traditional, interoperable networking stack (L2/L3/VXLAN) with a distributed SDN controller that runs in every switch of the fabric.
- VMware NSX (not based on OpenDaylight; commercial): VMware NSX is an overlay type of SDN, which currently works with VMware vSphere; the plan is to support OpenStack in the future. VMware NSX also has a built-in firewall, router, and L4 load balancers, allowing micro-segmentation.

OpenDaylight as an SDN controller

Previously, we went through the role of the SDN controller and a brief history of ODL. ODL is a modular, open SDN platform that allows developers to build any network or business application on top of it to drive the network in the way they want. OpenDaylight has currently reached its fifth release (Boron, which is the fifth element in the periodic table). ODL releases are named after the elements of the periodic table, starting from the first release, Hydrogen. ODL has a six-month release cycle, and with many developers working on expanding ODL, two releases per year are expected from the community. For technical readers to understand it more clearly, the following diagram will help.

The ODL platform has a broad set of use cases for multivendor, brownfield, and greenfield deployments, for service providers and enterprises alike. ODL is a foundation for the networks of the future. Service providers are using ODL to move their services to a software-enabled level with automatic service delivery, coming out of the circuit-based mindset of service delivery; they also work on providing virtualized CPE with NFV support in order to provide flexible offerings. Enterprises use ODL for many use cases, from datacenter networking, cloud and NFV, network automation and resource optimization, and visibility and control, to deploying a fully SDN campus network.

ODL uses an MD-SAL, which makes it very scalable and lets it incorporate new applications and protocols faster. ODL supports multiple standard and proprietary southbound protocols; for example, with full support for OpenFlow and OVSDB, ODL can communicate with any standard hardware, or even virtual switches such as Open vSwitch (OVS), that supports such protocols. With such support, ODL can be deployed and used in multivendor environments and can control hardware from different vendors from a single console, no matter what vendor and what device it is, as long as standard southbound protocols are supported. ODL uses a microservice architecture model that allows users to control applications, protocols, and plugins while deploying SDN applications; ODL is also able to manage the connections between external consumers and providers.
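Since ODL's northbound side is typically exposed over REST, here is a minimal, hedged sketch of querying the controller's view of the network. It assumes a default local OpenDaylight installation with RESTCONF enabled, listening on port 8181 with the default admin/admin credentials; adjust the host, port, and credentials for a real deployment:

# Ask the controller for the network topology it has learned:
curl -u admin:admin http://localhost:8181/restconf/operational/network-topology:network-topology

# List the nodes (switches) in the OpenFlow inventory:
curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes

The same API that answers these reads also accepts writes, which is how applications north of the controller push policy down into the fabric.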
The following diagram explains the ODL footprint and the different components and projects within ODL.

Microservices architecture

ODL stores its YANG data structures in a common data store and uses a messaging infrastructure between its components to enable a model-driven approach to describing the network and its functions. In the ODL MD-SAL, any SDN application can be integrated as a service and then loaded into the SDN controller. These services (apps) can be chained together in any number of ways to match the application's needs. This concept allows users to install and enable only the protocols and services they need, which keeps the system light and efficient. Services and applications created by users can also be shared with others in the ecosystem, since SDN application deployment for ODL follows a modular design.

ODL supports multiple southbound protocols: OpenFlow and OpenFlow extensions such as Table Type Patterns (TTP), as well as other protocols including NETCONF, BGP/PCEP, CAPWAP, and OVSDB. ODL also supports the Cisco OpFlex protocol. The ODL platform provides a framework for authentication, authorization, and accounting (AAA), as well as automatic discovery and securing of network devices and controllers. Another key area in security is the use of encrypted and authenticated communication through the southbound protocols with the switches and routers within the SDN domain; most southbound protocols support encryption mechanisms.

Summary

In this article we learned about SDN and why it is important. We reviewed the SDN controller products and the history of ODL, as well as the core features of SDN controllers and the market-leading controllers.

Resources for Article: Further resources on this subject: Managing Network Devices [article] Setting Up a Network Backup Server with Bacula [article] Point-to-Point Networks [article]


Learning Basic PowerCLI Concepts

Packt
05 Jan 2017
7 min read
In this article by Robert van den Nieuwendijk, author of the book Learning PowerCLI - Second Edition, you will learn some basic PowerShell and PowerCLI concepts. Knowing these concepts will make it easier for you to learn the advanced topics. We will cover the Get-Command, Get-Help, and Get-Member cmdlets in this article. (For more resources related to this topic, see here.)

Using the Get-Command, Get-Help, and Get-Member cmdlets

There are some PowerShell cmdlets that everyone should know. Knowing these cmdlets will help you discover other cmdlets, their functions, parameters, and returned objects.

Using Get-Command

The first cmdlet that you should know is Get-Command. This cmdlet returns all the commands that are installed on your computer. The Get-Command cmdlet has the following syntax:

Get-Command [[-ArgumentList] <Object[]>] [-All] [-ListImported] [-Module <String[]>] [-Noun <String[]>] [-ParameterName <String[]>] [-ParameterType <PSTypeName[]>] [-Syntax] [-TotalCount <Int32>] [-Verb <String[]>] [<CommonParameters>]

Get-Command [[-Name] <String[]>] [[-ArgumentList] <Object[]>] [-All] [-CommandType <CommandTypes>] [-ListImported] [-Module <String[]>] [-ParameterName <String[]>] [-ParameterType <PSTypeName[]>] [-Syntax] [-TotalCount <Int32>] [<CommonParameters>]

The first parameter set is named CmdletSet, and the second parameter set is named AllCommandSet. If you type the following command, you will get a list of commands installed on your computer, including cmdlets, aliases, functions, workflows, filters, scripts, and applications:

PowerCLI C:\> Get-Command

You can also specify the name of a specific cmdlet to get information about that cmdlet, as shown in the following command:

PowerCLI C:\> Get-Command -Name Get-VM

This will return the following information about the Get-VM cmdlet:

CommandType     Name       ModuleName
-----------     ----       ----------
Cmdlet          Get-VM     VMware.VimAutomation.Core

You see that the command returns the command type and the name of the module that contains the Get-VM cmdlet. CommandType, Name, and ModuleName are the properties that the Get-Command cmdlet returns by default. You will get more properties if you pipe the output to the Format-List cmdlet. The following screenshot shows the output of the Get-Command -Name Get-VM | Format-List * command. You can use the Get-Command cmdlet to search for cmdlets. For example, to search for the cmdlets that are used for vSphere hosts, type the following command:

PowerCLI C:\> Get-Command -Name *VMHost*

If you are searching for the cmdlets to work with networks, use the following command:

PowerCLI C:\> Get-Command -Name *network*

Using Get-VICommand

PowerCLI has a Get-VICommand cmdlet that is similar to the Get-Command cmdlet. The Get-VICommand cmdlet is actually a function that puts a filter on the Get-Command output and returns only PowerCLI commands. Type the following command to list all the PowerCLI commands:

PowerCLI C:\> Get-VICommand

The Get-VICommand cmdlet has only one parameter, -Name. So you can also type, for example, the following command to get information only about the Get-VM cmdlet:

PowerCLI C:\> Get-VICommand -Name Get-VM

Using Get-Help

To discover more information about cmdlets, you can use the Get-Help cmdlet. For example:

PowerCLI C:\> Get-Help Get-VM

This will display the information about the Get-VM cmdlet shown in the following screenshot. The Get-Help cmdlet has some parameters that you can use to get more information. The -Examples parameter shows examples of the cmdlet. The -Detailed parameter adds parameter descriptions and examples to the basic help display. The -Full parameter displays all the information available about the cmdlet. And the -Online parameter retrieves the online help available for the cmdlet and displays it in a web browser. Since PowerShell V3, there is a new Get-Help parameter, -ShowWindow, which displays the output of Get-Help in a new, searchable window, as the following screenshot shows.
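Putting those parameters together, here is a short session sketch; the switches shown are standard Get-Help parameters, so they work for any cmdlet, not just Get-VM:

PowerCLI C:\> Get-Help Get-VM -Examples   # show only the usage examples
PowerCLI C:\> Get-Help Get-VM -Full       # show everything, including parameter details
PowerCLI C:\> Get-Help Get-VM -Online     # open the web-based help in a browser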
Using Get-PowerCLIHelp

The PowerCLI Get-PowerCLIHelp cmdlet opens a separate help window for PowerCLI cmdlets, PowerCLI objects, and articles. This is a very useful tool if you want to browse through the PowerCLI cmdlets or PowerCLI objects. The following screenshot shows the window opened by the Get-PowerCLIHelp cmdlet.

Using Get-PowerCLICommunity

If you have a question about PowerCLI and you cannot find the answer in this article, use the Get-PowerCLICommunity cmdlet to open the VMware vSphere PowerCLI section of the VMware VMTN Communities. You can log in to the VMware VMTN Communities using the same My VMware account that you used to download PowerCLI. First, search the community for an answer to your question. If you still cannot find the answer, go to the Discussions tab and ask your question by clicking on the Start a Discussion button, as shown later. You might receive an answer to your question in a few minutes.

Using Get-Member

In PowerCLI, you work with objects. Even a string is an object. An object contains properties and methods, which are called members in PowerShell. To see which members an object contains, you can use the Get-Member cmdlet. To see the members of a string, type the following command:

PowerCLI C:\> "Learning PowerCLI" | Get-Member

Pipe an instance of a PowerCLI object to Get-Member to retrieve the members of that PowerCLI object. For example, to see the members of a virtual machine object, you can use the following command:
PowerCLI C:\> Get-VM | Get-Member

   TypeName: VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl

Name                    MemberType Definition
----                    ---------- ----------
ConvertToVersion        Method     T VersionedObjectInterop.Conver...
Equals                  Method     bool Equals(System.Object obj)
GetConnectionParameters Method     VMware.VimAutomation.ViCore.Int...
GetHashCode             Method     int GetHashCode()
GetType                 Method     type GetType()
IsConvertableTo         Method     bool VersionedObjectInterop.IsC...
LockUpdates             Method     void ExtensionData.LockUpdates()
ObtainExportLease       Method     VMware.Vim.ManagedObjectReferen...
ToString                Method     string ToString()
UnlockUpdates           Method     void ExtensionData.UnlockUpdates()
CDDrives                Property   VMware.VimAutomation.ViCore.Typ...
Client                  Property   VMware.VimAutomation.ViCore.Int...
CustomFields            Property   System.Collections.Generic.IDic...
DatastoreIdList         Property   string[] DatastoreIdList {get;}
Description             Property   string Description {get;}
DrsAutomationLevel      Property   System.Nullable[VMware.VimAutom...
ExtensionData           Property   System.Object ExtensionData {get;}
FloppyDrives            Property   VMware.VimAutomation.ViCore.Typ...
Folder                  Property   VMware.VimAutomation.ViCore.Typ...
FolderId                Property   string FolderId {get;}
Guest                   Property   VMware.VimAutomation.ViCore.Typ...
GuestId                 Property   string GuestId {get;}
HAIsolationResponse     Property   System.Nullable[VMware.VimAutom...
HardDisks               Property   VMware.VimAutomation.ViCore.Typ...
HARestartPriority       Property   System.Nullable[VMware.VimAutom...
Host                    Property   VMware.VimAutomation.ViCore.Typ...
HostId                  Property   string HostId {get;}
Id                      Property   string Id {get;}
MemoryGB                Property   decimal MemoryGB {get;}
MemoryMB                Property   decimal MemoryMB {get;}
Name                    Property   string Name {get;}
NetworkAdapters         Property   VMware.VimAutomation.ViCore.Typ...
Notes                   Property   string Notes {get;}
NumCpu                  Property   int NumCpu {get;}
PersistentId            Property   string PersistentId {get;}
PowerState              Property   VMware.VimAutomation.ViCore.Typ...
ProvisionedSpaceGB      Property   decimal ProvisionedSpaceGB {get;}
ResourcePool            Property   VMware.VimAutomation.ViCore.Typ...
ResourcePoolId          Property   string ResourcePoolId {get;}
Uid                     Property   string Uid {get;}
UsbDevices              Property   VMware.VimAutomation.ViCore.Typ...
UsedSpaceGB             Property   decimal UsedSpaceGB {get;}
VApp                    Property   VMware.VimAutomation.ViCore.Typ...
Version                 Property   VMware.VimAutomation.ViCore.Typ...
VMHost                  Property   VMware.VimAutomation.ViCore.Typ...
VMHostId                Property   string VMHostId {get;}
VMResourceConfiguration Property   VMware.VimAutomation.ViCore.Typ...
VMSwapfilePolicy        Property   System.Nullable[VMware.VimAutom...

The command returns the full type name of the VirtualMachineImpl object and all its methods and properties. Remember that the properties are objects themselves. You can also use Get-Member to get the members of the properties. For example, the following command line will give you the members of the VMGuestImpl object:

PowerCLI C:\> $VM = Get-VM -Name vCenter
PowerCLI C:\> $VM.Guest | Get-Member

Summary

In this article, you looked at the Get-Help, Get-Command, and Get-Member cmdlets.

Resources for Article: Further resources on this subject: Enhancing the Scripting Experience [article] Introduction to vSphere Distributed switches [article] Virtualization [article]


It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back, and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S, up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember there's a largely hidden plane of software engineering labour. Without this army of developers, consumers would most likely be hitting their devices in frustration, while business leaders would be missing tough revenue targets - so, as we enter Black Friday, let's pour one out for all those engineers on call, trying their best to keep eCommerce sites on their feet.

Here's to the software engineers keeping things running on Black Friday

Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before those sales begin. However, for engineering teams that are under-resourced and lacking the right tools, that is simply impossible. This means that software engineers are left treading water, knowing that they're going to be sinking once those big days come around. It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue-driving platforms can become more secure, resilient and stable.

Chaos engineering platform Gremlin publishes the 'true cost of downtime'

This is the central argument of chaos engineering platform Gremlin, who we've covered a number of times this year. To coincide with Black Friday, the team has put together what they believe is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform, but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and developer perspective. Estimating the annual revenue of some of the biggest companies in the world, Gremlin has created an interactive table to demonstrate what the cost of downtime for each of those businesses would be for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000. Gremlin provide some context to all this, saying: "Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn't 100% online and performant, it's losing revenue." "The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive." For Gremlin, chaos engineering is clearly the answer to many of the problems days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour-by-hour level could be incredibly damaging.
With outages on Facebook, WhatsApp, and Instagram happening earlier this week, these problems aren't hidden away - they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should be. Perhaps it's time to start learning the lessons of Black Friday - business revenues will be that little bit healthier, but engineers will also be that little bit happier.

New for 2020 in operations and infrastructure engineering

Richard Gall
19 Dec 2019
5 min read
It's an exciting time if you work in operations and software infrastructure. Indeed, you could even say that as the pace of change and innovation increases, your role only becomes more important. Operations and systems engineers, solution architects, everyone - your jobs are all about bringing stability, order and control into what can sometimes feel like chaos. As anyone that's been working in the industry knows, managing change, from a personal perspective, requires a lot of effort. To keep on top of what's happening in the industry - what tools are being released and updated, what approaches are gaining traction - you need to have one eye on the future and the wider industry. To help you with that challenge and get you ready for 2020, we've put together a list of what's new for 2020 - and what you should start learning.

Learn how to make Kubernetes work for you

It goes without saying that Kubernetes was huge in 2019. But there are plenty of murmurs and grumblings that it's too complicated and adds an additional burden for engineering and operations teams. To a certain extent there's some truth in this - and arguably now would be a good time to accept that just because it seems like everyone is using Kubernetes, it doesn't mean it's the right solution for you. However, having said that, 2020 will be all about understanding how to make Kubernetes relevant to you. This doesn't mean you should just drop the way you work and start using Kubernetes, but it does mean that spending some time with the platform and getting a better sense of how it could be used in the future is a useful way to spend your learning time in 2020. Explore Packt's extensive range of Kubernetes eBooks and videos on the Packt store.

Learn how to architect

If software has eaten the world, then by the same token perhaps complexity has well and truly eaten software as we know it. Indeed, Kubernetes is arguably just one of the symptoms and causes of this complexity. Another is the growing demand for architects in engineering and IT teams. There are a number of different 'architecture' job roles circulating across the industry, from solutions architect to application architect. While they each have their own subtle differences, and will even vary from company to company, they're all roles that are about organizing and managing different pieces into something that is both stable and value-driving. Cloud has been particularly instrumental in making architect roles more prominent in the industry. As organizations look to resist the pitfalls of lock-in and better manage resources (financial and otherwise), it will be down to architects to balance business and technology concerns carefully. Learn how to architect cloud native applications. Read Architecting Cloud Computing Solutions. Get to grips with everything you need to know to be a software architect. Pick up Software Architect's Handbook.

Artificial intelligence

It's strange that the hype around AI doesn't seem to have reached the world of ops. Perhaps this is because the area is more resistant to the spin that comes with AI, preferring instead to focus more on the technical capabilities of tools and platforms. Whatever the case, it's nevertheless true that AI will play an important part in how we manage and secure infrastructure. From monitoring system health, to automating infrastructure deployments and configuration, and even identifying security threats, artificial intelligence is already an important component for operations engineers and others.
Indeed, artificial intelligence is being embedded inside the products and platforms that ops teams are using, which means the need to 'learn' artificial intelligence is somewhat reduced. But it would be wrong to think it's something that can just be managed from a dashboard. In 2020 it will be essential to better understand where and how artificial intelligence can fit into your operations and architectural toolchain.

Find artificial intelligence eBooks and videos in Packt's collection of curated data science bundles.

Observability, monitoring, tracing, and logging

One of the challenges of software complexity is understanding exactly what's going on under the hood. Yes, the network might be unreliable, as the saying goes, but what makes things even worse is that we're often not even sure why. This is where observability and the next generation of monitoring, logging, and tracing all come into play.

Having detailed insight into how applications and infrastructures are performing, how resources are being managed, and what is actually causing problems is vitally important from a team perspective. Without the ability to understand these things, knowledge becomes siloed inside the brains of specific engineers, which puts pressure on teams and leaves you vulnerable to points of failure at a personnel level.

There are, of course, a wide range of tools and products available that can make monitoring and tracing easy (or easier, at least). But understanding which ones are right for your needs still requires some time learning and exploring the options out there. Make sure you do exactly that in 2020.

Learn how to monitor distributed systems with Learn Centralized Logging and Monitoring with Kubernetes.

Making serverless a reality

We've talked about serverless a lot this year. But as a concept there's still considerable confusion about what role it should play in modern DevOps processes. Indeed, even the nomenclature is a little confusing: platforms use their own terminology, such as 'lambdas' and 'functions', which only adds to the sense that serverless is something amorphous and hard to pin down.

So, in 2020, we need to work out how to make serverless work for us. Just as we need to consider how Kubernetes might be relevant to our needs, we need to consider in what ways serverless represents both a technical and business opportunity.

Search Packt's library for the latest serverless eBooks and videos.

Explore more technology eBooks and videos on the Packt store.

Packt
14 Apr 2010
4 min read

Installing VirtualBox on Linux

Time for action – downloading and installing VirtualBox on Linux

Ok, for this exercise you'll need a copy of Ubuntu Linux already installed on your PC. I chose Ubuntu because it's one of the friendliest Linux distributions available, as you will see in a moment. Before installing VirtualBox, you'll need to install two additional packages on your Ubuntu system.

1. Open a terminal window (Applications | Accessories | Terminal), and type sudo apt-get update, followed by Enter. If Ubuntu asks for your administrative password, type it, and hit Enter to continue.

2. Once the package list is updated, type sudo apt-get install dkms, and hit Enter; then type Y, and hit Enter to install the DKMS package.

3. The other package needed before you can install VirtualBox is build-essential. This package contains all the compiling tools VirtualBox needs to build its kernel module. Type sudo apt-get install build-essential, and hit Enter. Then type Y, and hit Enter again to continue.

4. Wait for the $ prompt to show up again, type exit, and hit Enter to close the terminal window. Now you can proceed to install VirtualBox.

5. Open the Synaptic Package Manager (System | Administration | Synaptic Package Manager), and select the Settings | Repositories option in the menu bar (if Ubuntu asks for your administrative password, type it, and press Enter to continue).

6. The Software Sources dialog will appear. Click on the Other Software tab (on earlier Ubuntu versions the name of this tab is Third-Party Software), and then on the Add+ button.

7. Another dialog box will show up. Now type deb http://download.virtualbox.org/virtualbox/debian karmic non-free in the APT line field, and click on the Add Source button. The third-party software source for VirtualBox will now show up on the list.

If you're not using Ubuntu 9.10 Karmic Koala, you'll need to change the APT line in the previous step. For example, if you're using Ubuntu 9.04 Jaunty Jackalope, replace the karmic part with jaunty. On the http://www.virtualbox.org/wiki/Linux_Downloads webpage, you'll find more information about installing VirtualBox on several Linux distributions and the APT line required for each Ubuntu version available.

8. Now open a terminal window (Applications | Accessories | Terminal), and type wget -q http://download.virtualbox.org/virtualbox/debian/sun_vbox.asc to download the Sun public key.

9. Go back to the Synaptic Package Manager, select the Authentication tab, and click on the Import Key File button.

10. The Import Key dialog will appear next. Select the sun_vbox.asc file you just downloaded, and click on the OK button to continue. The Sun public key for VirtualBox should now appear on the list.

11. You can now delete the Sun public key file you downloaded earlier. Click on the Close button to return to the Synaptic Package Manager. If the Repositories Changed dialog shows up, select the Never show this message again checkbox, and click on Close to continue.

If you prefer working from a terminal, a consolidated command-line sketch of these preparation steps follows below.
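For readers who'd rather skip the Synaptic dialogs, here is a minimal shell sketch of steps 1 through 11, assuming Ubuntu 9.10 (karmic). Appending the repository line directly to /etc/apt/sources.list is an assumption on our part - it achieves the same result as Synaptic's Software Sources dialog, but you may prefer to keep third-party entries elsewhere:

    # Install the packages VirtualBox needs to build its kernel module
    sudo apt-get update
    sudo apt-get install dkms build-essential

    # Add the VirtualBox repository
    # (replace karmic with your release's codename, for example jaunty)
    sudo sh -c 'echo "deb http://download.virtualbox.org/virtualbox/debian karmic non-free" >> /etc/apt/sources.list'

    # Download and import the Sun public key used to sign the packages
    wget -q http://download.virtualbox.org/virtualbox/debian/sun_vbox.asc
    sudo apt-key add sun_vbox.asc
    rm sun_vbox.asc

Either way - terminal or Synaptic - the result is the same: a trusted VirtualBox repository ready to be queried.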
12. Now click on the Synaptic Package Manager's Reload button to update your package sources with the most recent VirtualBox version.

13. Once the Synaptic Package Manager finishes updating the package sources list, click on the Origin button located at the lower-left part of the window, and select the download.virtualbox.org/non-free repository from the list above this button.

14. Click on the most recent virtualbox-3.X package checkbox in the right window, and select the Mark for Installation option.

When upgrading to a newer VirtualBox version, you must first completely remove the older version. Then you'll be able to install the newest version without any hassles.

15. The Mark additional required changes? dialog box will appear next. Click on the Mark button to mark all the additional packages required to install VirtualBox.

16. Now click on the Apply button in the Synaptic Package Manager.

17. The Apply the following changes? dialog box will appear next. Make sure the Download package files only option is deselected, and click on the Apply button to start installing the required packages, along with VirtualBox.

The Synaptic Package Manager will start downloading the required packages and, when finished, it will install them along with VirtualBox. The equivalent apt-get commands are sketched below.
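The install phase also has a short terminal equivalent. Note that virtualbox-3.1 below is an example package name standing in for the "most recent virtualbox-3.X package" mentioned in step 14 - check which virtualbox-3.x package the repository actually offers for your release:

    # Refresh the package list so the new repository is picked up
    sudo apt-get update

    # Install VirtualBox; apt-get pulls in the same additional
    # packages that Synaptic marks for you automatically
    sudo apt-get install virtualbox-3.1

apt-get resolves the same dependencies that Synaptic's Mark additional required changes? dialog lists, so either route leaves you with an identical installation.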