pfSense Essentials

Packt
06 Sep 2016
60 min read
In this article by David Zientara, the author of the book Mastering pfSense, we review the process of getting a pfSense system up and running.

While high-speed Internet connectivity is becoming more and more common, many in the online world, especially those with residential connections or small office/home office (SOHO) setups, lack the hardware to take full advantage of those speeds. Fiber optic technology brings with it the promise of gigabit speeds or greater, and the technology surrounding traditional copper networks is also yielding improvements. Yet many people are using consumer-grade routers that offer, at best, mediocre performance. pfSense, an open source router/firewall solution, is a far better alternative that is available to you.

You have likely already downloaded, installed, and configured pfSense, possibly in a residential or SOHO environment. As an intermediate-level pfSense user, you do not need to be sold on the benefits of pfSense. Nevertheless, you may be looking to deploy pfSense in a different environment (for example, a corporate network), or you may just be looking to enhance your knowledge of pfSense. This article is designed to review the process of getting your pfSense system up and running. It will guide you through the process of choosing the right hardware for your deployment, but it will not provide a detailed treatment of installation and initial configuration. The emphasis will be on troubleshooting, as well as some of the newer configuration options. Finally, the article will provide a brief treatment of how to upgrade, back up, and restore pfSense.

This article will cover the following topics:

A brief overview of the pfSense project
pfSense deployment scenarios
Minimum specifications and hardware sizing guidelines
An introduction to virtual local area networks (VLANs) and the Domain Name System (DNS)
The best practices for installation and configuration
Basic configuration from both the console and the pfSense web GUI
Upgrading, backing up, and restoring pfSense

pfSense project overview

The origins of pfSense can be traced to the OpenBSD packet filter known as PF, which was incorporated into FreeBSD in 2001. As PF is limited to a command-line interface, several projects have been launched in order to provide a graphical interface for PF. m0n0wall, which was released in 2003, was the earliest attempt at such a project. pfSense began as a fork of the m0n0wall project. Version 1.0 of pfSense was released on October 4, 2006. Version 2.0 was released on September 17, 2011. Version 2.1 was released on September 15, 2013, and Version 2.2 was released on January 23, 2015. As of writing this, Version 2.2.6 (released on December 21, 2015) is the latest version. Version 2.3 is expected to be released soon, and will be a watershed release in many respects. The web GUI has had a major facelift, and support for some legacy technologies is being phased out. Support for the Point-to-Point Tunneling Protocol (PPTP) will be discontinued, as will support for Wired Equivalent Privacy (WEP).

The current version of pfSense incorporates such functions as traffic shaping, the ability to act as a Virtual Private Network (VPN) client or server, IPv6 support, and, through packages, intrusion detection and prevention, the ability to act as a proxy server, spam and virus blocking, and much more.

Possible deployment scenarios

Once you have decided to add a pfSense system to your network, you need to consider how it is going to be deployed on your network.
pfSense is suitable for networks of all sizes and can be employed in a variety of deployment scenarios. In this article, we will cover the following possible uses for pfSense:

Perimeter firewall
Router
Switch
Wireless router/wireless access point

The most common way to add pfSense to your network is to use it as a perimeter firewall. In this scenario, your Internet connection is connected to one port on the pfSense system, and your local network is connected to another port on the system. The port connected to the Internet is known as the WAN (wide area network) interface, and the port connected to the local network is known as the LAN (local area network) interface.

Diagram showing a deployment in which pfSense is the perimeter firewall.

If pfSense is your perimeter firewall, you may choose to set it up as a dedicated firewall, or you might want to have it perform the double duty of a firewall and a router. You may also choose to have more than two interfaces in your pfSense system (known as optional interfaces). In order to act as a perimeter firewall, however, a pfSense system requires at least two interfaces: a WAN interface (to connect to outside networks) and a LAN interface (to connect to the local network).

In more complex network setups, your pfSense system may have to exchange routing information with other routers on the network. There are two types of protocols for exchanging such information: distance-vector protocols, in which routers obtain their routing information by exchanging information with neighboring routers, and link-state protocols, in which each router independently builds a map of the network in order to calculate the shortest path to any other router. pfSense is capable of running both types of protocols. Packages are available for distance-vector protocols such as RIP and RIPv2, for the link-state protocol OSPF, and for the Border Gateway Protocol (BGP), a path-vector protocol.

Another common deployment scenario is to set up pfSense as a router. In a home or SOHO environment, firewall and router functions are often performed by the same device. In mid-sized to large networks, however, the router is a device separate from the perimeter firewall. In larger networks, which have several network segments, pfSense can be used to connect these segments. In corporate-type environments, these segments are often implemented as VLANs, which allow a single network interface card (NIC) to operate in multiple broadcast domains via 802.1q tagging. VLANs are often used with the ever-popular router on a stick configuration, in which the router has a single physical connection to a switch, with the single Ethernet interface divided into multiple VLANs, and the router forwarding packets between the VLANs. One of the advantages of this setup is that it only requires a single port, and, as a result, it allows us to use pfSense with systems where adding another NIC would be cumbersome or even impossible: for example, a laptop or certain thin clients.
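Under the hood, pfSense's VLANs are ordinary FreeBSD 802.1q interfaces. As a minimal sketch of what the router on a stick arrangement amounts to at the FreeBSD level (the device name em0, the VLAN IDs, and the addresses here are illustrative assumptions; in pfSense itself you would create VLANs from the console assignment prompt or the web GUI rather than by hand):

# Create two 802.1q tagged interfaces on the single physical port em0
ifconfig vlan10 create vlan 10 vlandev em0
ifconfig vlan20 create vlan 20 vlandev em0
# Give each VLAN its own subnet; the router then forwards packets between them
ifconfig vlan10 inet 10.0.10.1/24
ifconfig vlan20 inet 10.0.20.1/24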
In most cases where pfSense is deployed as a router on mid-sized and large networks, it would be used to connect different LAN segments; however, it could also be used as a WAN router, in which case pfSense's function would be to provide a private WAN connection to the end user.

Another possible deployment scenario is to use pfSense as a switch. If you have multiple interfaces on your pfSense system and bridge them together, pfSense can function as a switch. This is a far less common scenario, however, for several reasons:

Using pfSense as a switch is generally not cost-effective. You can purchase a 5-port Ethernet switch for less than what it would cost to purchase the hardware for a pfSense system. Buying a commercially available switch will also save you money in the long run, as it will likely consume far less power than whatever computer you would be using to run pfSense.

Commercially available switches will likely outperform pfSense, as pfSense will process all packets that pass between ports, while a typical Ethernet switch handles this locally with dedicated hardware made specifically for passing data between ports quickly. While you can disable filtering entirely in pfSense if you know what you're doing, you will still be limited by the speed of the bus on which your network cards reside, whether it is PCI, PCI-X, or PCI Express (PCI-e).

There is also the administrative overhead of using pfSense as a switch. Simple switches are designed to be plug-and-play, and setting up these switches is as easy as plugging in your Ethernet cables and the power cord. Managed switches typically enable you to configure settings at the console and/or through a web interface, but in many cases, configuration is only necessary if you want to modify the operation of the switch. If you use pfSense as a switch, however, some configuration will be required.

If none of this intimidates you, then feel free to use pfSense as a switch. While you're not likely to achieve the performance level or cost savings of a commercially available switch, you will likely learn a great deal about pfSense and networking in the process. Moreover, advances in hardware, such as low-power consumption computers, could make using pfSense as a switch viable at some point in the future.

Yet another possibility is using pfSense as a wireless router/access point. A sizable proportion of modern networks incorporate some type of wireless connectivity. Connecting to networks wirelessly is not only easier, but in some cases, running Ethernet cable is not a realistic option. With pfSense, you can add wireless networking capabilities to your system by adding a wireless network card, provided that the card is supported by FreeBSD.

Generally, however, using pfSense as a wireless router or access point is not the best option. Support for wireless network cards in FreeBSD leaves something to be desired: support for the IEEE 802.11b and g standards is adequate, but support for 802.11n and 802.11ac is not very good. A more likely solution is to buy a wireless router (even if it is one of the aforementioned consumer-grade units), set it up to act solely as an access point, connect it to the LAN port of your pfSense system, and let pfSense act as a Dynamic Host Configuration Protocol (DHCP) server. A typical router will work fine as a dedicated wireless access point, and such routers are more likely to support the latest wireless networking standards than pfSense. Another possibility is to buy a dedicated wireless access point. These are generally inexpensive, and some have such features as multiple SSIDs, which allow you to set up multiple wireless networks (for example, you could have a separate guest network which is completely isolated from other local networks). Using pfSense as a router, in combination with a commercial wireless access point, is likely the least troublesome option.
Hardware requirements and sizing guidelines

Once you have decided where to deploy pfSense on your network, you should have a clearer idea of what your hardware requirements are. As a minimum, you will need a CPU, motherboard, memory (RAM), some form of disk storage, and at least two network interfaces (unless you are opting for a router on a stick setup, in which case you only need one network interface). You may also need one or more optional interfaces.

Minimum specifications

The starting point for our discussion on hardware requirements is the pfSense minimum specifications. As of January 2016, the minimum hardware requirements are as follows (these specifications are from the official pfSense site, pfsense.org):

CPU – 500 MHz (1 GHz recommended)
RAM – 256 MB (1 GB recommended)

There are two architectures currently supported by pfSense: i386 (32-bit) and amd64 (64-bit). Three separate images are provided for these architectures: CD, CD on a USB memstick, and embedded. There is also an image for the Netgate RCC-VE 2440 system. A pfSense installation requires at least 1 GB of disk space. If you are installing to an embedded device, you can access the console through either a serial or VGA port.

A step-by-step installation guide for the pfSense Live CD can be found on the official pfSense website at: https://doc.pfsense.org/index.php/PfSense_IO_installation_step_by_step. Version 2.3 eliminated the Live CD, which allowed you to try out pfSense without installing it onto other media. If you really want to use the Live CD, however, you could use a pre-2.3 image (version 2.2.6 or earlier). You can always upgrade to the latest version of pfSense after installation.

Installation onto either a hard disk drive (HDD) or an SSD is the most common option for a full install of pfSense, whereas embedded installs typically use CF, SD, or USB media. A full install of the current version of pfSense will fit onto a 1 GB drive, but will leave little room for the installation of packages or for log files. Any activity that requires caching, such as running a proxy server, will also require additional disk space.

The last installation option is installation onto an embedded system. For the embedded version, pfSense uses NanoBSD, a tool for installing FreeBSD onto embedded systems. Such an install is ideal for a dedicated appliance (for example, a VPN server), and is geared toward fewer file writes. However, embedded installs cannot run some of the more interesting packages.

Hardware sizing guidelines

The minimum hardware requirements are general guidelines, and you may want to exceed these minimums based on different factors. It may be useful to consider these factors when determining what CPU, memory, and storage device to use. For the CPU, requirements increase for faster Internet connections. Guidelines for the CPU and network cards can be found at the official pfSense site at http://pfsense.org/hardware/#requirements. The following general guidelines apply:

The minimum hardware specifications (Intel/AMD CPU of 500 MHz or greater) are valid up to 20 Mbps.
CPU requirements begin to increase at speeds greater than 20 Mbps.
Connections of 100 Mbps or faster will require PCI-e network adapters to keep up with the increased network throughput.

If you intend to use pfSense to bridge interfaces (for example, if you want to bridge a wireless and wired network, or if you want to use pfSense as a switch), then the PCI bus speed should be considered. The PCI bus can easily become a bottleneck, so in such scenarios, using PCI-e hardware is the better option, as it offers up to 31.51 GB/s (for PCI-e v. 4.0 on a 16-lane slot) versus 533 MB/s for the fastest conventional PCI buses.
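If you are unsure what NICs a candidate system has, or which bus they sit on, FreeBSD (and therefore a pfSense console shell) can tell you. A small sketch, assuming shell access to the box:

# List PCI devices with vendor details; NIC entries report class = network
pciconf -lv | grep -B 4 network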
If you plan on using pfSense as a VPN server, then you should take into account the effect VPN usage will have on the CPU. Each VPN connection requires the CPU to encrypt traffic, and the more connections there are, the more the CPU will be taxed. Generally, the most cost-effective solution is to use a more powerful CPU, but there are ways to reduce the CPU load from VPN traffic. Soekris has the vpn14x1 product range; these cards offload from the CPU the computationally intensive tasks of encryption and compression. AES-NI acceleration of IPsec also significantly reduces the CPU requirements.

If you have hundreds of simultaneous captive portal users, you will require slightly more CPU power than you would otherwise. Captive portal usage does not put as much of a load on the CPU as VPN usage, but if you anticipate having a lot of captive portal users, you will want to take this into consideration.

If you're not a power user, 256 MB of RAM might be enough for your pfSense system. This, however, would leave little room for the state table (where, as mentioned earlier, active connections are tracked). Each state requires about 1 KB of memory, which is less memory than some consumer-grade routers require, but you still want to be mindful of RAM if you anticipate having a lot of simultaneous connections. The other components of pfSense require 32 to 48 MB of RAM, and possibly more, depending on which features you are using, so you have to subtract that from the available memory when calculating the maximum state table size.

RAM      Maximum connections (states)
256 MB   ~22,000
512 MB   ~46,000
1 GB     ~93,000
2 GB     ~190,000
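Since pfSense's firewall is PF, you can check how close a running system is to its state table limit from a console shell (option 8 on the console menu gives you a shell). A quick sketch:

# Show state table statistics, including the current number of entries
pfctl -si | grep -A 2 "State Table"
# Show the configured hard limits, including the maximum number of states
pfctl -sm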
Installing packages can also increase your RAM requirements; Snort and ntop are two such examples. You should also probably not install packages if you have limited disk space. The amount of disk space, as well as the form of storage you utilize, will likely be dictated by what packages you install and what forms of logging you have enabled, since some packages require more disk space than others. Proxies such as Squid store web pages, and anti-spam programs such as pfBlocker download lists of blocked IP addresses, and therefore both require additional disk space. Proxies also tend to perform a great deal of read and write operations; therefore, if you are going to install a proxy, disk I/O performance is something you should take into consideration.

You may be tempted to opt for the cheapest NICs. However, inexpensive NICs often have complex drivers that offload most of the processing to the CPU; they can saturate your CPU with interrupt handling, thus causing missed packets. Cheaper network cards typically have smaller buffers (often no more than 300 KB), and when the buffers become full, packets are dropped. In addition, many of them do not support Ethernet frames larger than the maximum transmission unit (MTU) of 1500 bytes. NICs that do not support larger frames cannot send or receive jumbo frames (frames with an MTU larger than 1500 bytes), and therefore they cannot take advantage of the performance improvement that using jumbo frames would bring. Such NICs will also often have problems with VLAN traffic, since a VLAN tag increases the size of the Ethernet header beyond the traditional size limit.

The pfSense project recommends NICs based on Intel chipsets, and there are several reasons why such NICs are considered reliable. They tend to have adequately sized buffers and do not have problems processing larger frames. Moreover, the drivers tend to be well written and work well with Unix-based operating systems.

For a typical pfSense setup, you will need two network interfaces: one for the WAN and one for the LAN. Each additional subnet (for example, for a guest network) will require an additional interface, as will each additional WAN connection. It should be noted that you don't need an additional card for each interface added; you can buy a multiport network card (most such cards have either 2 or 4 ports). You don't need to buy new NICs for your pfSense system either; in fact, it is often economical to buy used NICs, and except in rare cases, the performance level will be the same.

If you want to incorporate wireless connectivity into your network, you may consider adding a wireless card to your pfSense system. As mentioned earlier, however, the likely better option is to use pfSense in conjunction with a separate wireless access point. If you do decide to add a wireless card to your system and configure it for use as an access point, you will want to check the FreeBSD hardware compatibility list before making a purchase.

Using a laptop

You might be wondering if using an old laptop as a pfSense router is a good idea. In many respects, laptops are good candidates for being repurposed into routers. They are small and energy efficient, and when the AC power shuts off, they run on battery power, so they have a built-in uninterruptible power supply (UPS). Moreover, many old laptops can be purchased relatively cheaply at thrift shops and online. There is, however, one critical limitation to using a laptop as a router: in almost all cases, they only have one Ethernet port. Moreover, there is often no realistic way to add another NIC, as there are no expansion slots that will take one (some laptops, however, do have PCMCIA slots that will accept a second NIC). There are gigabit USB-to-Ethernet adapters (for USB 3.0), but this is not much of a solution, as such adapters do not have the reliability of traditional NICs. Most laptops do not have Intel NICs either; high-end business laptops are usually the exception to this rule.

There is a way to use a laptop with a single Ethernet port as a pfSense router, and that is to configure pfSense using VLANs. As mentioned earlier, VLANs, or virtual LANs, allow us to use a single NIC to serve multiple subnets. Thus, we can set up two VLANs on our single port: virtual LAN #1, which we will use for the WAN interface, and virtual LAN #2, which we will use for the LAN interface. The one disadvantage of this setup is that you must use a managed switch to make it work. Managed switches are switches that can be configured and managed, often through both a command-line and a web interface, and they typically have a wide range of capabilities, such as VLANs. Since unmanaged switches forward traffic to all other ports, they are unsuitable for this setup.
You can, however, connect an unmanaged switch to the managed switch to add ports. Keep in mind that managed switches are expensive (more expensive than dual- and quad-port network cards), and if there are multiple VLANs on a single link, that link can easily become overloaded. In scenarios where you can add a network card, doing so is usually the better option. If you have an existing laptop, however, a managed switch with VLANs is a workable solution.

Introduction to VLANs and DNS

Two of the areas in which pfSense excels are in incorporating functionality to implement VLANs and DNS servers. First, let's consider why we would want to implement these.

Introduction to VLANs

The standard way to partition your network is to use a router to pass traffic between networks, and to configure a separate switch (or switches) for each network. In this scenario, there is a one-to-one relationship between the number of network interfaces and the number of physical ports. This works well in many network deployments, especially in small networks. As the network gets larger, however, there are issues with this type of configuration. As the number of users on the network increases, we are faced with a choice of either having more users on each subnet or increasing the number of subnets (and therefore the number of network interfaces on the router). Both solutions create new problems:

Each subnet makes up a separate broadcast domain. Increasing the number of users on a subnet increases the amount of broadcast traffic, which can bog down our network.
Each user on a subnet can use a packet sniffer to sniff network traffic, which creates a security problem.
Segmenting the network by adding subnets tends to be costly, as each new subnet requires a separate switch.

VLANs offer us a way out of this dilemma with relatively little downside. VLANs allow us to divide traffic on a single network interface (for example, LAN) into several separate networks by adding a special tag to frames entering the network. This tag, known as an 802.1q tag, identifies the VLAN to which the device belongs. Dividing network traffic in such a way offers several advantages:

As each VLAN constitutes a separate broadcast domain, broadcast domains are now smaller, and thus there is less network traffic.
Users on one VLAN cannot sniff traffic from another VLAN, even if they are on the same physical interface, thus improving security.

Using VLANs requires us to have a managed switch on the interface on which VLANs exist. This is somewhat more expensive than an unmanaged switch, but the cost differential between a managed and an unmanaged switch is less than it might be if we had to buy additional switches for new subnets. As a result, VLANs are often the most efficient way of making our networks more scalable. Even if your network is small, it might be advantageous to at least consider implementing a VLAN, as you will likely want to make future growth as seamless as possible.

Introduction to DNS

The DNS provides a means of converting an easy-to-remember domain name into a numerical (IP) address. It thus provides us with a phone book for the Internet, as well as a structure that is both hierarchical (there is the root node, which covers all domain names, top-level domains like .com and .net, domain names, and subdomain names) and decentralized (the Internet is divided into different zones, and a name server is authoritative for a specific zone). In a home or SOHO environment, we might not need to implement our own DNS server.
In these scenarios, we could use our ISP's DNS servers to resolve Internet hostnames. For local hostnames, we could rely on NetBIOS under Windows, the Berkeley Internet Name Domain service (BIND) under Linux (using a configuration that does not require us to run name servers), or the equivalent facilities under Mac OS X. Another option for mapping hostnames to IP addresses on the local network would be to use HOSTS.TXT, a text file which contains a list of hostnames and corresponding IP addresses. But there are certain factors that may prompt us to set up our own DNS server for our networks:

We may have chosen to utilize HOSTS.TXT for name resolution, but maintaining the HOSTS.TXT file on each of the hosts on our network may prove to be too difficult. If we have roaming clients, it may even be impossible.
If your network is hosting resources that are available externally (for example, an FTP server or a website), and you are constantly making changes to the IP addresses of these resources, you will likely find it much easier to update your own data than to submit forms to third parties and wait for them to implement the changes.
Although your DNS server will only be authoritative for your domains, it can cache DNS data from the rest of the Internet. On your local network, this cached data can be retrieved much faster than DNS data from a remote DNS server. Thus, maintaining your own DNS server should result in faster name resolution.
If you anticipate ever having to implement a public DNS server, a private DNS server can be a good learning experience, and if you make mistakes in implementing a private DNS server, the consequences are not as far-reaching as they would be with a public one.

Implementing a DNS server with pfSense is relatively easy. By using the DNS resolver, we can have pfSense answer DNS queries from local clients, and we can also have pfSense utilize any currently available DNS servers. We can also use third-party packages such as dns-server (a pfSense version of TinyDNS) to add DNS server functionality.

The best practices for installation and configuration

Once you have chosen your hardware and which version you are going to install, you can download pfSense. Browse to the Downloads section of pfsense.org, select the appropriate computer architecture (32-bit, 64-bit, or Netgate ADI) and the appropriate platform (Live CD, memstick, or embedded), and you should be presented with a list of mirrors. Choose the closest one for the best performance.

You will also want to download the MD5 checksum file in order to verify the integrity of the downloaded image. Windows has several utilities for displaying MD5 hashes for a file. Under BSD, generating the MD5 hash is as easy as typing the following command:

md5 pfSense-LiveCD-2.2.6-RELEASE-amd64.iso

This command would generate the MD5 checksum for the 64-bit Live CD version of pfSense 2.2.6. Compare the resulting hash with the contents of the .md5 file downloaded from the pfSense website.
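On most Linux systems, the equivalent utility is md5sum. A quick sketch (the .md5 filename here follows the usual naming convention on the download mirrors; adjust to whatever you actually downloaded):

# Generate the hash of the downloaded image
md5sum pfSense-LiveCD-2.2.6-RELEASE-amd64.iso
# Print the expected hash and compare the two values by eye
cat pfSense-LiveCD-2.2.6-RELEASE-amd64.iso.md5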
If you are doing a full install from the Live CD or memory stick, then you just need to write the ISO to the target media, boot from either the CD or memory stick, perform some basic configuration, and then invoke the installer. The embedded install is done from a compact flash (CF) card, and console data can be sent to either a serial port or the VGA port, depending on which embedded configuration you chose. If you use the serial port version, you will need to connect the embedded system to another computer with a null modem cable.

Troubleshooting installation

In most cases, you should be able to invoke the pfSense installer and begin installing pfSense onto the system. In some cases, however, pfSense may not boot from the target media, or the system may hang during the boot process.

If pfSense is not booting at all, you may want to check that the system is set up to boot from the target media. This can be done by changing the boot sequence in the BIOS settings (which can be accessed during system boot, usually by hitting the Delete key). Most computers also have a means of choosing the boot device on a one-time basis during the boot sequence; check your motherboard's manual for how to do this. If the system is already set up to boot from the target media, then you may want to verify the integrity of the pfSense image again, or repeat the process of writing the image to the target media.

The initial pfSense boot menu when booting from a CD or USB flash drive.

If the system hangs during the boot process, there are several options you can try. The first menu that appears as pfSense boots has several options, the last two of which are Kernel and Configure Boot Options. Kernel allows you to select which kernel to boot from among the available kernels; if you have reason to suspect that the FreeBSD kernel being used is not compatible with your hardware, you might want to switch to the older version. Configure Boot Options launches a menu with several useful options. A description of these options can be found at: http://www.freebsd.org/doc/handbook/book.html. Toggling [A]CPI Support to off can help in some cases, as ACPI's hardware discovery and configuration capabilities may cause the pfSense boot process to hang. If turning this off doesn't work, you could try booting in Safe [M]ode, and if all else fails, you can toggle [V]erbose mode to On, which will give you detailed messages while booting.
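If disabling ACPI gets you through the boot, you can make the setting permanent once pfSense is installed. A hedged sketch, assuming the standard FreeBSD loader mechanism (pfSense manages /boot/loader.conf itself, so local overrides generally belong in /boot/loader.conf.local; create the file if it does not exist):

# Disable ACPI on every subsequent boot
echo 'hint.acpi.0.disabled="1"' >> /boot/loader.conf.local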
The two options presented after boot are [R]ecovery and [I]nstaller. The [R]ecovery mode provides a shell prompt and helps recover from a crash by retrieving config.xml from a crashed hard drive. [I]nstaller allows you to install pfSense onto a hard drive or other media, and is invoked by default after the timeout period.

The installer provides you with the option to do either a quick install or a custom install. In most cases, the quick install option can be used. Invoking the custom install option is only recommended if you want to install pfSense on a drive other than the first drive on the target system, or if you want to install multiple operating systems on the system. It is not likely that either of these situations will apply, unless you are installing pfSense for evaluation purposes (and in such cases, you would probably have an easier time running pfSense on a virtual machine).

If you were unable to install pfSense onto the target media, you may have to troubleshoot your system and/or installation media. If you are attempting to install from the CD, your optical drive may be malfunctioning, or the CD may be faulty. You may want to start with a known good bootable disc and see if the system will boot off it. If it can, then your pfSense disc may be at fault, and burning the disc again may solve the problem. If, however, your system cannot boot off the known good disc, then the optical drive itself, or the cables connecting the optical drive to the motherboard, may be at fault.

In some cases, however, none of the aforementioned possibilities holds true, and it is possible that the FreeBSD boot loader will not work on the target system. If so, then you could opt to install pfSense on a different system. Another possibility is to install pfSense onto a hard drive on a separate system, and then transfer the hard drive into the target system. In order to do this, go through the installation process on another system as you would normally, until you get to the Assign Interfaces prompt. When the installer asks if you want to assign VLANs, type n. Type exit at the Assign Interfaces prompt to skip the interface assignment. Proceed through the rest of the installation; then power down the system and transfer the hard drive to the target system. Assuming that the pfSense hard drive is in the boot sequence, the system should boot pfSense and detect the system's hardware correctly. Then you should be able to assign network interfaces, and the rest of the configuration can proceed as usual.

If you have not encountered any of these problems, the software should be installed on the target system, and you should get a dialog box telling you to remove the CD from the optical drive tray and press Enter. The system will now reboot, and you will be booting into your new pfSense install for the first time.

pfSense configuration

Configuration takes place in two phases. Some configuration must be done at the console, including interface configuration and interface IP address assignment. Other configuration steps, such as VLAN and DHCP setup, can be done both at the console and within the web GUI.

Configuration from the console

On boot, you should eventually see a menu identical to the one seen on the CD version, with the boot multi or single user options and other options. After a timeout period, the boot process will continue, and you will get an options menu that is also identical to the CD version, except that option 99 (the installation option) will not be there. You should select 1 from the menu to begin interface assignment. This is where the network cards installed in the system are given their roles as WAN, LAN, and optional interfaces (OPT1, OPT2, and so on). If you select this option, you will be presented with a list of network interfaces. This list provides four pieces of information:

pfSense's device name for the interface (fxp0, em1, and so on)
The MAC address of the interface
The link state of the interface (up if a link is detected; down otherwise)
The manufacturer and model of the interface (Intel PRO 1000, for example)

As you are probably aware, generally speaking, no two network cards have the same MAC address, so each of the interfaces in your system should have a unique MAC address. To begin the configuration, press 1 and Enter for the Assign Interfaces option. After that, a prompt will show up for VLAN configuration. If you want to set up VLANs now, type y; otherwise, type n and press Enter. Keep in mind that you can always configure VLANs later on.

The interfaces must now be configured, and you will be prompted for the WAN interface first. If you only configure one interface, it will be assigned to the WAN, and you will subsequently be able to log in to pfSense through this port. This is not what you would normally want, as the WAN port is typically accessible from the other side of the firewall. Once at least one other interface is configured, you will no longer be able to log in to pfSense from the WAN port. Unless you are using VLANs, you will have to set up at least two network interfaces.
In pfSense, network interfaces are assigned rather cryptic device names (for example, fxp0, em1, and so on), and it is not always easy to know which ports correspond to which device names. One way of solving this problem is to use the automatic interface assignment feature. To do this, unplug all network cables from the system, and then type a and press Enter to begin auto-detection. The WAN interface is the first interface to be detected, so plug a cable into the port you intend to be the WAN interface. The process is repeated with each successive interface: the LAN interface is configured next, then each of the optional interfaces (OPT1, OPT2). If auto-detection does not work, or you do not want to use it, you can always choose manual configuration. You can also reassign network interfaces later on, so even if you make a mistake at this step, the mistake can be easily fixed. Once you have finished configuration, type y at the Do you want to proceed? prompt, or type n and press Enter to re-assign the interfaces.

Option 2 on the menu is Set interface(s) IP address, and you will likely want to complete this step as well. When you invoke this option, you will be prompted to specify which interface's IP address is to be set. If you select the WAN interface, you will be asked if you want to configure the IP address via DHCP. In most scenarios, this is probably the option you want to choose, especially if pfSense is acting as a firewall; in that case, the WAN interface will receive an IP address from your ISP's DHCP server. For all other interfaces (or if you choose not to use DHCP on the WAN interface), you will be prompted to enter the interface's IPv4 address. The next prompt will ask you for the subnet bit count. In most cases, you'll want to enter 8 if you are using a Class A private address, 16 for Class B, and 24 for Class C, but if you are using classless subnetting (for example, to divide a Class C network into two separate networks), then you will want to set the bit count accordingly.
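For example, splitting a Class C-sized network such as 192.168.1.0/24 into two equal subnets means a bit count of 25 (the addresses here are purely illustrative):

192.168.1.0/25: usable hosts 192.168.1.1 through 192.168.1.126
192.168.1.128/25: usable hosts 192.168.1.129 through 192.168.1.254

At the console prompt, you would enter a subnet bit count of 25 for an interface on either of these networks.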
You will also be prompted for the IPv4 gateway address (any interface with a gateway set is a WAN, and pfSense supports multiple WANs); if you are not configuring the WAN interface(s), you can just press Enter here. Next, you will be prompted to provide the address, subnet bit count, and gateway address for IPv6; if you want your network to fully utilize IPv6 addresses, you should enter them here. We have now configured as much as we need to from the console (actually, more than we have to, since we really only have to configure the WAN interface from the console). The remainder of the configuration can be done from the pfSense web GUI.

Configuration from the web GUI

The pfSense web GUI can only be accessed from another PC. If the WAN was the only interface assigned during the initial setup, then you will be able to access pfSense through the WAN IP address. Once one of the local interfaces is configured (typically the LAN interface), pfSense can no longer be accessed through the WAN interface; you will, however, be able to access pfSense from the local side of the firewall (typically through the LAN interface). In either case, you can access the web GUI by connecting another computer to the pfSense system, either directly (with a crossover cable) or indirectly (through a switch), and then typing either the WAN or LAN IP address into the connected computer's web browser.

The pfSense 2.3 web GUI login screen.

When you initially log in to pfSense, the default username/password combination is admin/pfsense. On your first login, the Setup Wizard will begin automatically. Click on the Next button to begin configuration. The first screen provides a link for information about a pfSense Gold subscription; you can click on the link to sign up, or click on the Next button.

On the next screen, you will be prompted to enter the hostname of the router as well as the domain. Hostnames can contain letters, numbers, and hyphens, but must begin with a letter. If you have a domain, you can enter it in the appropriate field. In the Primary DNS Server and Secondary DNS Server fields, you can enter your DNS servers. If you are using DHCP for your WAN, you can probably leave these fields blank, as they will usually be assigned automatically by your ISP. If you have alternate DNS servers you wish to use, you can enter them here. I have entered 8.8.8.8 and 8.8.4.4 as the primary and secondary DNS servers (these are two DNS servers run by Google that conveniently have easy-to-remember IP addresses). You can keep the Override DNS checkbox checked unless you have reason to use DNS servers other than the ones assigned by your ISP. Click on Next when finished.

The next screen will prompt you for the Network Time Protocol (NTP) server as well as the local time zone. You can keep the default value for the server hostname for now. For the Timezone field, you should select the zone which matches your location, and then click on Next.

The next screen of the wizard is the WAN configuration page. You will be prompted to select the WAN type. You can select either DHCP (the default type) or Static. If your pfSense system is behind another firewall and it is not going to receive an IP address from an upstream DHCP server, then you probably should choose Static. If pfSense is going to be a perimeter firewall, however, then DHCP is likely the correct setting, since your ISP will probably dynamically assign an IP address (this is not always the case, as you may have an IP address statically assigned to you by your ISP, but it is the more likely scenario). If you are not sure which WAN type to use, you will need to obtain this information from your ISP. The other choices are PPPoE (Point-to-Point Protocol over Ethernet) and PPTP (Point-to-Point Tunneling Protocol).

The MAC address field allows you to enter a MAC address that is different from the actual MAC address of the WAN interface. This can be useful if your ISP will not recognize an interface with a different MAC address than the device that was previously connected, or if you want to acquire a different IP address (changing the MAC address will cause the upstream DHCP server to assign a different address). If you use this option, make sure the portion of the address reserved for the Organizationally Unique Identifier (OUI) is a valid OUI, in other words, an OUI assigned to a network card manufacturer. (The OUI portion is the first three bytes of a MAC-48 address.)

The next few fields can usually be left blank. Maximum Transmission Unit (MTU) allows you to change the MTU size if necessary. DHCP hostname allows you to send a hostname to your ISP when making a DHCP request, which is useful if your ISP requires this. Besides DHCP and Static, you can select PPTP or PPPoE as your WAN type.
If you choose PPPoE, then there will be fields for a PPPoE Username, PPPoE Password, and PPPoE Server Name. The PPPoE dial-on-demand checkbox allows you to connect to your ISP only when a user requests data that requires an Internet connection, and PPPoE Idle timeout specifies how long the connection will be kept open after transmitting data when this option is invoked.

The Block RFC1918 Private Networks checkbox, if checked, will block registered private networks (as defined by RFC 1918) from connecting to the WAN interface. The Block Bogon Networks option blocks traffic from reserved and/or unassigned IP addresses. For the WAN interface, you should check both options unless you have special reasons for not invoking them. Click the Next button when you are done.

The next screen provides fields in which you can change the LAN IP address and subnet mask, but only if you configured the LAN interface previously. You can keep the default, or change it to another value within the private address blocks. You may want to choose an address range other than the very common 192.168.1.x in order to avoid a conflict. Be aware that if you change the LAN IP address value, you will also need to adjust your PC's IP address, or release and renew its DHCP lease, when finished with the network interface. You will also have to change the pfSense IP address in your browser to reflect the change.

The final screen of the pfSense Setup Wizard allows you to change the admin password, which you should probably do. Enter the password, enter it again for confirmation in the next edit box, and click on Next. On the following screen, there will be a Reload button; click on Reload to reload pfSense with the new changes. Once you have completed the wizard, you should have network connectivity. Although there are other means of making changes to pfSense's configuration, if you want to repeat the wizard, you can do so by navigating to System | Setup Wizard. Completion of the wizard will take you to the pfSense dashboard.

The pfSense dashboard, redesigned for version 2.3.

Configuring additional interfaces

By now, both the WAN and LAN interface configuration should be complete. Although additional interface configuration can be done at the console, it can also be done in the web GUI. To add optional interfaces, navigate to Interfaces | (assign). The Interface assignments tab will show a list of assigned interfaces, and at the bottom of the table there will be an Available network ports entry, with a corresponding drop-down box listing the unassigned network ports. These will have device names such as fxp0, em1, and so on. To assign an unused port, select the port you want to assign from the drop-down box, and click on the + button to the right. The page will reload, and the new interface will be the last entry in the table. The name of the interface will be OPTx, where x equals the number of optional interfaces. By clicking on the interface name, you can configure the interface.

Nearly all the settings here are similar to the settings that were available on the WAN and LAN configuration pages in the pfSense Setup Wizard. Among the options under the General Configuration section that are not available in the setup wizard are MSS (Maximum Segment Size) and Speed and duplex. Normally, MSS should remain unchanged, although you can change this setting if your Internet connection requires it.
If you click on the Advanced button under Speed and duplex, a drop-down box will appear in which you can explicitly set the speed and duplex for the interface. Since virtually all modern network hardware is capable of automatically selecting the correct speed and duplex, you will probably want to leave this unchanged.

If you have selected DHCP as the configuration type, then there are several options in addition to the ones available in the setup wizard. Alias IPv4 address allows you to enter a fixed IP address for the DHCP client. The Reject leases from field allows you to specify the IP address or subnet of an upstream DHCP server to be ignored.

Clicking on the Advanced checkbox in the DHCP client configuration causes several additional options to appear in this section of the page. The first is Protocol Timing, which allows you to control DHCP protocol timings when requesting a lease; you can also choose several presets (FreeBSD, pfSense, Clear, or Saved Cfg) using the radio buttons on the right. The next option in this section is Lease Requirements and Requests. Here you can specify send, request, and require options when requesting a DHCP lease, which is useful if your ISP requires these options. The last section is Option Modifiers, where you can add DHCP option modifiers, which are applied to an obtained DHCP lease. There is a second checkbox at the top of this section called Config File Override. Checking this box allows you to enter a DHCP client configuration file; if you use this option, you must specify the full absolute path of the file.

Starting with pfSense version 2.2.5, there is support for IPv6 with DHCP (DHCP6). If you are running 2.2.5 or above, there will be a section on the page called DHCP6 client configuration. The first setting is Use IPv4 connectivity as parent interface, which allows you to request an IPv6 address over IPv4. The second is Request only an IPv6 prefix. This is useful if your ISP supports Stateless Address Autoconfiguration (SLAAC). In this case, instead of the usual procedure in which the DHCP server assigns an IP address to the client, the DHCP server only sends a prefix, and the host may generate its own IP address and test the uniqueness of the generated address in the intended addressing scope. By default, the IPv6 prefix is 64 bits, but you can change that by altering the DHCPv6 Prefix Delegation size in the corresponding drop-down box. The last setting is Send IPv6 prefix hint, which allows you to request the specified prefix size from your ISP.

The advanced DHCP6 client configuration section of the interface configuration page. This section appears if DHCP6 is selected as the IPv6 configuration type.

Checking the Advanced checkbox in the heading of this section displays the advanced DHCP6 options. If you check the Information Only checkbox on the left, pfSense will send requests for stateless DHCPv6 information. You can specify send and request options, just as you can for IPv4. There is also a Script field where you can enter the absolute path to a script that will be invoked on certain conditions. The next options are the Identity Association Statement checkboxes. The Non-Temporary Address Allocation checkbox results in normal (that is, not temporary) IPv6 addresses being allocated for the interface. The Prefix Delegation checkbox causes a set of IPv6 prefixes to be allocated from the DHCP server. The next set of options, Authentication Statement, allows you to specify authentication parameters to the DHCP server.
The Authname parameter allows you to specify a string, which in turn specifies a set of parameters. The remaining parameters are of limited usefulness in configuring a DHCP6 client, because each has only one allowed value, and leaving them blank will result in the allowed value being used. If you are curious as to what these values are, here they are:

Parameter   Allowed value                               Description
Protocol    Delayed                                     The DHCPv6 delayed authentication protocol
Algorithm   hmac-md5, HMAC-MD5, hmacmd5, or HMACMD5     The HMAC-MD5 authentication algorithm
rdm         Monocounter                                 The replay protection method; only monocounter is available

Finally, Key info Statement allows you to enter a secret key. The required fields are key id, which identifies the key, and secret, which provides the shared secret. key name and realm are arbitrary strings and may be omitted. expire may be used to specify an expiration time for the key, but if it is omitted, the key will never expire.

The last section on the page is identical to the interface configuration page in the Setup Wizard, and contains the Block Private Networks and Block Bogon Networks checkboxes. Normally, these are checked for WAN interfaces, but not for other interfaces.

General setup options

You can find several configuration options under System | General Setup. Most of these are identical to settings that can be configured in the Setup Wizard (Hostname, Domain, DNS servers, Timezone, and NTP server). There are two additional settings available: the Language drop-down box allows you to select the web configurator language, and under the Web Configurator section, there is a Theme drop-down box that allows you to select the theme. The default theme of pfSense is perfectly adequate, but you can select another one here.

pfSense 2.3 also adds new options to control the look and feel of the web interface; these settings are likewise found in the Web Configurator section of the General Setup page. The Top Navigation drop-down box allows you to choose whether the top navigation scrolls with the page or remains anchored at the top as you scroll. The Dashboard Columns option allows you to select the number of columns on the dashboard page (the default is 2).

The next set of options is Associated Panels Show/Hide. These options control the appearance of certain panels on the Dashboard and System Logs pages. The options are:

Available Widgets: Checking this box causes the Available Widgets panel to appear on the Dashboard. Prior to version 2.3, the Available Widgets panel was always visible on the Dashboard.
Log Filter: Checking this box causes the Advanced Log Filter panel to appear on the System Logs page. Advanced Log Filter allows you to filter the system logs by time, process, PID, and message.
Manage Log: Checking this box causes the Manage General Log panel to appear on the System Logs page. The Manage General Log panel allows you to control the display of the logs, how big the log file may be, and the formatting of the log file, among other things.

The last option on this page, Left Column Labels, allows you to select/toggle the first item in a group by clicking on the left column, if checked. Click on Save at the bottom of the page to save any changes.

Advanced setup options

Under System | Advanced, there are a number of options that you will probably want to configure before completing the initial setup. There are six separate tabs here, all with multiple options; we won't cover all of them, but we will cover the more common ones.
The first setting allows you to choose between HTTP and HTTPS for the web configurator. If you plan on making the pfSense web GUI accessible from the WAN side, you will definitely want to choose HTTPS in order to encrypt access to the web GUI. Even if the web GUI will only be accessible over local networks, you will probably want to choose HTTPS. Modern web browsers will complain about the SSL certificate the first time you access the web GUI, but most of them will allow you to create an exception. The next setting, SSL certificate, allows you to choose a certificate from a drop-down list of available certificates. You can choose webConfigurator default, or you can add another certificate (by navigating to System | Cert Manager and adding one) and use it instead.

The next important setting, also in the Web Configurator section, is the Disable webConfigurator anti-lockout rule checkbox. If left unchecked, access to the web GUI is always allowed on the LAN (or the WAN if the LAN interface has not been assigned), regardless of any user-defined firewall rules. If you check this option and you don't have a user-defined rule to allow access to pfSense, you will lock yourself out of the web GUI.

If you are locked out of the web GUI because of firewall rules, there are several options. The easiest option is probably to restore a previous configuration from the console. You can also reset pfSense to factory defaults, but if you don't mind typing in shell commands, there are less drastic options. One is to add an allow all rule on the WAN interface by typing the following command at the console shell prompt (type 8 at the console menu to invoke the shell):

pfSsh.php playback enableallowallwan

Once you issue this command, you will be able to access the web GUI through the WAN interface. To do so, either connect the WAN port to a network running DHCP (if the WAN uses DHCP), or connect the WAN port to another computer with an IP on the same network (if the WAN has a static IP). Be sure to delete the WAN allow all rule before deploying the system. Another possibility is to temporarily disable the firewall rules with the following shell command:

pfctl -d

Once you have regained access, you can re-enable the firewall rules with this command:

pfctl -e

In any case, you want to make sure your firewall rules are configured correctly before invoking the anti-lockout option. You can reset pfSense to factory defaults by selecting 4 from the console menu. If you need to go back to a previous configuration, you can do that by selecting 15 from the console menu; this option will allow you to select from automatically saved restore points.

The next section is Secure Shell; checking the Enable Secure Shell checkbox makes the console accessible via a Secure Shell (SSH) connection. This makes life easier for admins, but it also creates a security concern. Therefore, it is a good idea to change the default SSH port (the default is 22), which you can do in this section. You can add another layer of security by checking the Disable password login for Secure Shell checkbox. If you invoke this option, you must create authorized SSH keys for each user that requires SSH access. The process for generating SSH keys differs depending on your OS. Under Linux, it is fairly simple. First, enter the following at the command prompt:

ssh-keygen -t rsa

You will receive the following prompt:

Enter file in which to save the key (/home/user/.ssh/id_rsa):

The directory in parentheses will be a subdirectory of your home directory.
You can change the directory or press Enter. The next prompt asks you for a passphrase:

Enter passphrase (empty for no passphrase):

You can enter a passphrase here or just press Enter. You will be prompted to enter the passphrase again, and then the public/private key pair will be generated. The public key will be saved in a file called id_rsa.pub.

Entering SSH keys for a user in the user manager.

The next step is adding the newly generated public key to the admin account in pfSense. Open id_rsa.pub in the text editor of your choice, select the public key, and copy it to the clipboard. Then, in the web GUI, navigate to System | User Manager and click on the Edit user icon for the appropriate user. Scroll down to the Keys section and paste the key into the Authorized SSH keys box. Then click on Save at the bottom of the page. You should now be able to SSH into the admin account without entering the password. Type the following at the command line:

ssh pfsense_address -l admin

Here, pfsense_address is the IP address of the pfSense system. If you specified a passphrase earlier, you will be prompted to enter it in order to unlock the private key; you will not be prompted for the passphrase on subsequent logins. Once you unlock the private key, you should be logged into the console.
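Put together, the whole key-based login flow looks roughly like this (192.168.1.1 is an assumed address; it is pfSense's default LAN address, but substitute your own):

# Generate the key pair, accepting the default file name
ssh-keygen -t rsa
# Display the public key so it can be pasted into the pfSense user manager
cat ~/.ssh/id_rsa.pub
# After saving the key in System | User Manager, log in without a password
ssh -l admin 192.168.1.1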
This can be useful if you have a static route in which traffic enters and leaves pfSense through the same interface, but the source of the traffic is not on the network directly attached to that interface. This option does not apply to traffic whose source and destination are on the same interface; such traffic is intra-network traffic, and firewall rules would not apply to it whether or not this option was invoked.

The next section of the page, Bogon Networks, allows you to select the update frequency of the list of addresses that are reserved, or not yet assigned, by IANA. If someone is trying to access your network from a newly assigned IP address, but the bogon networks list has not yet been updated, they may find themselves blocked. If this is happening on a frequent basis, you may want to change the update frequency.

The next tab, Networking, contains a number of IPv6 options. The Allow IPv6 checkbox must be checked in order for IPv6 traffic to pass (it is checked by default). The next option, IPv6 over IPv4 Tunneling, allows you to enable the transitional encapsulation of IPv6 packets within IPv4 packets. There is also an option called Prefer IPv4 even when IPv6 is available, which will cause IPv4 to be used in cases where a hostname resolves to both IPv4 and IPv6 addresses.

The next tab is called Miscellaneous. The Proxy Support section allows you to specify the URL and port of a remote proxy server, as well as a username and password. The following section, Load Balancing, has two settings. The first setting, Use sticky connections, causes successive connections from the same source to be directed to the same server, instead of being directed to the next web server in the pool, which would be the normal behavior. The timeout period for sticky connections may be adjusted in the adjacent edit box. The default is 0, so a sticky connection expires as soon as the last connection from the source expires. The second setting, Enable default gateway switching, switches from the default gateway to another available gateway when the default gateway goes down. This is not necessary in most cases, since it is easier to provide gateway redundancy with gateway groups.

The Scheduling section has only one option, but it is significant if you use rule scheduling. Checking the Do not kill connections when schedule expires checkbox will cause connections permitted by a rule to survive even after the time period specified by the schedule expires. Otherwise, pfSense will kill all existing connections when a schedule expires.

Upgrading, backing up, and restoring pfSense

You can usually upgrade pfSense from one version to another, although the means of upgrading may differ depending on what platform you are using. So long as the firmware is moving from an older version to a newer version, the upgrade will work unless otherwise noted. Before you make any changes, you should make an up-to-date backup. In the web GUI, you can back up the configuration by navigating to Diagnostics | Backup/Restore. In the Backup Configuration section of the page, set Backup Area to ALL. Then click on Download Configuration and save the file.

Before you upgrade pfSense, it is a good idea to have a plan on how to recover in case the upgrade goes wrong. There is always a chance that an upgrade will leave pfSense in an unusable state. In these cases, it is always helpful to have a backup system available. Also, with advance planning, the firewall can be quickly returned to the previous release. There are three methods for upgrading pfSense.
The first is to download the upgrade binaries from the official pfSense site. The same image options are available as for a full install. Just download the appropriate image, write the image to the target media, and boot the system to be upgraded from the target media. For embedded systems, releases prior to 1.2.3 are not upgradable (in such cases, a full install would be the only way to upgrade), but newer NanoBSD-based embedded images do support upgrades.

The second method is to upgrade from the console. From the console menu, select 13 (the Upgrade from Console option). pfSense will check the repositories to see if there is an update and, if there is, how much more disk space is required; it will also inform you that upgrading will require a reboot, and prompt you to confirm that the upgrade should proceed. Type y and press Enter, and the upgrade will proceed. pfSense will automatically reboot 10 seconds after downloading and installing the upgrade. Rebooting may take slightly longer than it would normally, since pfSense must extract the new binaries from a tarball during the boot sequence.

Upgrading pfSense from the console.

The third method is the easiest way to upgrade your system: from the web GUI. Navigate to Status | Dashboard (this should also be the screen you see when initially logging into the web GUI). The System Information widget should have a section called Version, and this section should tell you:

The current version of pfSense
Whether an update is available

If an update is available, there will be a link to the firmware auto update page; click on this link. Alternatively, you can access this page by navigating to System | Update and clicking on the System Update tab (note that on versions prior to 2.3, this menu option was called Firmware instead of Update). If there is an update available, this page will let you know.

Choosing a firmware branch from the Update Settings tab of the Update option.

The Update Settings tab contains options that may be helpful in some situations. The Firmware Branch section has a drop-down box, allowing you to select either the Stable branch or the Development branch. The Dashboard check checkbox allows you to disable the dashboard auto-update check. Once you are satisfied with these settings, you can click on the Confirm button on the System Update tab. The updating process will then begin, starting with the backup (if you chose that option). Upgrading can take as little as 15 minutes, especially if you are upgrading from one minor version to another. If you are upgrading in a production environment, you will want to schedule your upgrade for a suitable time (either during the weekend or after normal working hours). The web GUI will keep you informed of the status of the update process and when it is complete.

Another means of updating pfSense in the web GUI is to use the manual update feature. To do so, navigate to System | Update and click on the Manual Update tab. Click on the Enable firmware upload button. When you do this, a new section should appear on the page. The Choose file button launches a file dialog box where you can specify the firmware image file. Once you select the file, click on Open to close the file dialog box. There is a Perform full backup prior to upgrade checkbox you can check if you want to back up the system, as well as an Upgrade firmware button that will start the upgrade process.
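If you prefer to confirm the running release from a shell rather than the dashboard, a quick check is possible from the console. This is a minimal sketch; it assumes shell access via console menu option 8 (or SSH) and reads the version file that pfSense ships with:

cat /etc/version

The command prints the installed release (for example, 2.3), which should match the version shown in the System Information widget after a successful upgrade.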
If the update is successful, the System Information widget on the Dashboard should indicate that you are on the current version of pfSense (or the version to which you upgraded, if you invoked the manual update). If something went wrong and pfSense is not functioning properly, and you made a backup prior to updating, you can restore the old version. The available methods of backing up and restoring pfSense are outlined in the next section.

Backing up and restoring pfSense

The following screenshot shows the options related to backing up and restoring pfSense:

Backup and restore options in pfSense 2.3.

You can back up and restore the config.xml file from the web GUI by navigating to Diagnostics | Backup/Restore. The first section, Backup configuration, allows you to back up some or all of the configuration data. There is a drop-down box which allows you to select which areas to back up. There are checkbox options such as Do not backup package information and Encrypt this configuration file. The final checkbox, selected by default, allows you to disable the backup of round robin database (RRD) data, the real-time traffic data which you likely will not want to save. The Download Configuration as XML button allows you to save config.xml to a local drive.

Restoring the configuration is just as easy. In the Restore configuration section of the page, select the area to restore from the drop-down box and browse to the file by clicking on the Choose File button. Specify whether config.xml is encrypted with the corresponding checkbox, and then click the Restore configuration button.

Restoring a configuration with Pre-Flight Install

You may find it necessary to restore an old pfSense configuration, and it is possible that restoring an old configuration from the console or web GUI as described previously in this article will not work. In these cases, there is one more possible way of restoring an old configuration, and that is with a Pre-Flight Install (PFI). A PFI essentially involves the following steps:

Copying a backup config.xml file into a directory called conf on a DOS/FAT-formatted USB drive.
Plugging the USB drive into the system whose configuration is to be restored, and then booting off the live CD.
Installing pfSense from the CD onto the target system.
Rebooting the system, and allowing pfSense to boot (off the target media, not the CD). The configuration should now be restored.

Another option that is useful if you want to retain your configuration while reinstalling pfSense is to choose the menu option Rescue config.xml during the installation process. This allows you to select and load a configuration file from any storage media attached to the system.

Summary

The goal of this article was to provide an overview of how to get pfSense up and running. Completion of this article should give you an idea of where to deploy your pfSense system as well as what hardware to utilize. You should also know how to troubleshoot the most common installation problems, and how to do basic system configuration and interface setup for both IPv4 and IPv6 networks. You should know how to configure pfSense for remote access. Finally, you should know how to upgrade, back up, and restore pfSense.

Resources for Article:

Further resources on this subject:
Configuring the essential networking services provided by pfSense [article]
pfSense: Configuring NAT and Firewall Rules [article]
Upgrading a Home Network to a Small Business System Using pfSense [article]

An Overview of Horizon View Architecture and Components

Packt
06 Sep 2016
18 min read
In this article by Peter von Oven, author of the book Mastering VMware Horizon 7 - Second Edition, we will introduce you to the architecture and infrastructure components that make up the core VMware Horizon solution, concentrating on the virtual desktop elements of Horizon with the Horizon Standard edition, plus the Instant Clone technology that is available in the Horizon Enterprise edition. We are going to concentrate on the core Horizon View functionality of brokering virtual desktop machines that are hosted on a VMware vSphere platform. Throughout the sections of this article we will discuss the role of each of the Horizon View components, explaining how they fit into the overall infrastructure, their role, and the benefits they bring. Once we have explained the high-level concept, we will then take a deeper dive into how that particular component works. As we work through the sections we will also highlight some of the best practices as well as useful hints and tips along the way. We will also cover some of the third-party technologies that integrate with and complement Horizon View, such as antivirus solutions, storage acceleration technologies, and high-end graphics solutions that help deliver a complete end-to-end solution. After reading this article, you will be able to describe each of the components, what part they play within the solution, and why you would use them. (For more resources related to this topic, see here.)

Introducing the key Horizon components

To start with, we are going to introduce, at a high level, the core infrastructure components and the architecture that make up the Horizon View product. We will start with the high-level architecture, as shown in the following diagram, before going on to drill down into each part in greater detail. All of the VMware Horizon components described are included as part of the licensed product, and the features that are available to you depend on whether you have the Standard Edition, the Advanced Edition, or the Enterprise Edition. It's also worth remembering that Horizon licensing also includes ESXi and vCenter licensing to support the ability to deploy the core hosting infrastructure. You can deploy as many ESXi hosts and vCenter servers as you require to host the desktop infrastructure.

High-level architectural overview

In this section, we will cover the core Horizon View features and functionality for brokering virtual desktop machines that are hosted on the VMware vSphere platform. The Horizon View architecture is pretty straightforward to understand, as its foundations lie in the standard VMware vSphere products (ESXi and vCenter). So, if you have the necessary skills and experience of working with this platform, then you are already nearly halfway there. Horizon View builds on the vSphere infrastructure, taking advantage of some of the features of the ESXi hypervisor and vCenter Server. Horizon View requires adding a number of virtual machines to perform the various View roles and functions. An overview of the View architecture for delivering virtual desktops is shown in the following diagram:

View components run as applications that are installed on the Microsoft Windows Server operating system, with the exception of the Access Point, which is a hardened Linux appliance, so they could actually run on physical hardware as well. However, there are a great number of benefits available when you run them as virtual machines, such as delivering HA and DR, as well as the typical cost savings that can be achieved through virtualization.
The following sections will cover each of these roles/components of the View architecture in greater detail, starting with the Horizon View Connection Server.

The Horizon View Connection Server

The Horizon View Connection Server, sometimes referred to as the Connection Broker or View Manager, is the central component of the View infrastructure. Its primary role is to connect a user to their virtual desktop by means of performing user authentication and then delivering the appropriate desktop resources based on the user's profile and user entitlement. When logging on to your virtual desktop, it is the Connection Server that you are communicating with.

How does the Connection Server work?

A user will typically connect to their virtual desktop machine from their end point device by launching the View Client, but equally they could use browser-based access. So how does the login process work? Once the View Client has launched (shown as 1 in the diagram on the following page), the user enters the address details of the View Connection Server, which in turn responds (2) by asking them to provide their network login details (their Active Directory (AD) domain username and password). It's worth noting that Horizon View now supports the following AD Domain functional levels:

Windows Server 2003
Windows Server 2008 and 2008 R2
Windows Server 2012 and 2012 R2

Based on the user's entitlements, these credentials are authenticated with AD (3) and, if successful, the user is able to continue the logon process. Depending on what they are entitled to, the user could see a launch screen that displays a number of different virtual desktop machine icons that are available for them to log in to. These desktop icons represent the desktop pools that the user has been entitled to use. A pool is basically a collection of like virtual desktop machines; for example, it could be a pool for the marketing department where the virtual desktop machines contain specific applications/software for that department. Once authenticated, the View Manager or Connection Server makes a call to the vCenter Server (4) to create a virtual desktop machine, and then vCenter makes a call (5) to either View Composer (if you are using linked clones) or will create an Instant Clone using the VM Fork feature of vSphere to start the build process of the virtual desktop if there is not one already available for the user to log in to. When the build process has completed, and the virtual desktop machine is available to the end user, it is displayed/delivered within the View Client window (6) using the chosen display protocol (PCoIP, Blast, or RDP). This process is described pictorially in the following diagram:

There are other ways to deploy VDI solutions that do not require a connection broker, although you could argue that, strictly speaking, this is not a true VDI solution. This is actually what the first VDI solutions looked like: they just allowed a user to connect directly to their own virtual desktop via RDP. If you think about it, there are actually some specific use cases for doing just this. For example, if you have a large number of remote branches or offices, you could deploy local infrastructure allowing users to continue working in the event of a WAN outage or poor network communication between the branch and head office. The infrastructure required would be a subset of what you deploy centrally in order to keep costs minimal.
It just so happens that VMware has also thought of this use case and has a solution that's referred to as Brokerless View, which uses the VMware Horizon View Agent Direct-Connection Plugin. However, don't forget that, in a Horizon View environment, the View Connection Server provides greater functionality and does much more than just connecting users to desktops, as we will see later in this article. As we previously touched on, the Horizon View Connection Server runs as an application on a Windows Server, which could be either a physical or a virtual machine. Running as a virtual machine has many advantages; for example, it means that you can easily add high-availability features, which are critical in this environment, as you could potentially have hundreds or maybe even thousands of virtual desktop machines running on a single host server. Along with brokering the connections between the users and virtual desktop machines, the Connection Server also works with vCenter Server to manage the virtual desktop machines. For example, when using Linked Clones or Instant Clones and powering on virtual desktops, these tasks are initiated by the Connection Server, but they are executed at the vCenter Server level. Now that we have covered what the Connection Server is and how it works, in the next section we are going to look at the requirements you need for it to run.

Minimum requirements for the Connection Server

To install the View Connection Server, you need to meet the following minimum requirements, whether running on physical or virtual machines:

Hardware requirements: The following table shows the hardware required:

Supported operating systems: The View Connection Server must be installed on one of the operating systems listed in the table below:

In the next section we are going to look at the Horizon View Security Server.

The Horizon View Security Server

The Horizon View Security Server is another component in the architecture and is essentially another version of the View Connection Server but, this time, it sits within your DMZ so that you can allow end users to securely connect to their virtual desktop machine from an external network or the Internet.

How does the Security Server work?

To start with, the user login process at the beginning is the same as when connecting to a View Connection Server, essentially because the Security Server is just another version of the Connection Server running a subset of its features. The difference is that you connect to the address of the Security Server. The Security Server sits inside your DMZ and communicates with a Connection Server sitting on the internal network that it is paired with. This adds an extra security layer, as the internal Connection Server is not exposed externally, the idea being that users can now access their virtual desktop machines externally without needing to connect to a VPN first. Note that the Security Server should not be joined to the domain. This process is described pictorially in the following diagram:

We mentioned previously that the Security Server is paired with a Connection Server. The pairing is configured by the use of a one-time password during installation. It's a bit like pairing your smart phone with the hands-free kit in your car using Bluetooth. When the user logs in from the View Client, they now use the external URL of the Security Server to access the Connection Server, which in turn authenticates the user against AD.
If the Connection Server is configured as a PCoIP gateway, then it will pass the connection and addressing information to the View Client. This connection information will allow the View Client to connect to the Security Server using PCoIP. This is shown in the diagram by the green arrow (1). The Security Server will then forward the PCoIP connection to the virtual desktop machine (2), creating the connection for the user. The virtual desktop machine is displayed/delivered within the View Client window (3) using the chosen display protocol (PCoIP, Blast, or RDP).

The Horizon View Replica Server

The Horizon View Replica Server, as the name suggests, is a replica or copy of a View Connection Server and serves two key purposes. The first is that it is used to enable high availability in your Horizon View environment. Having a replica of your View Connection Server means that, if the Connection Server fails, users are still able to connect to their virtual desktop machines. Secondly, adding Replica Servers allows you to scale up the number of users and virtual desktop connections. An individual instance of a Connection Server can support 2000 connections, so adding additional Connection Servers allows you to add another 2000 users at a time, up to the maximum of five Connection Servers and 10,000 users per Horizon View Pod. When deploying a Replica Server, you will need to change the IP address or update the DNS record to match this server if you are not using a load balancer.

How does the Replica Server work?

So, the first question is, what actually gets replicated? The Connection Broker stores all its information relating to the end users, desktop pools, virtual desktop machines, and other View-related objects in an Active Directory Application Mode (ADAM) database. Then, using the Lightweight Directory Access Protocol (LDAP) (a method similar to the one AD uses for replication), this View information gets copied from the original Connection Server to the Replica Server. As the Connection Server and the Replica Server are now identical to each other, if your Connection Server fails, you essentially have a backup that steps in and takes over so that end users can still continue to connect to their virtual desktop machines. Just like with the other components, you cannot install the Replica Server role on the same machine that is running as a Connection Server or any of the other Horizon View components.

The Horizon View Enrollment Server and True SSO

The Horizon View Enrollment Server is the final component that is part of the Horizon View Connection Server installation options, and is selected from the drop-down menu on the installation options screen. So what does the Enrollment Server do? Horizon 7 sees the introduction of a new feature called True SSO. True SSO is a solution that allows a user to authenticate to a Microsoft Windows environment without having to enter their AD credentials. It integrates with another VMware product, VMware Identity Manager, which forms part of both the Horizon 7 Advanced and Enterprise Editions. Its job is to sit between the Connection Server and the Microsoft Certificate Authority and to request temporary certificates from the certificate store. This process is described pictorially in the following diagram:

A user first logs in to VMware Identity Manager, either using their credentials or other authentication methods such as smartcards or biometric devices.
Once successfully authenticated, the user will be presented with the virtual desktop machines or hosted applications that they are entitled to use. They can launch any of these by simply double-clicking, which will launch the Horizon View Client, as shown by the red arrow (1) in the previous diagram. The user's credentials will then be passed to the Connection Server (2), which in turn will verify them by sending a Security Assertion Markup Language (SAML) assertion back to Identity Manager (3). If the user's credentials are verified, then the Connection Server passes them on to the Enrollment Server (4). The Enrollment Server then makes a request to the Microsoft Certificate Authority (CA) to generate a short-lived, temporary certificate for that user to use (5). With the certificate now generated, the Connection Server presents it to the operating system of the virtual desktop machine (6), which in turn validates with Active Directory whether or not the certificate is authentic (7). When the certificate has been authenticated, the user is logged on to their virtual desktop machine, which will be displayed/delivered to the View Client using the chosen display protocol (8). True SSO is supported with all Horizon 7 supported desktop operating systems, as well as Windows Server 2008 R2 and Windows Server 2012 R2. It also supports the PCoIP, HTML, and Blast Extreme delivery protocols.

VMware Access Point

VMware Access Point performs exactly the same functionality as the View Security Server, as shown in the following diagram, however with one key difference. Instead of being a Windows application and another role of the Connection Server, the Access Point is a separate virtual appliance that runs a hardened, locked-down Linux operating system. Although the Access Point appliance delivers pretty much the same functionality as the Security Server, it does not yet completely replace it; if you already have a production deployment that uses the Security Server for external access, you can continue to use this architecture. If you are using the secure tunnel function, PCoIP Secure Gateway, or the Blast Secure Gateway features of the Connection Server, then these features will need to be disabled on the Connection Server if you are using the Access Point. They are all enabled by default on the Access Point appliance. A key difference between the Access Point appliance and the Security Server is in the way it scales. Before, you had to pair a Security Server with a Connection Server, which was a limitation, but this is no longer the case. As such, you can now scale to as many Access Point appliances as you need for your environment, with the maximum limit being around 2000 sessions for a single appliance. Adding capacity is simply a case of deploying another appliance; appliances do not depend on or communicate with one another, but communicate directly with the Connection Servers.

Persistent or non-persistent desktops

In this section, we are going to talk about the different types of desktop assignments and the way a virtual desktop machine is delivered to an end user. This is an important design consideration, as the chosen method could potentially impact the storage requirements (covered in the next section), the hosting infrastructure, and also which technology or solution is used to provision the desktop to the end users.
One of the questions that always gets asked is whether you should deploy a dedicated desktop assignment (persistent) or a floating desktop assignment (non-persistent). Desktops can either be individual virtual machines, which are dedicated to a user on a 1:1 basis (as we have in a physical desktop deployment, where each user effectively owns their own desktop), or a user has a new, vanilla desktop that gets provisioned, built, personalized, and then assigned at the time of login. In the latter case, the virtual desktop machine is chosen at random from a pool of available desktops that the end user is entitled to use. The two options are described in more detail as follows:

Persistent desktop: Users are allocated a desktop that retains all of their documents, applications, and settings between sessions. The desktop is statically assigned the first time that the user connects and is then used for all subsequent sessions. No other user is permitted access to the desktop.

Non-persistent desktop: Users might be connected to different desktops from the pool each time that they connect. Environmental or user data does not persist between sessions and instead is delivered as the user logs on to their desktop. The desktop is refreshed or reset when the user logs off.

In most use cases, a non-persistent configuration is the best option. The key reason is that, in this model, you don't need to build all the desktops upfront for each user. You only need to power on a virtual desktop as and when it's required. All users start with the same basic desktop, which then gets personalized before delivery. This helps with concurrency rates. For example, you might have 5,000 people in your organization, but only 2,000 ever log in at the same time; therefore, you only need to have 2,000 virtual desktops available. Otherwise, you would have to build a desktop for each one of the 5,000 users that might ever log in, resulting in more server infrastructure and certainly a lot more storage capacity. We will talk about storage in the next section.

The one thing that used to be a bit of a show-stopper for non-persistent desktops was how to deliver the applications to the virtual desktop machine. Now that application layering solutions such as VMware App Volumes are becoming more mainstream, the applications can be delivered on demand as the desktop is built and the user logs in.

Another thing that we often see some confusion over is the difference between dedicated and floating desktops, and how linked clones fit in. Just to make it clear: linked clones, full clones, and Instant Clones are not what we are talking about when we refer to dedicated and floating desktops. Cloning operations refer to how a desktop is built and provisioned, whereas the terms persistent and non-persistent refer to how a desktop is assigned to an end user. Dedicated and floating desktops are purely about user assignment and whether a user has a dedicated desktop or one allocated from a pool on demand. Linked clones and full clones are features of Horizon View, which uses View Composer to create the desktop image for each user from a master or parent image. This means that, regardless of having a floating or dedicated desktop assignment, the virtual desktop machine could still be a linked or full clone. So, here's a summary of the benefits:

It is operationally efficient: All users start from a single or smaller number of desktop images. Organizations reduce the amount of image and patch management.
It is efficient storage-wise: The amount of storage required to host the non-persistent desktop images will be smaller than keeping separate instances of unique user desktop images.

In the next sections, we are going to cover an in-depth overview of the cloning technologies available in Horizon 7, starting with Horizon View Composer and linked clones, and the advantages the technology delivers.

Summary

In this article, we discussed the Horizon View architecture and the different components that make up the complete solution. We covered the key technologies, such as how linked clones and Instant Clones work to optimize storage, and then introduced some of the features that go toward delivering a great end user experience, such as delivering high-end graphics, unified communications, and profile management, and how the protocols deliver the desktop to the end user.

Resources for Article:

Further resources on this subject:
An Introduction to VMware Horizon Mirage [article]
Upgrading VMware Virtual Infrastructure Setups [article]
Backups in the VMware View Infrastructure [article]

How to integrate Angular 2 with React

Mary Gualtieri
01 Sep 2016
5 min read
It can be overwhelming to choose the best framework to get the job done in JavaScript, because you have so many options out there. In previous years, we have seen two popular frameworks come to fame: React and Angular. React has gained a lot of popularity because it is what Facebook and Instagram are built on. So, which one do you use? If you ask most JavaScript developers, they will advise you to use one or the other, and that comes down to personal choice. Let's go ahead and explore Angular and React, and see how the two can actually work together.

A common misconception about React is that it is a full JavaScript framework in competition with Angular. But it actually is not. React is a user interface library and is just the view in an 'MVC' framework, with a little bit of a controller in it. In other words, React is a template (an Angular term for view) where you can add some controller logic. It is the same idea as integrating the jQuery UI into JavaScript. React can aid Angular, which is a frontend framework, and make it more efficient for the user. This is because you can write a reusable component that can be plugged into an application.

Angular has many pros, and that's what makes it so appealing to a lot of developers and companies. In my personal experience, Angular has been very powerful for making solid applications. However, one of the cons that bothers me about Angular is how it goes about executing a template. I always want to practice writing DRY code, and that can be a problem with Angular. You can end up writing a bunch of HTML that is complicated and difficult to read, and you can also end up causing a spaghetti effect in your CSS.

One of Angular's big strengths is the watchers and the binders. When executed correctly in well-thought-out places, they can be great for fast binding and good responsiveness. But like every human, we all make errors, and when misused, you can have performance issues when binding way too many elements in your HTML. This leads to a very slow and lagged application that no one wants to use. But there is a way to rectify this: you can use React to compensate for Angular's downfalls.

React was designed to work really well with other libraries and makes rendering views or templates much faster. The beauty of React is how it uses a more efficient algorithm on the virtual DOM. In plain terms, it allows you to change the parts of the application that need to be updated without having to touch the rest of the application. You send a command to update the user interface and React compares these changes to the existing DOM.

Instead of theorizing on how Angular and React complement one another, let's see how it looks in code. First, let's create an HTML file that has an Angular script and a React script. *Remember, your relative path may be different from the example.

Next, let's create a React component that renders a string that is inputted by the user.

What is happening here is that we are using React to render our model. We created a component that renders the props passed to it. Then we create an Angular directive and controller to start the app. The directive is calling the React component and telling it to render.

But let's take a look at another example that integrates Angular 2. We can demonstrate how a React component can be self-contained but still be injected into the Angular 2 world. Angular 2 has an optional lifecycle hook, ngOnInit(), that can be used to trigger the code that renders a React component.
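The original article presents these examples as screenshots, which have not survived here. As an illustration only, here is a minimal TypeScript sketch of what the Angular 2 host component and its companion React component might look like; the component names, the static initialize helper, and the react-root element ID are assumptions made for this example, not code from the original article.

import { Component, OnInit } from '@angular/core';
import * as React from 'react';
import * as ReactDOM from 'react-dom';

// Hypothetical React component: renders whatever title text it receives as a prop.
class TitleComponent extends React.Component<{ title: string }, {}> {
    // Static helper the Angular host calls to kick off the React render.
    static initialize(titleText: string): void {
        ReactDOM.render(
            React.createElement(TitleComponent, { title: titleText }),
            document.getElementById('react-root') // assumed mount point in the host template
        );
    }
    render() {
        return React.createElement('h1', null, this.props.title);
    }
}

@Component({
    selector: 'react-host',
    template: '<div id="react-root"></div>'
})
export class ReactHostComponent implements OnInit {
    title = 'Hello from Angular 2 and React';
    // Angular 2 lifecycle hook: trigger the React render once the host is initialized.
    ngOnInit(): void {
        TitleComponent.initialize(this.title);
    }
}

React.createElement is used here instead of JSX so that the sketch compiles without any special compiler flags; with the JSX flag mentioned later in the article, the render calls could be written in JSX/TSX syntax instead.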
As you can see, the host component defines an implementation of the ngOnInit handler, where you can call the static initialize function. If you noticed, the initialize method is passed a title text that is then passed down to the React component as a prop. *React Component

Another item to consider that unites React and Angular is TypeScript. This is a big deal, because we can manage both Angular code and React code in the same compilation step. When it comes down to it, TypeScript is compiled down to regular JavaScript. One thing that you have to remember to do is tell the compiler that you are using JSX by specifying the JSX flag.

To conclude, Angular will always remain a very popular framework for developers. For an application to render faster, React is a great way to render templates faster and create a more efficient user experience. React is a great complement to Angular and will only enhance your application.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub at MaryGualtieri.

Building a Gallery Application

Packt
19 Aug 2016
17 min read
In this article by Michael Williams, author of the book Xamarin Blueprints, we will walk through native development with Xamarin by building an iOS and Android application that will read from your local gallery files and display them in a UITableView and a ListView. (For more resources related to this topic, see here.)

Create an iOS project

Let's begin our Xamarin journey; firstly, we will start by setting up our iOS project in Xamarin Studio. Start by opening Xamarin Studio and creating a new iOS project. To do so, we simply select File | New | Solution and select an iOS Single View App; we must also give it a name and add in the bundle ID you want in order to run your application. It is recommended that for each project, a new bundle ID be created, along with a developer provisioning profile for each project. Now that we have created the iOS project, you will be taken to the following screen:

Doesn't this look familiar? Yes, it is our AppDelegate file; notice the .cs on the end. Because we are using C-sharp (C#), all our code files will have this extension (no more .h or .m files). Before we go any further, spend a few minutes moving around the IDE, expand the folders, and explore the project structure; it is very similar to an iOS project created in Xcode.

Create a UIViewController and UITableView

Now that we have our new iOS project, we are going to start by creating a UIViewController. Right-click on the project file, select Add | New File, and select ViewController from the iOS menu selection in the left-hand box. You will notice three files generated: a .xib, a .cs, and a .designer.cs file. We don't need to worry about the third file; it is automatically generated based upon the other two files.

Right-click on the project item and select Reveal in Finder. This will bring up the Finder, where you will double-click on the GalleryCell.xib file; this will bring up the user-interface designer in Xcode. You should see automated text inserted into the document to help you get started.

Firstly, we must set our namespace accordingly and import our libraries with using statements. In order to use the iOS user interface elements, we must import the UIKit and CoreGraphics libraries. Our class will inherit the UIViewController class, in which we will override the ViewDidLoad function:

namespace Gallery.iOS
{
    using System;
    using System.Collections.Generic;

    using CoreGraphics;
    using UIKit;

    public partial class MainController : UIViewController
    {
        private UITableView _tableView;

        private TableSource _source;

        private ImageHandler _imageHandler;

        public MainController () : base ("MainController", null)
        {
            _source = new TableSource ();

            _imageHandler = new ImageHandler ();
            _imageHandler.AssetsLoaded += handleAssetsLoaded;
        }

        private void handleAssetsLoaded (object sender, EventArgs e)
        {
            // refresh the table with the gallery items pulled from local storage
            _source.UpdateGalleryItems (_imageHandler.CreateGalleryItems ());
            _tableView.ReloadData ();
        }

        public override void ViewDidLoad ()
        {
            base.ViewDidLoad ();

            var width = View.Bounds.Width;
            var height = View.Bounds.Height;

            // stretch the table to fill the entire bounds of the view
            _tableView = new UITableView (new CGRect (0, 0, width, height));
            _tableView.AutoresizingMask = UIViewAutoresizing.All;
            _tableView.Source = _source;

            Add (_tableView);
        }
    }
}

Our first UI element created is a UITableView.
This will be used to insert into the UIView of the UIViewController, and we also retrieve the width and height values of the UIView to stretch the UITableView to fit the entire bounds of the UIViewController. We must also call Add to insert the UITableView into the UIView. In order to have the list filled with data, we need to create a UITableViewSource to contain the list of items to be displayed in the list. We will also need an object called GalleryItem; this will be the model of the data to be displayed in each cell. Follow the previous process for adding in two new .cs files; one will be used to create our TableSource class and the other for the GalleryItem class. In TableSource.cs, we must first import the Foundation library with a using statement:

using Foundation;

Now for the rest of our class. Remember, we have to override specific functions of UITableViewSource to describe its behavior. It must also include a list containing the item view-models that will be used for the data displayed in each cell:

public class TableSource : UITableViewSource
{
    protected List<GalleryItem> galleryItems;
    protected string cellIdentifier = "GalleryCell";

    public TableSource ()
    {
        galleryItems = new List<GalleryItem> ();
    }
}

We must override the NumberOfSections function; in our case, it will always be one because we are not using list sections:

public override nint NumberOfSections (UITableView tableView)
{
    return 1;
}

To determine the number of list items, we return the count of the list:

public override nint RowsInSection (UITableView tableview, nint section)
{
    return galleryItems.Count;
}

Then we must add the GetCell function; this will be used to get the UITableViewCell to render for a particular row. But before we do this, we need to create a custom UITableViewCell.

Customizing a cell's appearance

We are now going to design the cells that will appear for every model found in the TableSource class. Add in a new .cs file for our custom UITableViewCell. We are not going to use a .xib; we will simply build the user interface directly in code using a single .cs file. Now for the implementation:

public class GalleryCell : UITableViewCell
{
    private UIImageView _imageView;

    private UILabel _titleLabel;

    private UILabel _dateLabel;

    public GalleryCell (string cellId) : base (UITableViewCellStyle.Default, cellId)
    {
        SelectionStyle = UITableViewCellSelectionStyle.Gray;

        _imageView = new UIImageView ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        _titleLabel = new UILabel ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        _dateLabel = new UILabel ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        // add the image and both labels to the cell's content view
        ContentView.Add (_imageView);
        ContentView.Add (_titleLabel);
        ContentView.Add (_dateLabel);
    }
}

Our constructor must call the base constructor, as we need to initialize each cell with a cell style and cell identifier. We then add in a UIImageView and two UILabels for each cell, one for the file name and one for the date. Finally, we add all three elements to the main content view of the cell.
When we have our initializer, we add the following:

public void UpdateCell (GalleryItem gallery)
{
    _imageView.Image = UIImage.LoadFromData (NSData.FromArray (gallery.ImageData));
    _titleLabel.Text = gallery.Title;
    _dateLabel.Text = gallery.Date;
}

public override void LayoutSubviews ()
{
    base.LayoutSubviews ();

    ContentView.TranslatesAutoresizingMaskIntoConstraints = false;

    // set layout constraints for main view
    AddConstraints (NSLayoutConstraint.FromVisualFormat ("V:|[imageView(100)]|", NSLayoutFormatOptions.DirectionLeftToRight, null, new NSDictionary ("imageView", _imageView)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat ("V:|[titleLabel]|", NSLayoutFormatOptions.DirectionLeftToRight, null, new NSDictionary ("titleLabel", _titleLabel)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat ("H:|-10-[imageView(100)]-10-[titleLabel]-10-|", NSLayoutFormatOptions.AlignAllTop, null, new NSDictionary ("imageView", _imageView, "titleLabel", _titleLabel)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat ("H:|-10-[imageView(100)]-10-[dateLabel]-10-|", NSLayoutFormatOptions.AlignAllTop, null, new NSDictionary ("imageView", _imageView, "dateLabel", _dateLabel)));
}

Our first function, UpdateCell, simply adds the model data to the view, and our second function overrides the LayoutSubviews method of the UITableViewCell class (equivalent to the ViewDidLoad function of a UIViewController).

Now that we have our cell design, let's create the properties required for the view model. We only want to store data in our GalleryItem model, meaning we want to store images as byte arrays. Let's create a property for the item model:

namespace Gallery.iOS
{
    using System;

    public class GalleryItem
    {
        public byte[] ImageData;

        public string ImageUri;

        public string Title;

        public string Date;

        public GalleryItem ()
        {
        }
    }
}

Now back to our TableSource class. The next step is to implement the GetCell function:

public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
{
    var cell = (GalleryCell)tableView.DequeueReusableCell (cellIdentifier);
    var galleryItem = galleryItems[indexPath.Row];

    if (cell == null)
    {
        // we create a new cell if this row has not been created yet
        cell = new GalleryCell (cellIdentifier);
    }

    cell.UpdateCell (galleryItem);

    return cell;
}

Notice the cell reuse in the if statement; you should be familiar with this type of approach, as it is a common pattern for reusing cell views and is the same as the Objective-C implementation (this is a very basic cell reuse implementation). We also call the UpdateCell method to pass in the required GalleryItem data to show in the cell. Let's also set a constant height for all cells. Add the following to your TableSource class:

public override nfloat GetHeightForRow (UITableView tableView, NSIndexPath indexPath)
{
    return 100;
}

So what is next? We assign the source to the table inside the ViewDidLoad method of our view controller, as shown earlier:

public override void ViewDidLoad ()
{
    ...
    _tableView.Source = _source;
    ...
}

Let's stop development and have a look at what we have achieved so far. We have created our first UIViewController, UITableView, UITableViewSource, and UITableViewCell, and bound them all together. Fantastic!
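One piece referenced in MainController but not yet shown is the ImageHandler class, which is responsible for pulling the images out of the device's photo library; the article returns to this later. Purely as an illustration of the shape the class needs in order to satisfy the code above, here is a minimal sketch; the member names and the stubbed enumeration are assumptions for this example, not the book's implementation:

namespace Gallery.iOS
{
    using System;
    using System.Collections.Generic;

    public class ImageHandler
    {
        // Raised once the photo-library enumeration has finished filling _items.
        public event EventHandler AssetsLoaded;

        private readonly List<GalleryItem> _items = new List<GalleryItem> ();

        public ImageHandler ()
        {
            // In the real implementation, this is where the device's photo
            // library would be enumerated asynchronously, each asset converted
            // into a GalleryItem, and OnAssetsLoaded called on completion.
        }

        public List<GalleryItem> CreateGalleryItems ()
        {
            return _items;
        }

        protected void OnAssetsLoaded ()
        {
            var handler = AssetsLoaded;
            if (handler != null)
            {
                handler (this, EventArgs.Empty);
            }
        }
    }
}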
We now need to access the local storage of the phone to pull out the required gallery items. But before we do this, we are going to create an Android project and replicate what we have done with iOS.

Create an Android project

Let's continue our Xamarin journey with Android. Our first step is to create a new general Android app:

The first screen you will land on is MainActivity. This is our starting activity, which will inflate the first user interface; take notice of the configuration attributes:

[Activity (Label = "Gallery.Droid", MainLauncher = true, Icon = "@mipmap/icon")]

The MainLauncher flag indicates the starting activity; one activity must have this flag set to true so the application knows which activity to load first. The Icon property is used to set the application icon, and the Label property is used to set the text of the application, which appears in the top left of the navigation bar:

namespace Gallery.Droid
{
    using Android.App;
    using Android.Widget;
    using Android.OS;

    [Activity (Label = "Gallery.Droid", MainLauncher = true, Icon = "@mipmap/icon")]
    public class MainActivity : Activity
    {
        int count = 1;

        protected override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);

            // Set our view from the "main" layout resource
            SetContentView (Resource.Layout.Main);
        }
    }
}

The formula for our activities is the same as in Java; we must override the OnCreate method for each activity, where we will inflate the first XML interface, Main.xml.

Creating an XML interface and ListView

Our starting point is the Main.xml sheet; this is where we will be creating the ListView:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <ListView
        android:id="@+id/listView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_marginBottom="10dp"
        android:layout_marginTop="5dp"
        android:background="@android:color/transparent"
        android:cacheColorHint="@android:color/transparent"
        android:divider="#CCCCCC"
        android:dividerHeight="1dp"
        android:paddingLeft="2dp" />
</LinearLayout>

The Main.xml file should already be in the Resources | layout directory, so simply copy and paste the previous code into this file. Excellent! We now have our starting activity and interface, so now we have to create a ListAdapter for our ListView. An adapter works very much like a UITableViewSource, where we must override functions to determine cell data, row design, and the number of items in the list. Xamarin Studio also has an Android GUI designer.

Right-click on the Android project and add in a new empty class file for our adapter class. Our class must inherit the BaseAdapter class, and we are going to override the following functions (a minimal sketch of such an adapter appears just below):

public override long GetItemId (int position);
public override View GetView (int position, View convertView, ViewGroup parent);

Before we go any further, we need to create a model for the objects used to contain the data to be presented in each row. In our iOS project, we created a GalleryItem to hold the byte array of image data used to create each UIImage. We have two approaches here: we could create another object to do the same job, or, even better, why don't we reuse this object using a shared project?
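The article does not show the adapter body itself, so as an illustration, here is a minimal sketch of what a BaseAdapter subclass satisfying the two overrides above might look like. The class name, the constructor shape, and the use of a built-in simple text row are assumptions for this example; the real adapter would inflate a custom row layout with an ImageView, much like the iOS GalleryCell:

namespace Gallery.Droid
{
    using System.Collections.Generic;

    using Android.App;
    using Android.Views;
    using Android.Widget;

    public class GalleryAdapter : BaseAdapter<GalleryItem>
    {
        private readonly Activity _context;
        private readonly List<GalleryItem> _items;

        public GalleryAdapter (Activity context, List<GalleryItem> items)
        {
            _context = context;
            _items = items;
        }

        // the ListView asks the adapter how many rows it must draw
        public override int Count
        {
            get { return _items.Count; }
        }

        public override GalleryItem this [int position]
        {
            get { return _items[position]; }
        }

        public override long GetItemId (int position)
        {
            // we have no database IDs, so the position doubles as the row ID
            return position;
        }

        public override View GetView (int position, View convertView, ViewGroup parent)
        {
            // reuse the recycled row if one is available, otherwise inflate a new one
            var view = convertView ?? _context.LayoutInflater.Inflate (
                Android.Resource.Layout.SimpleListItem1, parent, false);

            view.FindViewById<TextView> (Android.Resource.Id.Text1).Text = _items[position].Title;

            return view;
        }
    }
}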
Shared projects

We are going to delve into our first technique for sharing code between different platforms. This is what Xamarin tries to achieve with all of its development: we want to reuse as much code as possible. The biggest disadvantage of developing Android and iOS applications in two different languages is that we can't reuse anything. Let's create our first shared project:

Our shared project will be used to contain the GalleryItem model, so whatever code we include in this shared project can be accessed by both the iOS and Android projects:

In the preceding screenshot, have a look at the Solution explorer, and notice how the shared project doesn't contain anything more than .cs code sheets. Shared projects do not have any references or components, just code that is shared by all platform projects. When our native projects reference these shared projects, any libraries being referenced via using statements come from the native projects. Now we must have the iOS and Android projects reference the shared project; right-click on the References folder and select Edit References:

Select the shared project you just created, and we can now reference the GalleryItem object from both projects.

Summary

In this article, we walked through building a gallery application on both iOS and Android using native libraries; on Android, this was done using a ListView and a ListAdapter.

Resources for Article:

Further resources on this subject:
Optimizing Games for Android [article]
Getting started with Android Development [article]
Creating User Interfaces [article]

C++, SFML, Visual Studio, and Starting the first game

Packt
19 Aug 2016
15 min read
In this article by John Horton, author of the book Beginning C++ Game Programming, we will waste no time in getting you started on your journey to writing great games for the PC, using C++ and the OpenGL-powered SFML. We will learn absolutely everything we need to have the first part of our first game up and running. Here is what we will do now. (For more resources related to this topic, see here.)

Find out about the games we will build
Learn a bit about C++
Explore SFML and its relationship with C++
Look at the software, Visual Studio
Set up a game development environment
Create a reusable project template which will save a lot of time
Plan and prepare for the first game project, Timber!!!
Write the first C++ code and make a runnable game that draws a background

The games

The journey will be mainly smooth, as we will learn the fundamentals of the super-fast C++ language a step at a time and then put the new knowledge to use, adding cool features to the three games we are building.

Timber!!!

The first game is an addictive, fast-paced clone of the hugely successful Timberman http://store.steampowered.com/app/398710/. Our game, Timber!!!, will allow us to be introduced to all the C++ basics at the same time as building a genuinely playable game. Here is what our version of the game will look like when we are done and we have added a few last-minute enhancements.

The Walking Zed

Next we will build a frantic zombie survival-shooter, not unlike the Steam hit Over 9000 Zombies http://store.steampowered.com/app/273500/. The player will have a machine gun and the ability to gather resources and build defenses. All this will take place in a randomly generated, scrolling world. To achieve this we will learn about object-oriented programming and how it enables us to have a large code base (lots of code) that is easy to write and maintain. Expect exciting features like hundreds of enemies, rapid-fire weaponry, and directional sound.

Thomas gets a real friend

The third game will be a stylish and challenging single-player and co-op puzzle platformer. It is based on the very popular game Thomas was Alone http://store.steampowered.com/app/220780/. Expect to learn cool topics like particle effects, OpenGL shaders, and multiplayer networking. If you want to play any of the games now, you can do so from the download bundle in the Runnable Games folder. Just double-click on the appropriate .exe file. Let's get started by introducing C++, Visual Studio and SFML!

C++

One question you might have is, why use C++ at all? C++ is fast, very fast. What makes this so is the fact that the code that we write is directly translated into machine-executable instructions. These instructions, together, are what makes the game. The executable game is contained within a .exe file which the player can simply double-click to run. There are a few steps in the process. First, the pre-processor looks to see if any other code needs to be included within our own and adds it if necessary. Next, all the code is compiled into object files by the compiler program. Finally, a third program called the linker joins all the object files into the executable file which is our game. In addition, C++ is well established at the same time as being extremely up-to-date. C++ is an Object Oriented Programming (OOP) language, which means we can write and organize our code in a proven way that makes our games efficient and manageable.
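To make the pre-processor, compiler, and linker steps concrete, here is the smallest possible C++ program, offered purely as an illustration and not as part of the Timber!!! project itself:

// main.cpp
// The #include line is handled by the pre-processor, which pastes in
// the iostream code our program depends on before compilation begins.
#include <iostream>

int main()
{
    // This line is compiled into object code; the linker then joins our
    // object file with the standard library to produce the final .exe
    std::cout << "Hello, game world!" << std::endl;
    return 0;
}

When you press the run button in Visual Studio, all three steps happen behind the scenes in a single click.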
Most of this other code that I referred to, you might be able to guess, is SFML, and we will find out more about SFML in just a minute. The pre-processor, compiler, and linker programs I have just mentioned are all part of the Visual Studio Integrated Development Environment (IDE).

Microsoft Visual Studio

Visual Studio hides away the complexity of the pre-processing, compiling, and linking. It wraps it all up into the press of a button. In addition to this, it provides a slick user interface for us to type our code into, and to manage what will become a large selection of code files and other project assets as well. While there are advanced versions of Visual Studio that cost hundreds of dollars, we will be able to build all three of our games in the free Express 2015 for Desktop version.

SFML

Simple and Fast Multimedia Library (SFML) is not the only C++ library for games. It is possible to make an argument for using other libraries, but SFML seems to come through in first place, for me, every time. Firstly, it is written using object-oriented C++. Perhaps the biggest benefit is that all modern C++ programming uses OOP. Every C++ beginner's guide I have ever read uses and teaches OOP. OOP is, in fact, the future (and the now) of coding in almost all languages. So why, if you're learning C++ from the beginning, would you want to do it any other way? SFML has a module (code) for just about anything you would ever want to do in a 2D game. SFML works using OpenGL, which can also make 3D games. OpenGL is the de facto free-to-use graphics library for games. When you use SFML, you are automatically using OpenGL. SFML drastically simplifies:

2D graphics and animation, including scrolling game worlds
Sound effects and music playback, including high-quality directional sound
Online multiplayer features
Compiling and linking the same code on all major desktop operating systems, and soon mobile as well!

Extensive research has not uncovered any more suitable way to build 2D games for the PC, even for expert developers, and especially if you are a beginner and want to learn C++ in a fun gaming environment.

Setting up the development environment

Now that you know a bit more about how we will be making games, it is time to set up a development environment so we can get coding.

What about Mac and Linux?

The games that we make can be built to run on Windows, Mac, and Linux! The code we use will be identical for each. However, each version does need to be compiled and linked on the platform for which it is intended, and Visual Studio will not be able to help us with Mac and Linux. Although, I guess, if you are an enthusiastic Mac or Linux user and you are comfortable with your operating system, the vast majority of the challenge you will encounter will be in the initial setup of the development environment, SFML, and the first project.

For Linux, read this for an overview: http://www.sfml-dev.org/tutorials/2.0/start-linux.php
For Linux, read this for step-by-step instructions: http://en.sfml-dev.org/forums/index.php?topic=9808.0
On Mac, read this tutorial as well as the linked-out articles: http://www.edparrish.net/common/sfml-osx.html

Installing Visual Studio Express 2015 for Desktop

Installing Visual Studio can be almost as simple as downloading a file and clicking a few buttons. It will help us, however, if we carefully run through exactly how we do this. For this reason, I will walk through the installation process a step at a time. The Microsoft Visual Studio site says you need 5 GB of hard disk space.
From experience, however, I would suggest you need at least 10 GB of free space. In addition, these figures are slightly ambiguous. If you are planning to install on a secondary hard drive, you will still need at least 5 GB on the primary hard drive, because no matter where you choose to install Visual Studio, it will need this space too. To summarize this ambiguous situation: it is essential to have a full 10 GB of space on the primary hard disk if you will be installing Visual Studio to that primary hard disk. On the other hand, make sure you have 5 GB on the primary hard disk as well as 10 GB on the secondary, if you intend to install to a secondary hard disk. Yep, stupid, I know!

The first thing you need is a Microsoft account and the login details. If you have a Hotmail or MSN email address, then you already have one. If not, you can sign up for a free one here: https://login.live.com/.
Visit this link: https://www.visualstudio.com/en-us/downloads/download-visual-studio-vs.aspx. Click on Visual Studio 2015, then Express 2015 for desktop, then the Download button. This next image shows the three places to click.
Wait for the short download to complete, and then run the downloaded file. Now you just need to follow the on-screen instructions. However, make a note of the folder where you choose to install Visual Studio. If you want to do things exactly the same as me, then create a new folder called Visual Studio 2015 on your preferred hard disk and install to this folder. This whole process could take a while, depending on the speed of your Internet connection.
When you see the next screen, click on Launch and enter your Microsoft account login details.

Now we can turn to SFML.

Setting up SFML

This short tutorial will step through downloading the SFML files that allow us to include the functionality contained in the library, as well as the files, called DLL files, that will enable us to link our compiled object code to the SFML compiled object code.

Visit this link on the SFML website: http://www.sfml-dev.org/download.php. Click on the button that says Latest Stable Version, as shown next. By the time you read this guide, the actual latest version will almost certainly have changed. That doesn't matter, as long as you do the next step just right.
We want to download the 32-bit version for Visual C++ 2014. This might sound counter-intuitive, because we have just installed Visual Studio 2015 and you probably (most commonly) have a 64-bit PC. The reason we choose the download that we do is because Visual C++ 2014 is part of Visual Studio 2015 (Visual Studio does more than C++), and we will be building games in 32-bit so they run on both 32- and 64-bit machines. To be clear, click the download indicated below.
When the download completes, create a folder at the root of the same drive where you installed Visual Studio and name it SFML. Also create another folder at the root of the drive where you installed Visual Studio and call it Visual Studio Stuff. We will store all kinds of Visual Studio related things here, so Stuff seems like a good name. Just to be clear, here is what my hard drive looks like after this step. Obviously, the folders you have in between the three highlighted folders in the image will probably be totally different to mine.
Now, ready for all the projects we will soon be making, create a new folder inside Visual Studio Stuff. Name the new folder Projects.
Finally, unzip the SFML download. Do this on your desktop. When unzipping is complete, you can delete the zip folder.
You will be left with a single folder on your desktop. Its name will reflect the version of SFML that you downloaded. Mine is called SFML-2.3.2-windows-vc14-32-bit. Your file name will likely reflect a more recent version. Double-click this folder to see the contents, then double-click again into the next folder (mine is called SFML-2.3.2). The image below shows what my SFML-2.3.2 folder contents look like when the entire contents have been selected. Yours should look the same. Copy the entire contents of this folder, as seen in the previous image, and paste/drag all the contents into the SFML folder you created in step 3. I will refer to this folder simply as your SFML folder. Now we are ready to start using C++ and SFML in Visual Studio.

Creating a reusable project template

As setting up a project is a fairly fiddly process, we will create a project and then save it as a Visual Studio template. This will save us a quite significant amount of work each time we start a new game. So if you find the next tutorial a little tedious, rest assured that you will never need to do this again.

In the New Project window, click the little drop-down arrow next to Visual C++ to reveal more options, then click Win32, and then click Win32 Console Application. You can see all these selections in the next screenshot.
Now, at the bottom of the New Project window, type HelloSFML in the Name: field.
Next, browse to the Visual Studio Stuff\Projects folder that we created in the previous tutorial. This will be the location where all our project files will be kept.
All templates are based on an actual project. So we will have a project called HelloSFML, but the only thing we will do with it is make a template from it.
When you have completed the steps above, click OK. The next image shows the Application Settings window. Check the box for Console application, and leave the other options as shown below.
Click Finish and Visual Studio will create the new project.

Next we will add some fairly intricate and important project settings. This is the laborious part, but as we will create a template, we will only need to do this once. What we need to do is tell Visual Studio, or more specifically the code compiler that is part of Visual Studio, where to find a special type of code file from SFML. The special type of file I am referring to is a header file. Header files are the files that define the format of the SFML code, so that when we use the SFML code, the compiler knows how to handle it. Note that the header files are distinct from the main source code files, and they are contained in files with the .hpp file extension. All this will become clearer when we eventually start adding our own header files in the second project. In addition, we need to tell Visual Studio where it can find the SFML library files. From the Visual Studio main menu select Project | HelloSFML properties. In the resulting HelloSFML Property Pages window, take the following steps, which are numbered and can be referred to in the next image.

First, select All Configurations from the Configuration: drop-down.
Second, select C/C++ then General from the left-hand menu.
Third, locate the Additional Include Directories edit box and type the drive letter where your SFML folder is located, followed by \SFML\include. The full path to type, if you located your SFML folder on your D drive, is, as shown in the screenshot, D:\SFML\include. Vary your path if you installed SFML to a different drive.

Click Apply to save your configurations so far.
Now, still in the same window, perform these next steps, which again refer to the next image.

Select Linker then General.
Find the Additional Library Directories edit box and type the drive letter where your SFML folder is, followed by \SFML\lib. So the full path to type, if you located your SFML folder on your D drive, is, as shown in the screenshot, D:\SFML\lib. Vary your path if you installed SFML to a different drive.
Click Apply to save your configurations so far.

Finally for this stage, still in the same window, perform these steps, which again refer to the next image.

Switch the Configuration: drop-down (1) to Debug, as we will be running and testing our games in debug mode.
Select Linker then Input (2).
Find the Additional Dependencies edit box (3) and click into it at the far left-hand side. Now copy & paste/type the following at the indicated place:

sfml-graphics-d.lib;sfml-window-d.lib;sfml-system-d.lib;sfml-network-d.lib;sfml-audio-d.lib;

Again, be REALLY careful to place the cursor exactly, and not to overwrite any of the text that is already there.
Click OK.

Let's make the template of our HelloSFML project so we never have to do this slightly mind-numbing task again. Creating a reusable project template is really easy. In Visual Studio select File | Export Template…. Then, in the Export Template Wizard window, make sure the Project template option is selected, and the HelloSFML project is selected for the From which project do you want to create a template option. Click Next and then Finish. Phew, that's it! Next time we create a project, I'll show you how to do it from this template (and a minimal test program you could drop into such a project appears just after the resources below). Let's build Timber!!!

Summary

In this article we learnt that configuring an IDE to use a C++ library can be a bit awkward and long-winded, and that the concept of classes and objects is well known to be slightly awkward for people new to coding.

Resources for Article:

Further resources on this subject:
Game Development Using C++ [Article]
Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article]
Introducing the Boost C++ Libraries [Article]
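As promised, here is a minimal sketch of an SFML program you could type into a project made from the HelloSFML template to check that everything is wired up correctly. This is not the book's Timber!!! code, just an illustrative sanity check: the window size, title, and colour are arbitrary choices.

#include <SFML/Graphics.hpp>

int main()
{
    // Create a 1024 x 768 window with a caption in the title bar
    sf::RenderWindow window(sf::VideoMode(1024, 768), "Hello SFML");

    // Keep running until the player closes the window
    while (window.isOpen())
    {
        // Handle any pending window events, such as the close button
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        // Clear the whole window to one colour, then show the result
        window.clear(sf::Color::Blue);
        window.display();
    }

    return 0;
}

If this compiles, links, and shows a solid blue window, the include, library, and dependency settings baked into the template are all working.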


ASP.NET Controllers and Server-Side Routes

Packt
19 Aug 2016
22 min read
In this article by Valerio De Sanctis, author of the book ASP.NET Web API and Angular 2, we will explore the client-server interaction capabilities of our frameworks: to put it in other words, we need to understand how Angular 2 will be able to fetch data from ASP.NET Core using its brand new, MVC6-based API structure. We won't be worrying about how ASP.NET Core will retrieve this data (be it from session objects, data stores, a DBMS, or any possible data source); that will come later on. For now, we'll just put together some sample, static data in order to understand how to pass it back and forth by using a well-structured, highly configurable, and viable interface. (For more resources related to this topic, see here.)

The data flow

A Native Web App following the single-page application approach will roughly handle the client-server communication in the following way: in case you are wondering what these Async Data Requests actually are, the answer is simple: everything, as long as it needs to retrieve data from the server, which is something that most of the common user interactions will normally do, including (yet not limited to) pressing a button to show more data or to edit/delete something, following a link to another app view, submitting a form, and so on. That is, unless the task is so trivial, or involves such a minimal amount of data, that the client can handle it entirely, meaning that it already has everything it needs. Examples of such tasks are: show/hide element toggles, in-page navigation elements (such as internal anchors), and any temporary job that only requires a confirmation or save button to be pressed before being actually processed.

The above picture shows, in a nutshell, what we're going to do: define and implement a pattern to serve the JSON-based, server-side responses our application will need to handle the upcoming requests. Since we've chosen a strongly data-driven application pattern such as a Wiki, we'll surely need to put together a bunch of common CRUD-based requests revolving around a defined object which will represent our entries. For the sake of simplicity, we'll call it Item from now on. These requests will address some common CMS-inspired tasks, such as: displaying a list of items, viewing/editing the selected item's details, handling filters and text-based search queries, and also deleting an item. Before going further, let's have a more detailed look at what happens between any of these Data Requests issued by the client and the JSON Responses sent out by the server, i.e. what's usually called the Request/Response flow. As we can see, in order to respond to any client-issued Async Data Request, we need to build a server-side MVC6 WebAPI Controller featuring the following capabilities:

Read and/or write data using the Data Access Layer.
Organize this data in a suitable, JSON-serializable ViewModel.
Serialize the ViewModel and send it to the client as a JSON Response.

Based on these points, we could easily conclude that the ViewModel is the key item here. That's not always correct: it could or couldn't be the case, depending on the project we are building. To better clarify that, before going further, it could be useful to spend a couple of words on the ViewModel object itself.

The role of the ViewModel

We all know that a ViewModel is a container-type class which represents only the data we want to display on our webpage.
In any standard MVC-based ASP.NET application, the ViewModel is instantiated by the Controller in response to a GET request, using the data fetched from the Model: once built, the ViewModel is passed to the View, where it is used to populate the page contents/input fields. The main reason for building a ViewModel instead of directly passing the Model entities is that it only represents the data that we want to use, and nothing else. All the unnecessary properties that are in the model domain object will be left out, keeping the data transfer as lightweight as possible. Another advantage is the additional security it gives, since we can protect any field from being serialized and passed through the HTTP channel. In a standard Web API context, where the data is passed using RESTful conventions via serialized formats such as JSON or XML, the ViewModel could easily be replaced by a JSON-serializable dynamic object created on the fly, such as this:

var response = new {
    Id = "1",
    Title = "The title",
    Description = "The description"
};

This approach is often viable for small or sample projects, where creating one (or many) ViewModel classes could be a waste of time. That's not our case, though: conversely, our project will greatly benefit from having a well-defined, strongly typed ViewModel structure, even if it will all eventually be converted into JSON strings.

Our first controller

Now that we have a clear vision of the Request/Response flow and its main actors, we can start building something up. Let's start with the Welcome View, which is the first page that any user will see upon connecting to our native web app. This is something that in a standard web application would be called the Home Page, but since we are following a Single Page Application approach, that name isn't appropriate. After all, we are not going to have more than one page. In most Wikis, the Welcome View/Home Page contains a brief text explaining the context/topic of the project, and then one or more lists of items ordered and/or filtered in various ways, such as:

The last inserted ones (most recent first).
The most relevant/visited ones (most viewed first).
Some random items (in random order).

Let's try to do something like that. This will be our master plan for a suitable Welcome View. In order to do that, we're going to need the following set of API calls:

api/items/GetLatest (to fetch the last inserted items).
api/items/GetMostViewed (to fetch the most viewed items).
api/items/GetRandom (to fetch some randomly picked items).

As we can see, all of them will be returning a list of items ordered by a well-defined logic. That's why, before working on them, we should provide ourselves with a suitable ViewModel.

The ItemViewModel

One of the biggest advantages in building a Native Web App using ASP.NET and Angular 2 is that we can start writing our code without worrying too much about data sources: they will come later, and only after we're sure about what we really need. This is not a requirement, either: you are also free to start with your data source, for a number of good reasons, such as:

You already have a clear idea of what you'll need.
You already have your entity set(s) and/or a defined/populated data structure to work with.
You're used to starting with the data, then moving to the GUI.

All the above reasons are perfectly fine: you won't ever get fired for doing that.
Yet, the chance to start with the front-end might help you a lot if you're still unsure about how your application will look, either in terms of GUI and/or data. In building this Native Web App, we'll take advantage of that: hence, we'll start by defining our Item ViewModel instead of creating its Data Source and Entity class. From Solution Explorer, right-click on the project root node and add a new folder named ViewModels. Once created, right-click on it and add a new item: from the server-side elements, pick a standard Class, name it ItemViewModel.cs, hit the Add button, and then type in the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace OpenGameListWebApp.ViewModels
{
    [JsonObject(MemberSerialization.OptOut)]
    public class ItemViewModel
    {
        #region Constructor
        public ItemViewModel()
        {
        }
        #endregion Constructor

        #region Properties
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public string Text { get; set; }
        public string Notes { get; set; }
        [DefaultValue(0)]
        public int Type { get; set; }
        [DefaultValue(0)]
        public int Flags { get; set; }
        public string UserId { get; set; }
        [JsonIgnore]
        public int ViewCount { get; set; }
        public DateTime CreatedDate { get; set; }
        public DateTime LastModifiedDate { get; set; }
        #endregion Properties
    }
}

As we can see, we're defining a rather complex class: this isn't something we could easily handle using a dynamic object created on the fly, hence we're using a ViewModel instead. We will be installing Newtonsoft's Json.NET package using NuGet. We start using it in this class by including its namespace in line 6 and decorating our newly created class with a JsonObject attribute in line 10. That attribute can be used to set a list of behaviours of the JsonSerializer/JsonDeserializer methods, overriding the default ones: notice that we used MemberSerialization.OptOut, meaning that any field will be serialized into JSON unless it is decorated by an explicit JsonIgnore attribute or NonSerialized attribute. We are making this choice because we're going to need most of our ViewModel properties serialized, as we'll be seeing soon enough.

The ItemController

Now that we have our ItemViewModel class, let's use it to return some server-side data. From your project's root node, open the /Controllers/ folder: right-click on it, select Add > New Item, then create a Web API Controller class, name it ItemController.cs, and click the Add button to create it. The controller will be created with a bunch of sample methods: they are identical to those present in the default ValueController.cs, hence we don't need to keep them.
Delete the entire file content and replace it with the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using OpenGameListWebApp.ViewModels;
using Newtonsoft.Json;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        // GET api/items/GetLatest/5
        [HttpGet("GetLatest/{num}")]
        public JsonResult GetLatest(int num)
        {
            var arr = new List<ItemViewModel>();
            for (int i = 1; i <= num; i++)
            {
                arr.Add(new ItemViewModel()
                {
                    Id = i,
                    Title = String.Format("Item {0} Title", i),
                    Description = String.Format("Item {0} Description", i)
                });
            }
            var settings = new JsonSerializerSettings()
            {
                Formatting = Formatting.Indented
            };
            return new JsonResult(arr, settings);
        }
    }
}

This controller will be in charge of all Item-related operations within our app. As we can see, we started by defining a GetLatest method accepting a single Integer parameter value. The method accepts any GET request using the custom routing rules configured via the HttpGet attribute: this approach is called Attribute Routing, and we'll be digging more into it later in this article. For now, let's stick to the code inside the method itself. The behaviour is really simple: since we don't (yet) have a Data Source, we're basically mocking a bunch of ItemViewModel objects. Notice that, although it's just a fake response, we're doing it in a structured and credible way, respecting the number of items issued by the request and also providing different content for each one of them. It's also worth noticing that we're using a JsonResult return type, which is the best thing we can do as long as we're working with ViewModel classes featuring the JsonObject attribute provided by the Json.NET framework: that's definitely better than returning plain string or IEnumerable<string> types, as it will automatically take care of serializing the outcome and setting the appropriate response headers. Let's try our Controller by running our app in Debug mode: select Debug > Start Debugging from the main menu, or press F5. The default browser should open, pointing to the index.html page, because we set it as the Launch URL in our project's debug properties. In order to test our brand new API Controller, we need to manually change the URL to the following:

/api/items/GetLatest/5

If we did everything correctly, it will show something like the following: our first controller is up and running. As you can see, the ViewCount property is not present in the JSON-serialized output: that's by design, since it has been flagged with the JsonIgnore attribute, meaning that we're explicitly opting it out. Now that we've seen that it works, we can come back to the routing aspect of what we just did: since it is a major topic, it's well worth some of our time.

Understanding routes

We will acknowledge the fact that the ASP.NET Core pipeline has been completely rewritten in order to merge the MVC and WebAPI modules into a single, lightweight framework able to handle both worlds. Although this certainly is a good thing, it comes with the usual downside that we need to learn a lot of new stuff. Handling routes is a perfect example of this, as the new approach defines some major breaking changes from the past.

Defining routing

The first thing we should do is give a proper definition of what routing actually is. To put it simply, we could say that URL routing is the server-side feature that allows a web developer to handle HTTP requests pointing to URIs that do not map to physical files.
Such a technique can be used for a number of different reasons, including:

Giving dynamic pages semantic, meaningful, and human-readable names in order to improve readability and/or search-engine optimization (SEO).
Renaming or moving one or more physical files within your project's folder tree without being forced to change their URLs.
Setting up aliases and redirects.

Routing through the ages

In earlier times, when ASP.NET was just Web Forms, URL routing was strictly bound to physical files: in order to implement viable URL convention patterns, developers were forced to install/configure a dedicated URL rewriting tool, using either an external ISAPI filter such as Helicon Tech's ISAPI_Rewrite or, starting with IIS7, the IIS URL Rewrite Module. When ASP.NET MVC was released, the routing pattern was completely rewritten: developers could set up their own convention-based routes in a dedicated file (RouteConfig.cs or Global.asax, depending on the template) using the Routes.MapRoute method. If you've played along with MVC 1 through 5 or WebAPI 1 and/or 2, snippets like this should be quite familiar to you:

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

This method of defining routes, strictly based upon pattern-matching techniques used to relate any given URL request to a specific Controller Action, went by the name of Convention-based Routing. ASP.NET MVC5 brought something new, as it was the first version supporting so-called Attribute-based Routing. This approach was designed as an effort to give developers a more versatile option. If you used it at least once, you'll probably agree that it was a great addition to the framework, as it allowed developers to define routes within the Controller file. Even those who chose to keep the convention-based approach could find it useful for one-time overrides like the following, without having to sort it out using regular expressions:

[RoutePrefix("v2Products")]
public class ProductsController : Controller
{
    [Route("v2Index")]
    public ActionResult Index()
    {
        return View();
    }
}

In ASP.NET MVC6, the routing pipeline has been rewritten completely: that's why things like the Routes.MapRoute() method are not used anymore, as well as any explicit default routing configuration. You won't find anything like that in the new Startup.cs file, which contains a very small amount of code and (apparently) nothing about routes.

Handling routes in ASP.NET MVC6

We could say that the reason behind the disappearance of the Routes.MapRoute method from the application's main configuration file is the fact that there's no need to set up default routes anymore. Routing is handled by the two brand-new services.AddMvc() and app.UseMvc() methods called within the Startup.cs file, which respectively register MVC using the Dependency Injection framework built into ASP.NET Core and add a set of default routes to our app. We can take a look at what happens under the hood by looking at the current implementation of the UseMvc() method in the framework code (relevant lines are highlighted):

public static IApplicationBuilder UseMvc(
    [NotNull] this IApplicationBuilder app,
    [NotNull] Action<IRouteBuilder> configureRoutes)
{
    // Verify if AddMvc was done before calling UseMvc
    // We use the MvcMarkerService to make sure if all the services were added.
    MvcServicesHelper.ThrowIfMvcNotRegistered(app.ApplicationServices);
    var routes = new RouteBuilder
    {
        DefaultHandler = new MvcRouteHandler(),
        ServiceProvider = app.ApplicationServices
    };
    configureRoutes(routes);
    // Adding the attribute route comes after running the user-code because
    // we want to respect any changes to the DefaultHandler.
    routes.Routes.Insert(0, AttributeRouting.CreateAttributeMegaRoute(
        routes.DefaultHandler, app.ApplicationServices));
    return app.UseRouter(routes.Build());
}

The good thing about this is that the framework now handles all the hard work, iterating through all the Controllers' actions and setting up their default routes, thus saving us some work. It is worth noticing that the default ruleset follows the standard RESTful conventions, meaning that it will be restricted to the following action names: Get, Post, Put, Delete. We could say here that ASP.NET MVC6 is enforcing a strict WebAPI-oriented approach, which is much to be expected, since it incorporates the whole ASP.NET Core framework. Following the RESTful convention is generally a great thing to do, especially if we aim to create a set of pragmatic, RESTful-based public APIs to be used by other developers. Conversely, if we're developing our own app and we want to keep our API accessible to our eyes only, going for custom routing standards is just as viable: as a matter of fact, it could even be a better choice, to shield our Controllers against some of the most trivial forms of request flood and/or DDoS-based attacks. Luckily enough, both Convention-based Routing and Attribute-based Routing are still alive and well, allowing you to set up your own standards.

Convention-based routing

If we feel like using the most classic routing approach, we can easily resurrect our beloved MapRoute() method by enhancing the app.UseMvc() call within the Startup.cs file in the following way:

app.UseMvc(routes =>
{
    // Route Sample A
    routes.MapRoute(
        name: "RouteSampleA",
        template: "MyOwnGet",
        defaults: new { controller = "Items", action = "Get" }
    );
    // Route Sample B
    routes.MapRoute(
        name: "RouteSampleB",
        template: "MyOwnPost",
        defaults: new { controller = "Items", action = "Post" }
    );
});

Attribute-based routing

Our previously shown ItemController.cs makes good use of the Attribute-based Routing approach, featuring it both at Controller level:

[Route("api/[controller]")]
public class ItemsController : Controller

and at Action Method level:

[HttpGet("GetLatest")]
public JsonResult GetLatest()

Three choices to route them all

Long story short, ASP.NET MVC6 gives us three different choices for handling routes: enforcing the standard RESTful conventions, reverting back to the good old Convention-based Routing, or decorating the Controller files with Attribute-based Routing. It's also worth noticing that Attribute-based routes, if and when defined, will override any matching Convention-based pattern: both of them, if/when defined, will override the default RESTful conventions created by the built-in UseMvc() method. In this article we're going to use all of these approaches, in order to learn when, where, and how to properly make use of each of them.

Adding more routes

Let's get back to our ItemController. Now that we're aware of the routing patterns we can use, we can use that knowledge to implement the API calls we're still missing.
Open the ItemController.cs file and add the following code (new lines are highlighted):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using OpenGameListWebApp.ViewModels;
using Newtonsoft.Json;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        #region Attribute-based Routing
        /// <summary>
        /// GET: api/items/GetLatest/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the last inserted items.</returns>
        [HttpGet("GetLatest/{n}")]
        public IActionResult GetLatest(int n)
        {
            var items = GetSampleItems().OrderByDescending(i => i.CreatedDate).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetMostViewed/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the items with most user views.</returns>
        [HttpGet("GetMostViewed/{n}")]
        public IActionResult GetMostViewed(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderByDescending(i => i.ViewCount).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetRandom/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing some randomly-picked items.</returns>
        [HttpGet("GetRandom/{n}")]
        public IActionResult GetRandom(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderBy(i => Guid.NewGuid()).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }
        #endregion

        #region Private Members
        /// <summary>
        /// The maximum number of items to return in a single call.
        /// (Assumed cap, not shown in the original snippet: adjust to suit your app.)
        /// </summary>
        private int MaxNumberOfItems
        {
            get { return 100; }
        }

        /// <summary>
        /// Generate a sample array of source Items to emulate a database (for testing purposes only).
        /// </summary>
        /// <param name="num">The number of items to generate: default is 999</param>
        /// <returns>a defined number of mock items (for testing purposes only)</returns>
        private List<ItemViewModel> GetSampleItems(int num = 999)
        {
            List<ItemViewModel> lst = new List<ItemViewModel>();
            DateTime date = new DateTime(2015, 12, 31).AddDays(-num);
            for (int id = 1; id <= num; id++)
            {
                lst.Add(new ItemViewModel()
                {
                    Id = id,
                    Title = String.Format("Item {0} Title", id),
                    Description = String.Format("This is a sample description for item {0}: Lorem ipsum dolor sit amet.", id),
                    CreatedDate = date.AddDays(id),
                    LastModifiedDate = date.AddDays(id),
                    ViewCount = num - id
                });
            }
            return lst;
        }

        /// <summary>
        /// Returns a suitable JsonSerializerSettings object that can be used to generate the JsonResult return value for this Controller's methods.
        /// </summary>
        private JsonSerializerSettings DefaultJsonSettings
        {
            get
            {
                return new JsonSerializerSettings() { Formatting = Formatting.Indented };
            }
        }
        #endregion
    }
}

We added a lot of things there, that's for sure. Let's see what's new:

We added the GetMostViewed(n) and GetRandom(n) methods, built upon the same mocking logic used for GetLatest(n): each one requires a single parameter of Integer type to specify the (maximum) number of items to retrieve.
We added two new private members:
The GetSampleItems() method, to generate some sample Item objects when we need them. This method is an improved version of the dummy item generator loop we had inside the previous GetLatest() method implementation, as it acts more like a Dummy Data Provider: we'll say more about it later on.
The DefaultJsonSettings property, so we won't have to manually instantiate a JsonSerializerSettings object every time.

We also decorated each class member with a dedicated <summary> documentation tag explaining what it does and its return value. These tags will be used by IntelliSense to show real-time information about the type within the Visual Studio GUI. They will also come in handy when we want to produce auto-generated XML documentation for our project by using industry-standard documentation tools such as Sandcastle. Finally, we added some #region / #endregion pre-processor directives to separate our code into blocks. We'll do this a lot from now on, as this will greatly increase our source code's readability, allowing us to expand or collapse different sections/parts when we don't need them, thus focusing more on what we're working on.

For more info regarding documentation tags, take a look at the following MSDN official documentation page: https://msdn.microsoft.com/library/2d6dt3kf.aspx. If you want to know more about C# pre-processor directives, this is the one to check out instead: https://msdn.microsoft.com/library/9a1ybwek.aspx.

The dummy data provider

Our new GetSampleItems() method deserves a couple more words. As we can easily see, it emulates the role of a Data Provider, returning a list of items in a credible fashion. Notice that we built it in such a way that it will always return identical items, as long as the num parameter value remains the same:

The generated items' Ids will follow a linear sequence, from 1 to num.
Any generated item will have incremental CreatedDate and LastModifiedDate values based upon its Id: the higher the Id, the more recent the two dates will be, up to 31 December 2015. This follows the assumption that the most recent items will have the highest Ids, as is normally the case for DBMS records featuring numeric, auto-incremental keys.
Any generated item will have a decreasing ViewCount value based upon its Id: the higher the Id is, the lower it will be. This follows the assumption that newer items will generally get fewer views than older ones.

While it obviously lacks any insert/update/delete features, this Dummy Data Provider is viable enough to serve our purposes until we replace it with an actual, persistence-based Data Source. Technically speaking, we could do something better than we did by using one of the many mocking frameworks available through NuGet: Moq, NMock3, NSubstitute, or Rhino, just to name a few.

Summary

We spent some time putting the standard application data flow under our lens: a two-way communication pattern between the server and its clients, built upon the HTTP protocol. We acknowledged the fact that we'll mostly be dealing with JSON-serializable objects such as Items, so we chose to equip ourselves with an ItemViewModel server-side class, together with an ItemController that will actively use it to expose the data to the client. We started building our MVC6-based WebAPI interface by implementing a number of methods required to create the client-side UI we chose for our Welcome View, consisting of three item listings to show to our users: the last inserted ones, the most viewed ones, and some random picks. We routed the requests to them by using a custom set of Attribute-based routing rules, which seemed to be the best choice for our specific scenario. While we were there, we also took the chance to add a dedicated method to retrieve a single Item from its unique Id, assuming we're going to need it for sure.
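As a quick illustration of what these endpoints actually emit, a request to /api/items/GetLatest/1 served by the dummy data provider should return JSON shaped roughly like the following sketch. The exact date format depends on the serializer defaults; Text, Notes, and UserId are null (and Type and Flags are 0) because the generator never sets them, and ViewCount is absent because of its JsonIgnore attribute:

[
  {
    "Id": 999,
    "Title": "Item 999 Title",
    "Description": "This is a sample description for item 999: Lorem ipsum dolor sit amet.",
    "Text": null,
    "Notes": null,
    "Type": 0,
    "Flags": 0,
    "UserId": null,
    "CreatedDate": "2015-12-31T00:00:00",
    "LastModifiedDate": "2015-12-31T00:00:00"
  }
]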
Resources for Article:

Further resources on this subject:
Designing your very own ASP.NET MVC Application [article]
ASP.Net Site Performance: Improving JavaScript Loading [article]
Displaying MySQL data on an ASP.NET Web Page [article]

Running Your Applications with AWS - Part 2

Cheryl Adams
19 Aug 2016
6 min read
An active account with AWS means you are on your way to building in the cloud. Before you start building, you need to tackle Billing and Cost Management, under Account. It is likely that you are starting with the free tier, so it is important to know that you still have the option of paying for additional services. Also, if you decide to continue with AWS, you should get familiar with this page. This is not your average bill or invoice page: it is much more than that. The Billing & Cost Management Dashboard is a bird's-eye view of all of your account activity. Once you start accumulating pay-as-you-go services, this page will give you a quick review of your monthly spending, broken down by service. Part of managing your cloud services includes billing, so it is a good idea to become familiar with this from the start. Amazon also gives you the option of setting up cost-based alerts for your system, which is essential if you want to be alerted to any excessive costs related to your cloud services. Budgets allow you to receive e-mailed notifications or alerts if spending exceeds the budget that you have created. If you want to dig in even deeper, try turning on Cost Explorer for an analysis of your spending.

The Billing and Cost Management section of your account is much more than just invoices. It is AWS's complete cost-management system for your cloud. Being familiar with all aspects of the cost-management system will help you to monitor your cloud services and, hopefully, avoid any expenses that may exceed your budget. In our previous discussion, we considered all AWS services. Let's take another look at the details of the services.

Amazon Web Services

Based on this illustration, you can see that the build options are grouped under headings such as Compute, Storage & Content Delivery, and Databases. Each of these objects or services lists a step-by-step routine that is easy to follow. Within the AWS site, there are numerous tutorials with detailed build instructions. If you are still exploring in the free tier, AWS also has an active online community of users who try to answer most questions.

Let's look at the build process for Amazon's EC2 Virtual Server. The first thing that you will notice is that Amazon provides 22 different Amazon Machine Images (AMIs) to choose from (at the time this post was written). At the top of the screen is a Step process that will guide you through the build. It should be noted that some of the images available are not defined as part of the free-tier plan. The remaining images that do fit into the plan should fit almost any project need. For this walkthrough, let's select SUSE Linux (free-tier eligible). It is important to note that just because the image itself is free, that does not mean all the options available within that image are free. Notice on this screen that Amazon has pre-selected the only free-tier option available for this image. From this screen you are given two options: (Review and Launch) or (Next: Configure Instance Details). Let's try Review and Launch to see what occurs. Notice that our Step process advanced to Step 7. Amazon gives you a soft warning regarding the state of the build and potential risks. If you are okay with these risks, you can proceed and launch your server. It is important to note that the Amazon build process is user-driven: it will allow you to build a server with these potential risks in your cloud. It is recommended that you carefully consider each screen before proceeding.
In this instance, select Previous, not Cancel, to return to Step 3. Selecting Cancel will stop the build process and return you to the AWS main services page. Until you actually launch your server, nothing is built or saved. There are information bubbles for each line in Step 3: Configure Instance Details. Review the content of each bubble, make any changes if needed, and then proceed to the next step. Select the storage size, then select Next: Tag Instance. Enter values and continue, or select Learn More for further information. Select the Next: Configure Security Group button. Security is an extremely important part of setting up your virtual server. It is recommended that you speak to your security administrator to determine the best option. For the source, it is recommended that you avoid using the Anywhere option; this selection will put your build at risk. Select My IP or a custom IP as shown. If you are involved in a self-study plan, you can select the Learn More link to determine the best option. Next: Review and Launch. The full details of this screen can be expanded, reviewed, or edited. If everything appears to be okay, proceed to Launch. One additional screen will appear for adding private and/or public keys to access your new server. Make the appropriate selection and proceed to Launch Instances to see the build process. You can access your new server from the EC2 Dashboard.

This example of a build gives you a window into how the AWS build process works. The other objects and services have a similar step-through process. Once you have launched your server, you should be able to access it and proceed with your development. Additional details for development are also available through the site. Amazon's Web Services platform is an all-in-one solution for your graduation to the cloud. Not only can you manage your technical environment, but it also has features that allow you to manage your budget. By setting up your virtual appliances and servers appropriately, you can maximize the value of the first 12 months of your free tier. Carefully monitoring activities through alerts and notifications will help you to avoid any billing surprises. Going through the tutorials and visiting the online community will only increase your knowledge of AWS. AWS is inviting everyone to test their services on this exciting platform, so I would definitely recommend taking advantage of it. Have fun!

About the author

Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.


Adding Charts to Dashboards

Packt
19 Aug 2016
11 min read
In this article by Vince Sesto, author of the book Learning Splunk Web Framework, we will study adding charts to dashboards. We have a development branch to work from, and we are going to work further with the SimpleXMLDashboard dashboard. We should already be on our development server environment, as we have just switched over to our new development branch. We are going to create a new column chart showing the daily NASA site access for our top educational users. We will change the label of the dashboard and, finally, place an average overlay on top of our chart. (For more resources related to this topic, see here.)

Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for all our dashboards:

cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views

We are going to work on the simplexmldashboard.xml file. Open this file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. It is not compulsory to indent and nest your Simple XML code, but it is a good idea to have consistent indentation and commenting to make sure your code is clear and stays as readable as possible.

Let's start by changing the name of the dashboard that is displayed to the user. Change line 2 to the following line of code (don't include the line numbers):

2   <label>Educational Site Access</label>

Move down to line 16 and you will see that we have closed off our row element with a </row>. We are going to add in a new row where we will place our new chart. After the sixteenth line, add the following three lines to create a new row element, a new panel to add our chart, and finally, open up our new chart element:

17  <row>
18    <panel>
19      <chart>

The next two lines will give our chart a title, and we can then open up our search:

20      <title>Top Educational User</title>
21      <search>

To create a new search, just like we would enter in the Splunk search bar, we will use the query tag, as listed in our next line of code. In our search element, we can also set the earliest and latest times for our search, but in this instance we are using the entire data source:

22      <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay</query>
23      <earliest>0</earliest>
24      <latest></latest>
25      </search>

We have completed our search, and we can now modify the way the chart will look on our panel with the option chart elements. In our next four lines of code, we set the chart type as a column chart, set the legend to the bottom of the chart area, remove any master legend, and finally set the height as 250 pixels:

26      <option name="charting.chart">column</option>
27      <option name="charting.legend.placement">bottom</option>
28      <option name="charting.legend.masterLegend">null</option>
29      <option name="height">250px</option>

We need to close off the chart, panel, row, and finally the dashboard elements. Make sure you only close off the dashboard element once:

30      </chart>
31    </panel>
32  </row>
33  </dashboard>

We have done a lot of work here. We should be saving and testing our code for every 20 or so lines that we add, so save your changes. And, as we mentioned earlier in the article, we want to refresh our cache by entering the following URL into our browser:

http://<host:port>/debug/refresh

When we view our page, we should see a new column chart at the bottom of our dashboard showing the usage per day for the calclab1.math.tamu.edu domain.
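For reference, here is the new row assembled as one block; this is simply the numbered snippets above joined together (with the line numbers removed), not any additional code:

<row>
  <panel>
    <chart>
      <title>Top Educational User</title>
      <search>
        <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
      <option name="charting.chart">column</option>
      <option name="charting.legend.placement">bottom</option>
      <option name="charting.legend.masterLegend">null</option>
      <option name="height">250px</option>
    </chart>
  </panel>
</row>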
But we're not done with that chart yet. We want to put a line overlay showing the average site access per day for our user. Open up simplexmldashboard.xml again and change our query in line 22 to the following:

22      <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay | eventstats avg(count) as average | eval average=round(average,0)</query>

Simple XML contains some special characters, which are ', <, >, and &. If you intend to use advanced search queries, you may need to use these characters, and if so, you can do so by either using their HTML entities or using the CDATA tags, where you wrap your query with <![CDATA[ and ]]>.

We now need to add two new option lines into our Simple XML code. After line 29, add the following two lines, without replacing any of the closing elements that we previously entered. The first will set the chart overlay field to be displayed for the average field; the next will set the color of the overlay:

30      <option name="charting.chart.overlayFields">average</option>
31      <option name="charting.fieldColors">{"count": 0x639BF1, "average":0xFF5A09}</option>

Save your new changes, refresh the cache, and then reload your page. You should be seeing something similar to the following screenshot.

The Simple XML of charts

As we can see from our example, it is relatively easy to create and configure our charts using Simple XML. When we completed the chart, we used a handful of options to configure its look and feel, but there are many more that we can choose from. Our chart element always needs to be in between its two parent elements, which are row and panel. Within our chart element, we always start with a title for the chart and a search to power the chart. We can then set additional optional settings for earliest and latest, and then a list of options to configure the look and feel, as demonstrated below. If these options are not specified, default values are provided by Splunk:

1  <chart>
2    <title></title>
3    <search>
4      <query></query>
5      <earliest>0</earliest>
6      <latest></latest>
7    </search>
8    <option name=""></option>
9  </chart>

There is a long list of options that can be set for our charts; the following is a list of the more important options to know:

charting.chart: This is where you set the chart type, with area, bar, bubble, column, fillerGauge, line, markerGauge, pie, radialGauge, and scatter being the charts that you can choose from.
charting.backgroundColor: Set the background color of your chart with a Hex color value.
charting.drilldown: Set to either all or none. This allows the chart to be clicked on to allow the search to be drilled down for further information.
charting.fieldColors: This can map a color to a field, as we did with our average field in the preceding example.
charting.fontColor: Set the value of the font color in the chart with a Hex color value.
height: The height of the chart in pixels. The value must be between 100 and 1,000 pixels.

A lot of the options seem to be self-explanatory, but a full list of options and descriptions can be found in the Splunk reference material at the following URL: http://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference.

Expanding our Splunk App with maps

We will now go through another example in our NASA Squid and Web Data App to run through a more complex type of visualization to present to our user.
We will use the Basic Dashboard that we created, but we will change the Simple XML to give it a more meaningful name, and then set up a map to present to our users where our requests are actually coming from. Maps use a map element and don't rely on the chart element we have been using. The Simple XML code for the dashboard we created earlier in this article looks like the following:

<dashboard>
  <label>Basic Dashboard</label>
</dashboard>

So let's get to work and give our Basic Dashboard a little "bling":

Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for our Basic Dashboard:

cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views

Open the basic_dashboard.xml file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. We might as well remove all of the code that is in there, because it is going to look completely different than it did originally.

Now start by setting up your dashboard and label elements, with a label that will give you more information on what the dashboard contains:

1  <dashboard>
2    <label>Show Me Your Maps</label>

Open your row, panel, and map elements, and set a title for the new visualization. Make sure you use the map element and not the chart element:

3    <row>
4      <panel>
5        <map>
6          <title>User Locations</title>

We can now add our search query within our search elements. We will only search for IP addresses in our data, and use the geostats Splunk function to extract a latitude and longitude from the data:

7          <search>
8            <query>index=main sourcetype="nasasquidlogs" | search From=1* | iplocation From | geostats latfield=lat longfield=lon count by From</query>
9            <earliest>0</earliest>
10           <latest></latest>
11         </search>

The search query that we have in our Simple XML code is more advanced than the previous queries we have implemented. If you need further details on the functions provided in the query, please refer to the Splunk search documentation at the following location: http://docs.splunk.com/Documentation/Splunk/6.4.1/SearchReference/WhatsInThisManual.

Now all we need to do is close off all our elements, and that is all that is needed to create our new visualization of IP address requests:

12         </map>
13       </panel>
14     </row>
15   </dashboard>

If your dashboard looks similar to the image below, I think it looks pretty good. But there is more we can do with our code to make it look even better. We can set extra options in our Simple XML code to zoom in, only display a certain part of the map, set the size of the markers, and finally set the minimum and maximum zoom levels for the screen. The map looks pretty good, but it seems that a lot of the traffic is being generated by users in the USA. Let's have a look at setting some extra configurations in our Simple XML to change the way the map displays to our users.

Get back to our basic_dashboard.xml file and add the following options. After our search element is closed off, we can add the following options. First we will set the maximum number of clusters to be displayed on our map as 100. This will hopefully speed up the display of our map, and allow all the data points to be viewed further with the drilldown option:

12         <option name="mapping.data.maxClusters">100</option>
13         <option name="mapping.drilldown">all</option>

We can now set our central point for the map to load, using latitude and longitude values.
In this instance, we are going to set the heart of the USA as our central point. We are also going to set our zoom value to 4, which will zoom in a little further from the default of 2:

14 <option name="mapping.map.center">(38.48,-102)</option>
15 <option name="mapping.map.zoom">4</option>

Remember that we need to have our map, panel, row, and dashboard elements closed off. Save the changes and reload the cache. Let's see what is now displayed:

Your map should now be displaying a little faster than it originally did. It will be focused on the USA, where the bulk of the traffic is coming from. The map element has numerous options to use and configure, and a full list can be found at the following Splunk reference page: http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML.

Summary

In this article, we covered Simple XML charts and how to expand our Splunk App with maps.

Resources for Article:

Further resources on this subject:
The Splunk Web Framework [article]
Splunk's Input Methods and Data Feeds [article]
The Splunk Interface [article]
Running Your Applications with AWS

Cheryl Adams
17 Aug 2016
4 min read
If you've ever been told not to run with scissors, you should not have the same concern when running with AWS. It is neither dangerous nor unsafe when you know what you are doing and where to look when you don't. Amazon's current service offering, AWS (Amazon Web Services), is a collection of services, applications, and tools that can be used to deploy your infrastructure and application environment to the cloud. Amazon gives you the option to start their service offerings with a 'free tier' and then move toward a pay-as-you-go model. We will highlight a few of the features available when you open your account with AWS.

One of the first things you will notice is that Amazon offers a wealth of information regarding cloud computing right up front. Whether you are a novice, an amateur, or an expert in cloud computing, Amazon offers documented information before you create your account. This type of information is essential if you are exploring this tool for a project or doing some self-study on your own. If you are a pre-existing Amazon customer, you can use your same account to get started with AWS. If you want to keep your personal account separate from your development or business, it would be best to create a separate account.

Amazon Web Services Landing Page

The Free Tier is one of the most attractive features of AWS. As a new account, you are entitled to twelve months within the Free Tier. In addition to this span of time, there are services that can continue after the free tier is over. This gives the user ample time to explore the offerings within this free-tier period. The caution is not to exceed the free service limitations, as that will incur charges. Setting up the free tier still requires a credit card. Fee-based services will be offered throughout the free tier, so it is important not to select a fee-based service unless you are ready to start paying for it. Actual paid use will vary based on what you have selected.

AWS Service and Offerings (shown on an open account)

AWS overview of services available on the landing page

Amazon's service list is very robust. If you are already considering AWS, hopefully this means you are aware of what you need, or at least what you would like to use. If not, this would be a good time to press pause and look at some resource-based materials. Before the clock starts ticking on your free tier, a slow walk through the introductory information on this site is recommended, to ensure that you are selecting the right mix of services before creating your account. Amazon's technical resources include a 10-minute tutorial that gives you a complete overview of the services. Topics like 'AWS Training and Introduction' and 'Get Started with AWS' include a list of 10-minute videos as well as a short list of 'how to' instructions for some of the more commonly used features. If you are a techie by trade or hobby, this may be something you want to dive into immediately. In a company, generally there is a predefined need or issue that the organization may feel can be resolved by the cloud. If it is a team initiative, it would be good to review the resources mentioned in this article so that everyone is on the same page as to what this solution can do. It's recommended, before you start any trial, subscription, or new service, that you have a set goal or expectation of why you are doing it. Simply stated, a cloud solution is not the perfect solution for everyone. There is so much information here on the AWS site.
It's also great if you are comparing competing cloud service vendors in the same space. You will be able to do a complete assessment of most services within the free tier. You can map out use-case scenarios to determine whether AWS is the right fit for your project. AWS First Project is a great place to get started if you are new to AWS. If you are wondering how to get started, these technical resources will set you in the right direction. By reviewing this information during your setup, or before you start, you will be able to make good use of your first few months and your introduction to AWS.

About the author

Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.
Exception Handling with Python

Packt
17 Aug 2016
10 min read
In this article, by Ninad Sathaye, author of the book Learning Python Application Development, you will learn techniques to make an application more robust by handling exceptions. Specifically, we will cover the following topics:

What are exceptions in Python?
Controlling the program flow with the try…except clause
Dealing with common problems by handling exceptions
Creating and using custom exception classes

(For more resources related to this topic, see here.)

Exceptions

Before jumping straight into the code and fixing these issues, let's first understand what an exception is and what we mean by handling an exception.

What is an exception?

An exception is an object in Python. It gives us information about an error detected during the program execution. The errors noticed while debugging the application were unhandled exceptions, as we didn't see those coming. Later in the article, you will learn the techniques to handle these exceptions. The ValueError and IndexError exceptions seen in the earlier tracebacks are examples of built-in exception types in Python. In the following section, you will learn about some other built-in exceptions supported in Python.

Most common exceptions

Let's quickly review some of the most frequently encountered exceptions. The easiest way is to try running some buggy code and let it report the problem as an error traceback! Start your Python interpreter and write the following code:

Here are a few more exceptions:

As you can see, each line of the code throws an error traceback with an exception type (shown highlighted). These are a few of the built-in exceptions in Python. A comprehensive list of built-in exceptions can be found in the following documentation: https://docs.python.org/3/library/exceptions.html#bltin-exceptions

Python provides BaseException as the base class for all built-in exceptions. However, most of the built-in exceptions do not directly inherit BaseException. Instead, they are derived from a class called Exception, which in turn inherits from BaseException. The built-in exceptions that deal with program exit (for example, SystemExit) are derived directly from BaseException. You can also create your own exception class as a subclass of Exception. You will learn about that later in this article.

Exception handling

So far, we saw how exceptions occur. Now, it is time to learn how to use the try…except clause to handle these exceptions. The following pseudocode shows a very simple example of the try…except clause:

Let's review the preceding code snippet: First, the program tries to execute the code inside the try clause. During this execution, if something goes wrong (if an exception occurs), it jumps out of this try clause. The remaining code in the try block is not executed. It then looks for an appropriate exception handler in the except clause and executes it.

The except clause used here is a universal one. It will catch all types of exceptions occurring within the try clause. Instead of having this "catch-all" handler, a better practice is to catch the errors that you anticipate and write exception handling code specific to those errors. For example, the code in the try clause might throw an AssertionError. Instead of using the universal except clause, you can write a specific exception handler, as follows:

Here, we have an except clause that exclusively deals with AssertionError. What it also means is that any error other than AssertionError will slip through as an unhandled exception. To catch those errors as well, we need to define multiple except clauses with different exception handlers. However, at any point of time, only one exception handler will be called. This can be better explained with an example. Let's take a look at the following code snippet:
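(In the source, this snippet appears as a screenshot; the following is a minimal reconstruction based on the description that follows it. The prompt text and handler messages are assumptions.)

def solve_something():
    a = int(input("Enter a number: "))
    assert a > 0
    # 'x' is deliberately never defined, so this line raises NameError
    print(x)

try:
    solve_something()
except AssertionError:
    print("AssertionError: the input number must be greater than zero")
except NameError:
    print("NameError: the variable 'x' is not defined")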
The try block calls solve_something(). This function accepts a number as user input and makes an assertion that the number is greater than zero. If the assertion fails, it jumps directly to the handler, except AssertionError. In the other scenario, with a > 0, the rest of the code in solve_something() is executed. You will notice that the variable x is not defined, which results in NameError. This exception is handled by the other exception clause, except NameError. Likewise, you can define specific exception handlers for anticipated errors.

Raising and re-raising an exception

The raise keyword in Python is used to force an exception to occur. Put another way, it raises an exception. The syntax is simple; just open the Python interpreter and type:

>>> raise AssertionError("some error message")

This produces the following error traceback:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError: some error message

In some situations, we need to re-raise an exception. To understand this concept better, here is a trivial scenario. Suppose, in the try clause, you have an expression that divides a number by zero. In ordinary arithmetic, this expression has no meaning. It's a bug! This causes the program to raise an exception called ZeroDivisionError. If there is no exception handling code, the program will just print the error message and terminate. What if you wish to write this error to some log file and then terminate the program? Here, you can use an except clause to log the error first. Then, use the raise keyword without any arguments to re-raise the exception. The exception will be propagated upwards in the stack. In this example, it terminates the program. Here is an example that shows how to re-raise an exception:

As can be seen, a division by zero exception is raised while solving the a/b expression. This is because the value of variable b is set to 0. For illustration purposes, we assumed that there is no specific exception handler for this error. So, we use the general except clause, where the exception is re-raised after logging the error. If you want to try this yourself, just write the code illustrated earlier in a new Python file, and run it from a terminal window. The following screenshot shows the output of the preceding code:

The else block of try…except

There is an optional else block that can be specified in the try…except clause. The else block is executed only if no exception occurs in the try…except clause. The syntax is as follows:

The else block is executed before the finally clause, which we will study next.

finally...clean it up!

There is something else to add to the try…except…else story: an optional finally clause. As the name suggests, the code within this clause is executed at the end of the associated try…except block. Whether or not an exception is raised, the finally clause, if specified, will certainly get executed at the end of the try…except clause. Imagine it as an all-weather guarantee given by Python! The following code snippet shows the finally block in action:
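(The snippet is a screenshot in the source; this is a minimal reconstruction inferred from the sample output shown next. The exact messages are assumptions.)

try:
    a = int(input("Enter a number: "))
    assert a > 0
    print("All is well.")
except AssertionError:
    print("Uh oh..Assertion Error.")
finally:
    # Runs whether or not the assertion failed
    print("Do some special cleanup")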
Running this simple code will produce the following output:

$ python finally_example1.py
Enter a number: -1
Uh oh..Assertion Error.
Do some special cleanup

The last line in the output is the print statement from the finally clause. The code snippets with and without the finally clause are shown in the following screenshot. The code in the finally clause is assured to be executed at the end, even when the except clause instructs the code to return from the function. The finally clause is typically used to perform clean-up tasks before leaving the function. An example use case is to close a database connection or a file. However, note that, for this purpose, you can also use the with statement in Python.

Writing a new exception class

It is trivial to create a new exception class derived from Exception. Open your Python interpreter and create the following class:

>>> class GameUnitError(Exception):
...     pass
...
>>>

That's all! We have a new exception class, GameUnitError, ready to be deployed. How to test this exception? Just raise it. Type the following line of code in your Python interpreter:

>>> raise GameUnitError("ERROR: some problem with game unit")

Raising the newly created exception will print the following traceback:

>>> raise GameUnitError("ERROR: some problem with game unit")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
__main__.GameUnitError: ERROR: some problem with game unit

Copy the GameUnitError class into its own module, gameuniterror.py, and save it in the same directory as attackoftheorcs_v1_1.py. Next, update the attackoftheorcs_v1_1.py file to include the following changes:

First, add the following import statement at the beginning of the file:

from gameuniterror import GameUnitError

The second change is in the AbstractGameUnit.heal method. The updated code is shown in the following code snippet. Observe the highlighted code that raises the custom exception whenever the value of self.health_meter exceeds that of self.max_hp.

With these two changes, run heal_exception_example.py created earlier. You will see the new exception being raised, as shown in the following screenshot:

Expanding the exception class

Can we do something more with the GameUnitError class? Certainly! Just like any other class, we can define attributes and use them. Let's expand this class further. In the modified version, it will accept an additional argument and some predefined error codes. The updated GameUnitError class is shown in the following screenshot:
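(A reconstruction consistent with the description that follows; the book uses a default error code of 000, shown here as the integer key 0, and the specific codes and message strings are illustrative assumptions.)

class GameUnitError(Exception):
    def __init__(self, message='', code=None):
        # Call the __init__ method of the Exception superclass first
        super().__init__(message)
        # Map integer error codes to error information
        self.error_dict = {
            0: "ERROR-000: Unspecified error!",
            101: "ERROR-101: Health meter problem!",
        }
        try:
            self.error_message = self.error_dict[code]
        except KeyError:
            # Unknown code: fall back to the default error code
            self.error_message = self.error_dict[0]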
Let's take a look at the preceding code:

First, it calls the __init__ method of the Exception superclass and then defines some additional instance variables. A new dictionary object, self.error_dict, holds the error integer codes and the error information as key-value pairs. The self.error_message stores the information about the current error, depending on the error code provided. The try…except clause ensures that error_dict actually has the key specified by the code argument. If it doesn't, in the except clause we just retrieve the value for the default error code of 000.

So far, we have made changes to the GameUnitError class and the AbstractGameUnit.heal method. We are not done yet. The last piece of the puzzle is to modify the main program in the heal_exception_example.py file. The code is shown in the following screenshot:

Let's review the code:

As the heal_by value is too large, the heal method in the try clause raises the GameUnitError exception. The new except clause handles the GameUnitError exception just like any other built-in exception. Within the except clause, we have two print statements. The first one prints health_meter > max_hp! (recall that when this exception was raised in the heal method, this string was given as the first argument to the GameUnitError instance). The second print statement retrieves and prints the error_message attribute of the GameUnitError instance.

We have got all the changes in place. We can run this example from a terminal window as:

$ python heal_exception_example.py

The output of the program is shown in the following screenshot:

In this simple example, we have just printed the error information to the console. You can further write verbose error logs to a file and keep track of all the error messages generated while the application is running.

Summary

This article served as an introduction to the basics of exception handling in Python. We saw how exceptions occur, learned about some common built-in exception classes, and wrote simple code to handle these exceptions using the try…except clause. The article also demonstrated techniques such as raising and re-raising exceptions, using the finally clause, and so on. The later part of the article focused on implementing custom exception classes. We defined a new exception class and used it for raising custom exceptions for our application. With exception handling, the code is in better shape.

Resources for Article:

Further resources on this subject:
Mining Twitter with Python – Influence and Engagement [article]
Exception Handling in MySQL for Python [article]
Python LDAP applications - extra LDAP operations and the LDAP URL library [article]
Preprocessing the Data

Packt
16 Aug 2016
5 min read
In this article, by Sampath Kumar Kanthala, the author of the book Practical Data Analysis, we discuss how to obtain, clean, normalize, and transform raw data into a standard format like CSV or JSON using OpenRefine. In this article we will cover:

Data scrubbing
Statistical methods
Text parsing
Data transformation

(For more resources related to this topic, see here.)

Data scrubbing

Scrubbing data, also called data cleansing, is the process of correcting or removing data in a dataset that is incorrect, inaccurate, incomplete, improperly formatted, or duplicated. The result of the data analysis process depends not only on the algorithms, but also on the quality of the data. That's why the next step after obtaining the data is data scrubbing. In order to avoid dirty data, our dataset should possess the following characteristics:

Correctness
Completeness
Accuracy
Consistency
Uniformity

Dirty data can be detected by applying some simple statistical data validation, and also by parsing the texts or deleting duplicate values. Missing or sparse data can lead you to highly misleading results.

Statistical methods

In this method, we need some context about the problem (knowledge domain) to find values that are unexpected and thus erroneous; even if the data type matches but the values are out of range, this can be resolved by setting the values to an average or mean value. Statistical validations can be used to handle missing values, which can be replaced by one or more probable values using interpolation, or by reducing the dataset using decimation.

Mean: The value calculated by summing up all values and then dividing by the number of values.
Median: The median is defined as the value where 50% of the values in a range will be below it, and 50% of the values above it.
Range constraints: Numbers or dates should fall within a certain range. That is, they have minimum and/or maximum possible values.
Clustering: Usually, when we obtain data directly from the user, some values include ambiguity or refer to the same value with a typo. For example, "Buchanan Deluxe 750ml 12x01" and "Buchanan Deluxe 750ml   12x01.", which differ only by a "."; or "Microsoft" or "MS" instead of "Microsoft Corporation", which all refer to the same company, and all the values are valid. In those cases, grouping can help us to get accurate data and eliminate duplicates, enabling faster identification of unique values.
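As a small illustration of replacing missing values with the mean (a sketch; this helper function is not from the original article):

def fill_missing_with_mean(values):
    """Replace None entries with the mean of the known values."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

print(fill_missing_with_mean([10, None, 14, 12, None]))
# [10, 12.0, 14, 12, 12.0]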
Text parsing

We perform parsing to help us validate whether a string of data is well formatted and to avoid syntax errors. Regular expression patterns are usually how text fields are validated this way; for example, dates, e-mails, phone numbers, and IP addresses. (Regex is a common abbreviation for "regular expression.") In Python, we will use the re module to implement regular expressions. We can perform text searches and pattern validations.

First, we need to import the re module:

import re

In the following examples, we will implement three of the most common validations (e-mail, IP address, and date format).

E-mail validation:

myString = 'From: readers@packt.com (readers email)'
result = re.search(r'([\w.-]+)@([\w.-]+)', myString)
if result:
    print(result.group(0))
    print(result.group(1))
    print(result.group(2))

Output:

>>> readers@packt.com
>>> readers
>>> packt.com

The function search() scans through a string, searching for any location where the Regex matches. The function group() helps us to return the string matched by the Regex. The pattern \w matches any alphanumeric character and is equivalent to the class [a-zA-Z0-9_].

IP address validation:

isIP = re.compile(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}')
myString = " Your IP is: 192.168.1.254 "
result = re.findall(isIP, myString)
print(result)

Output:

>>> 192.168.1.254

The function findall() finds all the substrings where the Regex matches, and returns them as a list. The pattern \d matches any decimal digit and is equivalent to the class [0-9].

Date format:

myString = "01/04/2001"
isDate = re.match(r'[0-1][0-9]/[0-3][0-9]/[1-2][0-9]{3}', myString)
if isDate:
    print("valid")
else:
    print("invalid")

Output:

>>> 'valid'

The function match() finds whether the Regex matches the string. The pattern implements the class [0-9] in order to parse the date format.

For more information about regular expressions, see: http://docs.python.org/3.4/howto/regex.html#regex-howto

Data transformation

Data transformation is usually related to databases and data warehouses, where values from a source format are extracted, transformed, and loaded in a destination format. Extract, Transform, and Load (ETL) obtains data from data sources, performs some transformation functions depending on our data model, and loads the resulting data into the destination.

Data extraction allows us to obtain data from multiple data sources, such as relational databases, data streaming, text files (JSON, CSV, XML), and NoSQL databases.
Data transformation allows us to cleanse, convert, aggregate, merge, replace, validate, format, and split data.
Data loading allows us to load data into a destination format, like relational databases, text files (JSON, CSV, XML), and NoSQL databases.

In statistics, data transformation refers to the application of a mathematical function to the dataset or time series points.

Summary

In this article, we introduced the basic concepts of data scrubbing, such as statistical methods and text parsing, and outlined data transformation.

Resources for Article:

Further resources on this subject:
MicroStrategy 10 [article]
Expanding Your Data Mining Toolbox [article]
Machine Learning Using Spark MLlib [article]
Server-Side Rendering

Packt
16 Aug 2016
5 min read
In this article by Kamil Przeorski, the author of the book Mastering Full Stack React Web Development, we introduce the Universal JavaScript (isomorphic JavaScript) features that we are going to implement in this article. To be more exact: we will develop our app in such a way that we render the app's pages both on the server and the client side. This is different from Angular 1 or Backbone single-page apps, which are mainly rendered on the client side. Our approach is more complicated in technological terms, as you need to deploy your full-stack skills to work on server-side rendering, but having this experience will make you a more desirable programmer, so you can advance your career to the next level—you will be able to charge more for your skills on the market.

(For more resources related to this topic, see here.)

When server-side rendering is worth implementing

Server-side rendering is a very useful feature for startups and companies built around textual content (like news portals), because it helps them get indexed better by different search engines. It's an essential feature for any news- and content-heavy website, because it helps grow their organic traffic. In this article, we will also run our app with server-side rendering. The second segment of companies where server-side rendering may be very useful is entertainment, where users have less patience and may close the browser if a webpage is loading slowly. In general, all B2C (consumer-facing) apps should use server-side rendering to improve the experience of the masses of people visiting their websites.

Our focus for this article will include the following:

Making a whole server-side code rearrangement to prepare for server-side rendering
Starting to use react-dom/server and its renderToString method

Are you ready? Our first step is to mock the database's response on the backend (we will create a real DB query after the whole server-side rendering works correctly on the mocked data).

Mocking the database response

First of all, we will mock our database response on the backend in order to prepare to go into server-side rendering directly:

$ [[you are in the server directory of your project]]
$ touch fetchServerSide.js

The fetchServerSide.js file will consist of all the functions that will fetch data from our database in order to make the server side work. As was mentioned earlier, we will mock it for the meanwhile with the following code in fetchServerSide.js:

export default () => {
  return {
    'article': {
      '0': {
        'articleTitle': 'SERVER-SIDE Lorem ipsum - article one',
        'articleContent': 'SERVER-SIDE Here goes the content of the article'
      },
      '1': {
        'articleTitle': 'SERVER-SIDE Lorem ipsum - article two',
        'articleContent': 'SERVER-SIDE Sky is the limit, the content goes here.'
      }
    }
  }
}

The goal of making this mocked object, once again, is that we will be able to see whether our server-side rendering works correctly after implementation, because, as you have probably already spotted, we have added SERVER-SIDE at the beginning of each title and content—so it will help us confirm that our app is getting the data from server-side rendering. Later, this function will be replaced with a query to MongoDB.

The next thing that will help us implement server-side rendering is to make a handleServerSideRender function that will be triggered each time a request hits the server. In order to make handleServerSideRender trigger every time the frontend calls our backend, we need to use Express middleware via app.use.
So far, we have been using some external libraries like:

app.use(cors())
app.use(bodyParser.json({extended: false}))

Now, we will write our own small middleware function that behaves in a similar way to cors or bodyParser (the external libs that are also middlewares).

Before doing so, let's import the dependencies that are required for React's server-side rendering (server/server.js):

import React from 'react';
import {createStore} from 'redux';
import {Provider} from 'react-redux';
import {renderToStaticMarkup} from 'react-dom/server';
import ReactRouter from 'react-router';
import {RoutingContext, match} from 'react-router';
import * as hist from 'history';
import rootReducer from '../src/reducers';
import reactRoutes from '../src/routes';
import fetchServerSide from './fetchServerSide';

After adding all those imports to server/server.js, the file will look like the following:

import http from 'http';
import express from 'express';
import cors from 'cors';
import bodyParser from 'body-parser';
import falcor from 'falcor';
import falcorExpress from 'falcor-express';
import falcorRouter from 'falcor-router';
import routes from './routes.js';
import React from 'react';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import { renderToStaticMarkup } from 'react-dom/server';
import ReactRouter from 'react-router';
import { RoutingContext, match } from 'react-router';
import * as hist from 'history';
import rootReducer from '../src/reducers';
import reactRoutes from '../src/routes';
import fetchServerSide from './fetchServerSide';

It is important to import history in the way given in the example: import * as hist from 'history'. The RoutingContext and match are the way of using React-Router on the server side. The renderToStaticMarkup function is going to generate HTML markup for us on the server side.

After we have added those new imports, then under falcor's middleware setup:

// this already exists in your codebase
app.use('/model.json', falcorExpress.dataSourceRoute((req, res) => {
  return new falcorRouter(routes); // this already exists in your codebase
}));

Under the model.json file's code, please add the following:

let handleServerSideRender = (req, res) => {
  return;
};

let renderFullHtml = (html, initialState) => {
  return;
};

app.use(handleServerSideRender);

The app.use(handleServerSideRender) is fired each time the server side receives a request from a client's application. Then we have prepared the empty functions that we will use:

handleServerSideRender: It will use renderToString in order to create valid server-side HTML markup
renderFullHtml: This helper function will embed our new React HTML markup into a whole HTML document, as you will see in a moment
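The excerpt ends before these functions are filled in, but to give a sense of where this is heading, here is a rough sketch of what handleServerSideRender might eventually look like, based only on the imports above (match, RoutingContext, renderToStaticMarkup). This is an assumption for illustration, not the book's final code, and it presumes the server file is transpiled so that JSX is allowed:

let handleServerSideRender = (req, res, next) => {
  // Match the requested URL against our React Router routes
  match({ routes: reactRoutes, location: req.url }, (error, redirectLocation, renderProps) => {
    if (error || !renderProps) {
      return next(); // fall through to other middleware on errors or misses
    }
    // Seed the Redux store with the (mocked) server-side data
    const store = createStore(rootReducer, fetchServerSide());
    const html = renderToStaticMarkup(
      <Provider store={store}>
        <RoutingContext {...renderProps} />
      </Provider>
    );
    res.send(renderFullHtml(html, store.getState()));
  });
};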
Summary

We have done the basic server-side rendering groundwork in this article.

Resources for Article:

Further resources on this subject:
Basic Website using Node.js and MySQL database [article]
How to integrate social media with your WordPress website [article]
Laravel 5.0 Essentials [article]
Introducing vSphere vMotion

Packt
16 Aug 2016
5 min read
In this article by Abhilash G B and Rebecca Fitzhugh, authors of the book Learning VMware vSphere, we are mostly going to be talking about how vSphere vMotion, a VMware technology, is used to migrate a running virtual machine from one host to another without altering its power state. The beauty of the whole process is that it is transparent to the applications running inside the virtual machine. In this section, we will understand the inner workings of vMotion and learn how to configure it. There are different types of vMotion, such as:

Compute vMotion
Storage vMotion
Unified vMotion
Enhanced vMotion (X-vMotion)
Cross vSwitch vMotion
Cross vCenter vMotion
Long Distance vMotion

(For more resources related to this topic, see here.)

Compute vMotion is the default vMotion method and is employed by other features such as DRS, FT, and Maintenance Mode. When you initiate a vMotion, it starts an iterative copy of all memory pages. After the first pass, all the dirtied memory pages are copied again in another pass, and this is done iteratively until the number of pages left to be copied is small enough to be transferred while switching over the state of the VM to the destination host. During the switchover, the virtual machine's device state is transferred and resumed at the destination host. You can initiate up to 8 simultaneous vMotion operations on a single host.

Storage vMotion is used to migrate the files backing a virtual machine (virtual disks, configuration files, logs) from one datastore to another while the virtual machine is still running. When you initiate a Storage vMotion, it starts a sequential copy of the source disk in 64 MB chunks. While a region is being copied, all the writes issued to that region are deferred until the region is copied. An already-copied source region is monitored for further writes. If there is a write I/O, then it will be mirrored to the destination disk as well. This process of mirroring writes to the destination virtual disk continues until the sequential copy of the entire source virtual disk is complete. Once the sequential copy is complete, all subsequent READs/WRITEs are issued to the destination virtual disk. Keep in mind, though, that while the sequential copy is still in progress, all the READs are issued to the source virtual disk. Storage vMotion is used by Storage DRS. You can initiate up to 2 simultaneous Storage vMotion operations on a single host.

Unified vMotion is used to migrate both the running state of a virtual machine and the files backing it from one host and datastore to another. Unified vMotion uses a combination of both Compute and Storage vMotion to achieve the migration. First, the configuration files and the virtual disks are migrated, and only then will the migration of the live state of the virtual machine begin. You can initiate up to 2 simultaneous Unified vMotion operations on a single host.

Enhanced vMotion (X-vMotion) is used to migrate virtual machines between hosts that do not share storage. Both the virtual machine's running state and the files backing it are transferred over the network to the destination. The migration procedure is the same as for Compute and Storage vMotion. In fact, Enhanced vMotion uses Unified vMotion to achieve the migration. Since the memory and disk states are transferred over the vMotion network, ESXi hosts maintain a transmit buffer at the source and a receive buffer at the destination.
The transmit buffer collects and places data onto the network, while the receive buffer collects data received via the network and flushes it to the storage. You can initiate up to 2 simultaneous X-vMotion operations on a single host.

Cross vSwitch vMotion allows you to choose a destination port group for the virtual machine. It is important to note that unless the destination port group supports the same L2 network, the virtual machine will not be able to communicate over the network. Cross vSwitch vMotion allows changing from a Standard vSwitch to a VDS, but not from a VDS to a Standard vSwitch. vSwitch to vSwitch and VDS to VDS are supported.

Cross vCenter vMotion allows migrating virtual machines beyond a vCenter's boundary. This is a new enhancement with vSphere 6.0. However, for this to be possible, both vCenter Servers should be in the same SSO domain and should be in Enhanced Linked Mode. The infrastructure requirements for Cross vCenter vMotion are detailed in VMware Knowledge Base article 2106952 at the following link: http://kb.vmware.com/kb/2106952.

Long Distance vMotion allows migrating virtual machines over distances with a latency not exceeding 150 milliseconds. Prior to vSphere 6.0, the maximum supported network latency for vMotion was 10 milliseconds.

Using the provisioning interface

You can configure a provisioning interface to send all the non-active data of the virtual machine being migrated. Prior to vSphere 6.0, vMotion used the vmkernel interface that has the default gateway configured on it (which in most cases is the management interface, vmk0) to transfer non-performance-impacting vMotion data. Non-performance-impacting vMotion data includes the virtual machine's home directory, older deltas in the snapshot chain, base disks, and so on. Only the live data will hit the vMotion interface.

The provisioning interface is nothing but a vmkernel interface with provisioning traffic enabled on it. The procedure to do this is very similar to how you would configure a vmkernel interface for management or vMotion traffic. You will have to edit the settings of the intended vmk interface and set Provisioning traffic as the enabled service:

It is important to keep in mind that the provisioning interface is not just meant for vMotion data; if enabled, it will also be used for cold migrations, cloning operations, and virtual machine snapshots. The provisioning interface can be configured to use a different gateway than the vmkernel's default gateway.

Further resources on this subject:
Cloning and Snapshots in VMware Workstation [article]
Essentials of VMware vSphere [article]
Upgrading VMware Virtual Infrastructure Setups [article]
Optimizing Games for Android

Packt
16 Aug 2016
13 min read
In this article by Avisekhar Roy, the author of The Android Game Developer's Handbook, we will focus on the need for optimization and the types of optimization with respect to games for the Android OS. We will also look at some common game development mistakes in this article.

(For more resources related to this topic, see here.)

The rendering pipeline in Android

Let's now have a look at the types of rendering pipelines in Android.

The 2D rendering pipeline

In the case of the 2D Android drawing system through Canvas, all the assets are first drawn on the canvas, and the canvas is rendered on screen. The graphics engine maps all the assets within the finite Canvas according to their given positions. Many times, developers use small assets separately, which causes a mapping instruction to execute for each asset. It is always recommended that you use sprite sheets to merge as many small assets as possible; a single draw call can then be applied to draw every object on the Canvas (see the sketch after this section).

Now, the question is how to create the sprite, and what are the other consequences. Previously, Android could not support images or sprites of a size greater than 1024x1024 pixels. Since Android 2.3, developers can use sprites of size 4096x4096. However, using such large sprites can cause permanent memory occupancy for the lifetime of all the small assets, and many low-configuration Android devices do not support loading such large images during an application. It is a best practice that developers limit themselves to 2048x2048 pixels. This will reduce the memory peak, as well as a significant number of draw calls to the canvas.
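To make the sprite-sheet idea concrete, here is a small sketch (not from the book) of drawing one frame from a horizontal sprite sheet with the Canvas API, so that many small assets share a single bitmap and each on-screen object costs only one draw call:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;

public final class SpriteSheetDrawer {
    /** Draws frame `index` of a horizontal sprite sheet at (x, y) on the canvas. */
    public static void drawFrame(Canvas canvas, Bitmap sheet, int index,
                                 int frameW, int frameH, int x, int y, Paint paint) {
        // Source rectangle: the frame's region inside the sheet
        Rect src = new Rect(index * frameW, 0, (index + 1) * frameW, frameH);
        // Destination rectangle: where the frame lands on screen
        Rect dst = new Rect(x, y, x + frameW, y + frameH);
        canvas.drawBitmap(sheet, src, dst, paint);
    }
}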
The 3D rendering pipeline

Android uses OpenGL to render assets on the screen. So, the rendering pipeline for Android 3D is basically the OpenGL pipeline. Let's have a look at the OpenGL rendering system:

Now, let's have a detailed look at each step of the preceding rendering flow diagram:

The vertex shader processes individual vertices with vertex data.
The control shader is responsible for controlling vertex data and patches for the tessellation.
The polygon arrangement system arranges the polygon with each pair of intersecting lines created by vertices. Thus, it creates the edges without repeating vertices.
Tessellation is the process of tiling the polygons in a shape without overlaps or gaps.
The geometry shader is responsible for optimizing the primitive shape. Thus, triangles are generated.
After constructing the polygons and shapes, the model is clipped for optimization.
Vertex post-processing is used to filter out unnecessary data.
The mesh is then rasterized.
The fragment shader is used to process the fragments generated from rasterization.
All the pixels are mapped after fragmentation and processed with the processed data.
The mesh is added to the frame buffer for final rendering.

Optimizing 2D assets

No digital game can be made without 2D art assets; there must be 2D assets in some form inside the game. So, as far as game component optimization is concerned, every 2D asset should also be optimized. Optimization of 2D assets means three main things.

Size optimization

Each asset frame should contain only the effective pixels to be used in the game. Unnecessary pixels increase the asset size and memory use during runtime.

Data optimization

Not all images require full data information for every pixel. A significant amount of data might be stored in each pixel, depending on the image format. For example, fullscreen opaque images should never contain transparency data. Similarly, depending on the color set, images should be formatted in 8-bit, 16-bit, or 24-bit format. Image optimization tools can be used to perform such optimizations.

Process optimization

The larger the amount of data compressed during optimization, the more time it takes to decompress it and load it into memory. So, image optimization has a direct effect on the processing speed. From another point of view, creating an image atlas or sprite sheet is another way to reduce the processing time of images.

Optimizing 3D assets

A 3D art asset has two parts to be optimized. The 2D texture part is to be optimized in the same 2D optimization style; the only thing the developer needs to ensure is that, after optimization, the shader has the same effect on the texture. The rest of the 3D asset optimization depends entirely on the number of vertices and the model's polygons.

Limiting polygon count

It is very obvious that a large number of polygons used to create the mesh can create more detail. However, we all know that Android is a mobile OS, and it always has hardware limitations. The developer should count the number of polygons used in the mesh and the total number of polygons rendered on the screen in a single draw cycle. There is always a limitation depending on the hardware configuration. So, limiting polygon and vertex count per mesh is always an advantage for achieving a certain frame rate or performance.

Model optimization

Models are created with more than one mesh. Using separate meshes in the final model always results in heavy processing. This is a major effort for the game artist. Multiple overlaps can occur if multiple meshes are used, which increases vertex processing. Rigging is another essential part of finalizing the model. A good rigger defines the skeleton with the minimum possible joints for minimum processing.

Common game development mistakes

It is not always possible to look into each and every performance aspect at every development stage. It is a very common practice to use assets and write code in a temporary mode and then use them in the final game. This affects the overall performance and future maintenance procedures. Here are a few of the most common mistakes made during game development.

Use of non-optimized images

An artist creates art assets, and the developer directly integrates them into the game for the debug build. However, most of the time, those assets are never optimized, even for the release candidate. This is the reason there may be plenty of high-bit images where the asset contains limited information, and alpha information may be found in opaque images.

Use of full utility third-party libraries

The modern-day development style does not require each and every development module to be written from scratch. Most developers use predefined third-party libraries for common utility mechanisms. Most of the time, these packages come with most of the possible methods, and among them, very few are actually used in games. Developers, most of the time, use these packages without any filtration, so a lot of unused data occupies memory during runtime. Many times, a third-party library comes without any editing facility. In such cases, the developer should choose packages very carefully, depending on specific requirements.

Use of unmanaged networking connections

In modern Android games, use of Internet connectivity is very common. Many games use server-based gameplay.
In such cases, the entire game runs on the server, with frequent data transfers between the server and the client device. Each data transfer process takes time, and the connectivity drains battery charge significantly. Badly managed networking states often freeze the application. Especially for real-time multiplayer games, a significant amount of data is handled; in this case, a request and response queue should be created and managed properly. However, developers often skip this part to save development time. Another aspect of unmanaged connections is unnecessary packet data transferred between the server and the client, which adds an extra parsing process each time data is transferred.

Using substandard programming

We have already discussed programming styles and standards. The modular programming approach may add a few extra processes, but long-term maintenance of the program demands modular programming. Otherwise, developers end up repeating code, and this increases process overhead. Memory management also demands a good programming style. In some cases, the developer allocates memory but forgets to free it. This causes a lot of memory leakage, and at times the application crashes due to insufficient memory. Substandard programming includes the following mistakes:

Declaring the same variables multiple times
Creating many static instances
Writing non-modular code
Improper singleton class creation
Loading objects at runtime

Taking the shortcut

This is the funniest fact among ill-practiced development styles. Taking a shortcut during development is very common among game developers. Making games is mostly about logical development, and there may be multiple ways of solving a logical problem. Very often, the developer chooses the most convenient way to solve such problems. For example, developers often use the bubble sort method for most sorting requirements, despite knowing that it is one of the least efficient sorting algorithms. Using such shortcuts multiple times in a game may cause a visible processing delay, which directly affects the frame rate.

2D/3D performance comparison

Android game development in 2D and 3D is different. It is a fact that 3D game processing is heavier than that of 2D games. However, game scale is always the deciding factor.

Different look and feel

The 3D look and feel is very different from 2D. The use of a particle system in 3D games is very common for providing visual effects. In the case of 2D games, sprite animation and other transformations are used to show such effects. Another difference between the 2D and 3D look and feel is dynamic light and shadow. Dynamic light is always a factor for greater visual quality. Nowadays, most 3D games use dynamic lighting, which has a significant effect on game performance. In the case of 2D games, light management is done through the assets, so there is no extra processing in 2D games for light and shadow.

In 2D games, the game screen is rendered on a Canvas. There is only one fixed point of view, so the concept of a camera is limited to a fixed camera. However, in 3D games, it is a different case. Multiple types of cameras can be implemented, and multiple cameras can be used together for a better feel of the game. Rendering objects through multiple cameras causes more processing overhead and hence decreases the frame rate of the game.

There is also a significant performance difference between using 2D physics and 3D physics: a 3D physics engine is far more process-heavy than a 2D physics engine.
3D processing is way heavier than 2D processing

It is a common practice in the gaming industry to accept a lower FPS in 3D games in comparison to 2D games. In Android, the standard accepted FPS for 2D games is around 60 FPS, whereas a 3D game is acceptable even if it runs as low as 40 FPS. The logical reason behind this is that 3D games are far heavier than 2D games in terms of processing. The main reasons are as follows:

Vertex processing: In 3D games, each vertex is processed on the OpenGL layer during rendering. So, increasing the number of vertices leads to heavier processing.
Mesh rendering: A mesh consists of multiple vertices and many polygons. Processing a mesh increases the rendering overhead as well.
3D collision system: A 3D dynamic collision-detection system demands that each vertex of the collider be calculated for collision. This calculation is usually done by the GPU.
3D physics implementation: 3D transformation calculation depends completely on matrix manipulation, which is always heavy.
Multiple camera use: The use of multiple cameras, and dynamically setting up the rendering pipeline, takes more memory and clock cycles.

Device configuration

Android supports a wide range of device configurations. Running the same game on different configurations does not produce the same result. Performance depends on the following factors.

Processor

There are many processors used in Android devices, differing in the number of cores and the speed of each core. Speed decides the number of instructions that can be executed in a single cycle. There was a time when Android devices had a single-core CPU with a speed of less than 500 MHz. Now, we have multicore CPUs with more than 2 GHz speed on each core.

RAM

The availability of RAM is another factor in deciding performance. Heavy games require a greater amount of RAM during runtime. If RAM is limited, then frequent loading/unloading processes affect performance.

GPU

The GPU decides the rendering speed. It acts as the processing unit for graphical objects. A more powerful GPU can process more rendering instructions, resulting in better performance.

Display quality

Display quality is actually inversely proportional to performance. A better display has to be backed by a better GPU, CPU, and RAM, because better displays always consist of bigger resolutions with better dpi and more color support. We can see various devices with different display qualities. Android itself has divided assets by this feature:

LDPI: Lowest-dpi display for Android (~120 dpi)
MDPI: Medium-dpi display for Android (~160 dpi)
HDPI: High-dpi display for Android (~240 dpi)
XHDPI: Extra-high-dpi display for Android (~320 dpi)
XXHDPI: Extra-extra-high-dpi display for Android (~480 dpi)
XXXHDPI: Extra-extra-extra-high-dpi display for Android (~640 dpi)

It can be easily predicted that this list will include more options in the near future with the advancement of hardware technology.

Battery capacity

This is an odd factor in the performance of an application. A more powerful CPU, GPU, and RAM demand more power. If the battery is incapable of delivering that power, then the processing units cannot run at their peak efficiency.
To summarize these factors, we can easily make a few relational equations with performance:

CPU power is directly proportional to performance
GPU power is directly proportional to performance
RAM is directly proportional to performance
Display quality is inversely proportional to performance
Battery capacity is directly proportional to performance

Summary

There are many technical differences between 2D and 3D games in terms of rendering, processing, and assets. The developer should always use an optimized approach to create assets and write code. One more way of gaining performance is to port games for different hardware systems, for both 2D and 3D games. We have seen revolutionary upgrades in hardware platforms over the last decade, and accordingly, the nature of games has also changed. However, the scope for 2D games is still there, with a large set of possibilities. Gaining performance is more of a logical task than a technical one. There are a few tools available to do the job, but it is the developer's decision which to choose. So, selecting the right tool for the right purpose is necessary, and there should be a different approach for making 2D and 3D games.

Resources for Article:

Further resources on this subject:
Drawing and Drawables in Android Canvas [article]
Hacking Android Apps Using the Xposed Framework [article]
Getting started with Android Development [article]
Creating Your First Plug-in

Packt
16 Aug 2016
31 min read
Eclipse – an IDE for everything and nothing in particular. Eclipse is a highly modular application consisting of hundreds of plug-ins, and it can be extended by installing additional plug-ins. Plug-ins are developed and debugged with the Plug-in Development Environment (PDE). In this article by Dr Alex Blewitt, author of the book Eclipse Plug-in Development: Beginner's Guide - Second Edition, we cover how to:

Set up an Eclipse environment for doing plug-in development
Create a plug-in with the new plug-in wizard
Launch a new Eclipse instance with the plug-in enabled
Debug the Eclipse plug-in

(For more resources related to this topic, see here.)

Getting started

Developing plug-ins requires an Eclipse development environment. This material has been developed and tested on Eclipse Mars 4.5 and Eclipse Neon 4.6, which was released in June 2016. Use the most recent version available.

Eclipse plug-ins are generally written in Java, although it's possible to use other JVM-based languages (such as Groovy or Scala). There are several different packages of Eclipse available from the downloads page, each of which contains a different combination of plug-ins. This has been tested with:

Eclipse SDK from http://download.eclipse.org/eclipse/downloads/
Eclipse IDE for Eclipse Committers from http://www.eclipse.org/downloads/

These contain the necessary Plug-in Development Environment (PDE) feature as well as source code, help documentation, and other useful features.

It is also possible to install the Eclipse PDE feature in an existing Eclipse instance. To do this, go to the Help menu and select Install New Software, followed by choosing the General Purpose Tools category from the selected update site. The Eclipse PDE feature contains everything needed to create a new plug-in.

Time for action – setting up the Eclipse environment

Eclipse is a Java-based application, so it needs Java installed. Eclipse is distributed as a compressed archive and doesn't require an explicit installation step:

To obtain Java, go to http://java.com and follow the instructions to download and install Java. Note that Java comes in two flavors: a 32-bit installation and a 64-bit installation. If the running OS is 32-bit, then install the 32-bit JDK; alternatively, if the running OS is 64-bit, then install the 64-bit JDK. Running java -version should give output like this:

java version "1.8.0_92"
Java(TM) SE Runtime Environment (build 1.8.0_92-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.92-b14, mixed mode)

Go to http://www.eclipse.org/downloads/ and select the Eclipse IDE for Eclipse Committers distribution. Download the one that matches the installed JDK. Running java -version should report either of these:

If it's a 32-bit JDK: Java HotSpot(TM) Client VM
If it's a 64-bit JDK: Java HotSpot(TM) 64-Bit Server VM

On Linux, Eclipse requires GTK+ 2 or 3 to be installed. Most Linux distributions have a window manager based on GNOME, which provides GTK+ 2 or 3.

To install Eclipse, download and extract the contents to a suitable location. Eclipse is shipped as an archive and needs no administrator privileges to install. Do not run it from a networked drive, as this will cause performance problems. Note that Eclipse needs to write to the folder where it is extracted, so it's normal for the contents to be writable afterwards. Generally, installing in /Applications or C:\Program Files as an administrator account is not recommended.

Run Eclipse by double-clicking on the Eclipse icon, or by running eclipse.exe (Windows), eclipse (Linux), or Eclipse.app (macOS).
On startup, the splash screen will be shown:

Choose a workspace, which is the location in which projects are to be stored, and click on OK:

Close the welcome screen by clicking on the cross in the tab next to the welcome text. The welcome screen can be reopened by navigating to Help | Welcome.

What just happened?

Eclipse needs Java to run, so the first step involved in installing Eclipse is ensuring that an up-to-date Java installation is available. By default, Eclipse will find a copy of Java installed on the path or from one of the standard locations. It is also possible to specify a different Java by using the -vm command-line argument.

If the splash screen doesn't show, then the Eclipse version may be incompatible with the JDK (for example, a 64-bit JDK with a 32-bit Eclipse, or vice versa). Common error messages shown at the launcher may include Unable to find companion launcher or a cryptic message about being unable to find an SWT library. On Windows, there is an additional eclipsec.exe launcher that allows log messages to be displayed on the console. This is sometimes useful if Eclipse fails to load and no other message is displayed. Other operating systems can use the eclipse command, and both support the -consolelog argument, which can display more diagnostic information about problems with launching Eclipse.

The Eclipse workspace is a directory used for two purposes: as the default project location, and to hold the .metadata directory containing Eclipse settings, preferences, and other runtime information. The Eclipse runtime log is stored in the .metadata/.log file.

The workspace chooser dialog has an option to set a default workspace. It can be changed within Eclipse by navigating to File | Switch Workspace. It can also be overridden by specifying a different workspace location with the -data command-line argument.

Finally, the welcome screen is useful for first-time users, but it is worth closing (rather than minimizing) once Eclipse has started.

Creating your first plug-in

In this task, Eclipse's plug-in wizard will be used to create a plug-in.

Time for action – creating a plug-in

In PDE, every plug-in has its own individual project. A plug-in project is typically created with the new project wizard, although it is also possible to upgrade an existing Java project to a plug-in project by adding the PDE nature and the required files via Configure | Convert to plug-in project.

To create a Hello World plug-in, navigate to File | New | Project…. The project types shown may differ from this list, but should include Plug-in Project with Eclipse IDE for Eclipse Committers or the Eclipse SDK. If nothing is shown when you navigate to File | New, then navigate to Window | Open Perspective | Other | Plug-in Development first; the entries should then be seen under the New menu.

Choose Plug-in Project and click on Next. Fill in the dialog as follows:

Project name should be com.packtpub.e4.hello.ui
Ensure that Use default location is selected
Ensure that Create a Java project is selected
The Eclipse version should be targeted to 3.5 or greater

Click on Next again, and fill in the plug-in properties:

ID is set to com.packtpub.e4.hello.ui
Version is set to 1.0.0.qualifier
Name is set to Hello
Vendor is set to PacktPub
For Execution Environment, use the default (for example, JavaSE-1.8)
Ensure that Generate an Activator is selected
Set Activator to com.packtpub.e4.hello.ui.Activator
Ensure that This plug-in will make contributions to the UI is selected
   - Rich client application should be No.
4. Click on Next, and a set of templates will be provided:
   - Ensure that Create a plug-in using one of the templates is selected.
   - Choose the Hello, World Command template.
5. Click on Next to customize the sample, including:
   - Java Package Name, which defaults to the project's name followed by .handlers
   - Handler Class Name, which is the code that gets invoked for the action
   - Message Box Text, which is the message to be displayed
6. Finally, click on Finish and the project will be generated. If an Open Associated Perspective? dialog asks, click on Yes to show the Plug-in Development perspective.

What just happened?

Creating a plug-in project is the first step towards creating a plug-in for Eclipse. The new plug-in project wizard was used with one of the sample templates to create a project.

Plug-ins are typically named in reverse domain name format, so these examples will be prefixed with com.packtpub.e4. This helps to distinguish between many plug-ins; the stock Eclipse IDE for Eclipse Committers comes with more than 450 individual plug-ins, and the Eclipse-developed ones start with org.eclipse. Conventionally, plug-ins that create additions to (or require) the use of the UI have .ui. in their name. This helps to distinguish them from those that don't, which can often be used headlessly. Of the more than 450 plug-ins that make up the Eclipse IDE for Eclipse Committers, approximately 120 are UI-related and the rest are headless.

The project contains a number of files that are automatically generated based on the content filled in the wizard. The key files in an Eclipse plug-in are:

- META-INF/MANIFEST.MF: The MANIFEST.MF file, also known as the OSGi manifest, describes the plug-in's name, version, and dependencies. Double-clicking on it will open a custom editor, which shows the information entered in the wizards; it can also be opened in a standard text editor. The manifest follows standard Java conventions: line continuations are represented by a newline followed by a single space character, and the file must end with a newline.
- plugin.xml: The plugin.xml file declares what extensions the plug-in provides to the Eclipse runtime. Not all plug-ins need a plugin.xml file; headless (non-UI) plug-ins often don't need one. Extension points will be covered in more detail later, but the sample project creates an extension for the commands, handlers, bindings, and menus extension points. Text labels for the commands/actions/menus are represented declaratively in the plugin.xml file, rather than programmatically; this allows Eclipse to show the menu before needing to load or execute any code. This is one of the reasons Eclipse starts so quickly: by not needing to load or execute classes, it can scale by showing only what's needed at the time, and then load the class on demand when the user invokes the action. Java Swing's Action class, by contrast, provides labels and tooltips programmatically, which can result in slower initialization of Swing-based user interfaces.
- build.properties: The build.properties file is used by PDE at development time and at build time. Generally it can be ignored, but if resources are added that need to be made available to the plug-in (such as images, properties files, or HTML content), then an entry must be added here, as otherwise they won't be found. Generally, the easiest way to do this is via the Build tab of the build.properties editor, which gives a tree-like view of the project's contents. This file is an archaic hangover from the days of Ant builds, and is largely redundant when using more up-to-date build systems such as Maven Tycho.
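Alongside these files, the wizard also generates the handler class itself. The sketch below shows what the generated code typically looks like; the exact output varies slightly between Eclipse versions, so treat this as an approximation rather than the canonical template:

    package com.packtpub.e4.hello.ui.handlers;

    import org.eclipse.core.commands.AbstractHandler;
    import org.eclipse.core.commands.ExecutionEvent;
    import org.eclipse.core.commands.ExecutionException;
    import org.eclipse.jface.dialogs.MessageDialog;
    import org.eclipse.ui.IWorkbenchWindow;
    import org.eclipse.ui.handlers.HandlerUtil;

    /**
     * Handler invoked when the sample command is executed; shows the
     * message box configured in the template wizard.
     */
    public class SampleHandler extends AbstractHandler {
        public Object execute(ExecutionEvent event) throws ExecutionException {
            // Resolve the active workbench window from the execution event
            IWorkbenchWindow window =
                HandlerUtil.getActiveWorkbenchWindowChecked(event);
            // Pop up the dialog with the title and message from the wizard
            MessageDialog.openInformation(
                window.getShell(), "Hello", "Hello, Eclipse world");
            return null;
        }
    }

The plugin.xml file wires the command to this class through its handlers extension, which is why renaming the class also requires updating plugin.xml.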
Pop quiz – Eclipse workspaces and plug-ins

Q1. What is an Eclipse workspace?
Q2. What is the naming convention for Eclipse plug-in projects?
Q3. What are the names of the three key files in an Eclipse plug-in?

Running plug-ins

To test an Eclipse plug-in, Eclipse is used to run or debug a new Eclipse instance with the plug-in installed.

Time for action – launching Eclipse from within Eclipse

Eclipse can launch a new Eclipse application by clicking on the run icon, or via the Run menu:

1. Select the plug-in project in the workspace.
2. Click on the run icon to launch the project. The first time this happens, a dialog will be shown; subsequent launches will remember the chosen type.
3. Choose the Eclipse Application type and click on OK. A new Eclipse instance will be launched.
4. Close the Welcome page in the launched application, if shown.
5. Click on the hello world icon in the menu bar, or navigate to Sample Menu | Sample Command from the menu, and the dialog box created via the wizard will be shown.
6. Quit the target Eclipse instance by closing the window, or via the usual keyboard shortcuts or menus (Cmd + Q on macOS or Alt + F4 on Windows).

What just happened?

Upon clicking on run in the toolbar (or via Run | Run As | Eclipse Application), a launch configuration is created, which includes any plug-ins open in the workspace. A second copy of Eclipse, with its own temporary workspace, enables the plug-in to be tested and verifies that it works as expected.

The run operation is intelligent, in that it launches an application based on what is selected in the workspace. If a plug-in is selected, it will offer to run it as an Eclipse Application; if a Java project with a class with a main method is selected, it will run it as a standard Java Application; and if it has tests, it will offer to run the test launcher instead. However, the run operation can also be counter-intuitive: if clicked a second time in a different project context, something other than the expected application might be launched.

A list of the available launch configurations can be seen by going to the Run menu, or from the dropdown to the right of the run icon. The Run | Run Configurations menu shows all the available types, including any previously run.

By default, the runtime workspace is kept between runs. The launch configuration for an Eclipse application has options that can be customized; the Workspace Data section in the Main tab shows where the runtime workspace is stored, along with an option that allows the workspace to be cleared (with or without confirmation) between runs. Launch configurations can be deleted by clicking on the red delete icon on the top left, and new launch configurations can be created by clicking on the new icon. Each launch configuration has a type:

- Eclipse Application
- Java Applet
- Java Application
- JUnit
- JUnit Plug-in Test
- OSGi Framework

The launch configuration can be thought of as a pre-canned script that can launch different types of programs. Additional tabs are used to customize the launch, such as the environment variables, system properties, or command-line arguments. The type of the launch configuration specifies what parameters are required and how the launch is executed. When a program is launched with the run icon, changes to the project's source code while it is running have no effect.
However, as we'll see in the next section, if the application is launched with the debug icon, changes can take effect.

If the target Eclipse is hanging or otherwise unresponsive, the Console view in the host Eclipse instance (shown by navigating to Window | Show View | Other | General | Console) can be used to stop the target Eclipse instance.

Pop quiz – launching Eclipse

Q1. What are the two ways of terminating a launched Eclipse instance?
Q2. What are launch configurations?
Q3. How are launch configurations created and deleted?

Have a go hero – modifying the plug-in

Now that the Eclipse plug-in is running, try the following:

- Change the message of the label and title of the dialog box to something else (a sketch of one such change appears after this list)
- Invoke the action by using the keyboard shortcut (defined in plugin.xml)
- Change the tooltip of the action to a different message
- Switch the action icon to a different graphic (if a different filename is used, remember to update it in plugin.xml and build.properties)
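For the first suggestion, the change happens in the handler's execute method. The following is a minimal sketch, assuming the wizard-generated SampleHandler shown earlier; the new title and message strings are arbitrary examples:

    public Object execute(ExecutionEvent event) throws ExecutionException {
        IWorkbenchWindow window =
            HandlerUtil.getActiveWorkbenchWindowChecked(event);
        MessageDialog.openInformation(
            window.getShell(),
            "Greetings",                           // changed dialog title
            "Hello again from my first plug-in!"); // changed dialog message
        return null;
    }

The tooltip and icon changes, by contrast, live in plugin.xml rather than in Java code, since those labels are declared rather than computed.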
Debugging a plug-in

Since it's rare that everything works the first time, it's often necessary to develop iteratively, adding progressively more functionality each time. It's also sometimes necessary to find out what's going on under the covers when trying to fix a bug, particularly if it's a hard-to-track-down exception such as a NullPointerException. Fortunately, Eclipse comes with excellent debugging support, which can be used to debug both standalone Java applications and Eclipse plug-ins.

Time for action – debugging a plug-in

Debugging an Eclipse plug-in is much the same as running an Eclipse plug-in, except that breakpoints can be used, the state of the program can be inspected and updated, and minor changes to the code can be made. Rather than debugging plug-ins individually, the entire Eclipse launch configuration is started in debug mode; that way, all the plug-ins can be debugged at the same time. Although run mode is slightly faster, the added flexibility of being able to make changes makes debug mode much more attractive to use as a default.

1. Start the target Eclipse instance by navigating to Debug | Debug As | Eclipse Application, or by clicking on the debug icon in the toolbar.
2. Click on the hello world icon in the target Eclipse to display the dialog, as before, and click on OK to dismiss it.
3. In the host Eclipse, open the SampleHandler class and go to the first line of the execute method. Add a breakpoint by double-clicking in the vertical ruler (the grey/blue bar on the left of the editor), or by pressing Ctrl + Shift + B (or Cmd + Shift + B on macOS). A blue dot representing the breakpoint will appear in the ruler.
4. Click on the hello world icon in the target Eclipse to display the dialog, and the debugger will pause the thread at the breakpoint in the host Eclipse. The debugger perspective will open whenever a breakpoint is triggered, and the program will be paused. While it is paused, the target Eclipse is unresponsive: any clicks on the target Eclipse application will be ignored, and it will show a busy cursor.
5. In the top right, variables that are active in the line of code are shown. In this case, it's just the implicit variables (via this) and the parameter (in this case, event); there are no local variables yet. Click on Step Over or press F6, and window will be added to the list of available variables.
6. When ready to continue, click on resume or press F8 to keep running.

What just happened?

The built-in Eclipse debugger was used to launch Eclipse in debug mode. By triggering an action that led to a breakpoint, the debugger was revealed, allowing the local variables to be inspected. When in the debugger, there are several ways to step through the code:

- Step Over: This steps over the current line, moving line by line through the method.
- Step Into: This follows method calls recursively as execution unfolds. There is also a Run | Step Into Selection menu item; it does not have a toolbar icon. It can be invoked with Ctrl + F5 (Alt + F5 on macOS) and is used to step into a specific expression.
- Step Return: This runs to the end of the current method.
- Drop to Frame: This returns to a stack frame in the thread to re-run an operation.

Time for action – updating code in the debugger

When an Eclipse instance is launched in run mode, changes made to the source code aren't reflected in the running instance. However, debug mode allows changes made to the source to be reflected in the running target Eclipse instance:

1. Launch the target Eclipse in debug mode by clicking on the debug icon.
2. Click on the hello world icon in the target Eclipse to display the dialog, as before, and click on OK to dismiss it. It may be necessary to remove the breakpoint, or resume execution in the host Eclipse instance, to allow execution to continue.
3. In the host Eclipse, open the SampleHandler class and go to the execute method.
4. Change the title of the dialog to Hello again, Eclipse world and save the file. Provided the Build Automatically option in the Project menu is enabled, the change will be automatically recompiled.
5. Click on the hello world icon in the target Eclipse instance again. The new message should be shown.

What just happened?

By default, Eclipse ships with the Build Automatically option in the Project menu enabled. Whenever changes are made to Java files, they are recompiled, along with their dependencies if necessary.

When a Java program is launched in run mode, it will load classes on demand and then keep using that definition until the JVM shuts down. Even if the classes are changed, the JVM won't notice that they have been updated, and so no differences will be seen in the running application. However, when a Java program is launched in debug mode, whenever changes to classes are made, Eclipse will update the running JVM with the new code if possible. The limits of what can be replaced are controlled by the JVM through the Java Virtual Machine Tools Interface (JVMTI). Generally, updating the body of an existing method will work, but adding a new method or field, or changing interfaces and superclasses, may not; the HotSpot JVM cannot replace classes if methods are added or interfaces are updated. Some JVMs, such as IBM's, have additional capabilities and can substitute a wider range of code changes on demand.

Note that there are some types of changes that won't be picked up, for example, new extensions added to the plugin.xml file. In order to see these changes, it is possible to start and stop the plug-in through the command-line OSGi console, or to restart the target Eclipse, to see the change.

Debugging with step filters

When debugging using Step Into, the code will frequently go into Java internals, such as the implementation of the Java collections classes or other internal JVM classes. These don't usually add value, so fortunately Eclipse has a way of ignoring uninteresting classes.
Time for action – setting up step filtering

Step filters allow uninteresting packages and classes to be ignored during step debugging:

1. Run the target Eclipse instance in debug mode.
2. Ensure that a breakpoint is set at the start of the execute method of the SampleHandler class.
3. Click on the hello world icon, and the debugger should open at the first line, as before.
4. Click on Step Into five or six times. At each point, the code will jump to the next method in the expression, first through various methods in HandlerUtil and then into ExecutionEvent. Click on resume to continue.
5. Open Preferences, then navigate to Java | Debug | Step Filtering. Select the Use Step Filters option. Click on Add Package, enter org.eclipse.ui, and click on OK.
6. Click on the hello world icon again, and click on Step Into as before. This time, the debugger goes straight to the getApplicationContext method in the ExecutionEvent class. Click on resume to continue.
7. To make debugging more efficient by skipping accessors, go back to the Step Filtering preference page and select Filter Simple Getters.
8. Click on the hello world icon again, and click on Step Into as before. Instead of going into the getApplicationContext method, the execution will drop through to the getVariable method of the ExpressionContext class instead.

What just happened?

Step filters allow uninteresting packages to be skipped, at least from the point of view of debugging. Typically, JVM internal classes (such as those beginning with sun or sunw) are not helpful when debugging and can easily be ignored. This also avoids debugging through the ClassLoader as it loads classes on demand.

Typically, it makes sense to enable all the default packages in the Step Filters dialog, as it's pretty rare to need to debug any of the JVM libraries (internal or public interfaces). This means that when stepping through code, if a common method such as toString is called, debugging won't step through the internal implementation. It also makes sense to filter out simple setters and getters (those that just set or return a variable). If the method is more complex (like the getVariable method previously), then the debugger will still stop in it. Constructors and static initializers can also be filtered specifically.

Using different breakpoint types

Although it's possible to place a breakpoint anywhere in a method, a special breakpoint type exists that can fire on method entry, exit, or both. Breakpoints can also be customized to only fire in certain situations or when certain conditions are met.

Time for action – breaking at method entry and exit

Method breakpoints allow the user to see when a method is entered or exited:

1. Open the SampleHandler class and go to the execute method.
2. Double-click in the vertical ruler at the method signature, or select Toggle Method Breakpoint from the method in one of the Outline, Package Explorer, or Members views. The breakpoint should be shown on the line:

    public Object execute(...) throws ExecutionException {

3. Open the breakpoint properties by right-clicking on the breakpoint, or via the Breakpoints view, which is shown in the Debug perspective.
4. Set the breakpoint to trigger at method entry and method exit.
5. Click on the hello world icon again.
6. When the debugger stops at method entry, click on resume.
7. When the debugger stops at method exit, click on resume.

What just happened?
The breakpoint triggers when the method is entered and subsequently when the method's return is reached. Note that the exit is only triggered if the method returns normally; if an uncaught exception is raised, it is not treated as a normal method exit, and so the breakpoint won't fire.

Other than the breakpoint type, there's no significant difference between creating a breakpoint on method entry and creating one on the first statement of the method. Both give the ability to inspect the parameters and do further debugging before any statements in the method itself are executed. The method exit breakpoint, however, will only trigger once the return statement is about to leave the method. Thus, any expression in the method's return value will have been evaluated prior to the exit breakpoint firing. Compare and contrast this with a line breakpoint on the return statement, which stops before the return expression is evaluated. Note that Eclipse's Step Return has the same effect; it will run until the method's return statement is about to be executed. However, to find when a method returns, using a method exit breakpoint is far faster than stopping at a specific line and then doing Step Return.

Using conditional breakpoints

Breakpoints are useful since they can be invoked on every occasion when a line of code is triggered. However, they sometimes need to break for specific actions only, such as when a particular option is set, or when a value has been incorrectly initialized. Fortunately, this can be done with conditional breakpoints.

Time for action – setting a conditional breakpoint

Normally, breakpoints fire on each invocation. It is possible to configure breakpoints such that they fire only when certain conditions are met; these are known as conditional breakpoints:

1. Go to the execute method of the SampleHandler class.
2. Clear any existing breakpoints by double-clicking on them, or by using Remove All Breakpoints from the Breakpoints view.
3. Add a breakpoint to the first line of the execute method body.
4. Right-click on the breakpoint and select the Breakpoint Properties menu (it can also be shown by Ctrl + double-clicking, or Cmd + double-clicking on macOS, on the breakpoint icon itself).
5. Set Hit Count to 3, and click on OK.
6. Click on the hello world icon three times. On the third click, the debugger will open at that line of code.
7. Open the breakpoint properties, deselect Hit Count, and select the Enabled and Conditional options. Put the following line into the conditional trigger field:

    ((org.eclipse.swt.widgets.Event)event.trigger).stateMask==65536

8. Click on the hello world icon, and the breakpoint will not fire.
9. Hold down Alt and click on the hello world icon, and the debugger will open (65536 is the value of SWT.MOD3, which is the Alt key).

What just happened?

When a breakpoint is created, it is enabled by default. A breakpoint can be temporarily disabled, which has the effect of removing it from the flow of execution. Disabled breakpoints can easily be re-enabled on a per-breakpoint basis, or from the Breakpoints view. Quite often it's useful to have a set of breakpoints defined in the code base, but not necessarily have them all enabled at once.

It is also possible to temporarily disable all breakpoints using the Skip All Breakpoints setting, which can be changed from the corresponding item in the Run menu (when the Debug perspective is shown) or the corresponding icon in the Breakpoints view. When this is enabled, no breakpoints will be fired.

Conditional breakpoints must return a value. If the breakpoint is set to break when the condition is true, the condition must be a Boolean expression. If the breakpoint is set to stop whenever the value changes, then it can be any Java expression. Multiple statements can be used, provided that there is a return keyword with a value expression.
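As a concrete illustration of a multi-statement condition, the snippet below logs the modifier state each time the line is hit and only suspends when Alt is held. This is a sketch in the spirit of the condition used above, not something the wizard generates; like the original condition, it is typed into the conditional trigger field, where event is in scope:

    // Multi-statement breakpoint condition: must end in a return with a value.
    int mask = ((org.eclipse.swt.widgets.Event) event.trigger).stateMask;
    System.out.println("stateMask = " + mask); // appears in the host Console view
    return mask == 65536; // 65536 == SWT.MOD3, the Alt key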
Using exception breakpoints

Sometimes when debugging a program, an exception occurs. Typically, this isn't known about until it happens, when an exception message is printed or displayed to the user via some kind of dialog box.

Time for action – catching exceptions

Although it's easy to put a breakpoint in a catch block, this is merely the location where the failure was ultimately caught, not where it was caused. The place where it was caught can often be in a completely different plug-in from where it was raised and, depending on the amount of information encoded within the exception (particularly if it has been wrapped in a different exception type), may hide the original source of the problem. Fortunately, Eclipse can handle such cases with a Java Exception Breakpoint:

1. Introduce a bug into the execute method of the SampleHandler class by adding the following just before the MessageDialog.openInformation() call:

    window = null;

2. Click on the hello world icon. Nothing will appear to happen in the target Eclipse, but in the Console view of the host Eclipse instance, the error message should be seen:

    Caused by: java.lang.NullPointerException
      at com.packtpub.e4.hello.ui.handlers.SampleHandler.execute
      at org.eclipse.ui.internal.handlers.HandlerProxy.execute
      at org.eclipse.ui.internal.handlers.E4HandlerProxy.execute

3. Create a Java Exception Breakpoint in the Breakpoints view of the Debug perspective. The Add Java Exception Breakpoint dialog will be shown.
4. Enter NullPointerException in the search dialog, and click on OK.
5. Click on the hello world icon, and the debugger will stop at the line where the exception is thrown, instead of where it is caught.

What just happened?

The Java Exception Breakpoint stops when an exception is thrown, not when it is caught. The dialog asks for a single exception class to catch, and by default, the wizard is pre-filled with any class whose name includes *Exception*. However, any name (or filter) can be typed into the search box, including abbreviations such as FNFE for FileNotFoundException. Wildcard patterns can also be used, which allows searching for Nu*Ex or *Unknown*.

By default, the exception breakpoint matches instances of that specific class. This is useful (and quick) for exceptions such as NullPointerException, but not so useful for ones with an extensive class hierarchy, such as IOException. In this case, there is a checkbox visible in the breakpoint properties and at the bottom of the Breakpoints view that allows the capture of all Subclasses of this exception, not just the specific class.

There are also two other checkboxes that say whether the debugger should stop when the exception is Caught or Uncaught. Both of these are selected by default; if both are deselected, then the breakpoint effectively becomes disabled. Caught means that the exception is thrown within a corresponding try/catch block, and Uncaught means that the exception is thrown without a try/catch block (it bubbles up to the method's caller).
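To see why the debugger stops inside execute, it helps to picture the sabotaged method as a whole. This sketch assumes the wizard-generated handler from earlier; the window = null line is the deliberately introduced bug, and window.getShell() is the dereference that throws:

    public Object execute(ExecutionEvent event) throws ExecutionException {
        IWorkbenchWindow window =
            HandlerUtil.getActiveWorkbenchWindowChecked(event);
        window = null; // deliberately introduced bug
        // window.getShell() dereferences null here, raising the
        // NullPointerException that the exception breakpoint traps
        MessageDialog.openInformation(
            window.getShell(), "Hello", "Hello, Eclipse world");
        return null;
    }

Remember to remove the bug again before continuing with the next section.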
Time for action – inspecting and watching variables

Finally, it's worth seeing what the Variables view can do:

1. Create a breakpoint at the start of the execute method.
2. Click on the hello world icon again.
3. Highlight the openInformation call and navigate to Run | Step Into Selection.
4. Select the title variable in the Variables view.
5. Modify the value where it says Hello in the bottom half of the Variables view, and change it to Goodbye.
6. Save the value with Ctrl + S (or Cmd + S on macOS).
7. Click on resume, and the newly updated title can be seen in the dialog.
8. Click on the hello world icon again. With the debugger stopped in the execute method, highlight the event variable in the Variables view.
9. Right-click on the value and choose Inspect (Ctrl + Shift + I, or Cmd + Shift + I on macOS), and the value is opened in the Expressions view.
10. Click on Add new expression at the bottom of the Expressions view. Add new java.util.Date(), and the right-hand side will show the current time.
11. Right-click on the new java.util.Date() expression and choose Re-evaluate Watch Expression. The right-hand-side pane shows the new value.
12. Step through the code line by line, and notice that the watch expression is re-evaluated after each step.
13. Disable the watch expression by right-clicking on it and choosing Disable.
14. Step through the code line by line, and the watch expression will not be updated.

What just happened?

The Eclipse debugger has many powerful features, and the ability to inspect (and change) the state of the program is one of the more important ones. Watch expressions, when combined with conditional breakpoints, can be used to find out when data becomes corrupted, or to show the state of a particular object's value. Expressions can also be evaluated based on objects in the Variables view, and code completion is available to select methods, with the result being shown with Display.

Pop quiz – debugging

Q1. How can an Eclipse plug-in be launched in debug mode?
Q2. How can certain packages be avoided when debugging?
Q3. What are the different types of breakpoints that can be set?
Q4. How can a loop that only exhibits a bug after 256 iterations be debugged?
Q5. How can a breakpoint be set on a method when its argument is null?
Q6. What does inspecting an object do?
Q7. How can the value of an expression be calculated?
Q8. How can multiple statements be executed in breakpoint conditions?

Have a go hero – working with breakpoints

Using a conditional breakpoint to stop at a certain method is fine if the data is simple, but sometimes more than one expression is needed. Although it is possible to use multiple statements in the breakpoint condition definition, the code is not very reusable. To implement additional reusable functionality, the breakpoint can be delegated to a breakpoint utility class (a sketch of one possible final version appears after this list):

1. Create a Utility class in the com.packtpub.e4.hello.ui.handlers package with a static breakpoint method that returns true if the breakpoint should stop, and false otherwise:

    public class Utility {
      public static boolean breakpoint() {
        System.out.println("Breakpoint");
        return false;
      }
    }

2. Create a conditional breakpoint in the execute method that calls Utility.breakpoint().
3. Click on the hello world icon again, and the message will be printed to the host Eclipse's Console view. The breakpoint will not stop.
4. Modify the breakpoint method to return true instead of false. Run the action again. The debugger will stop.
5. Modify the breakpoint method to take the message as an argument, along with a Boolean value that is returned to say whether the breakpoint should stop.
6. Set up a conditional breakpoint with the expression:

    Utility.breakpoint(
      ((org.eclipse.swt.widgets.Event)event.trigger).stateMask != 0,
      "Breakpoint")

7. Modify the breakpoint method to take a variable Object array, and use that in conjunction with the message in String.format() for the resulting message:

    Utility.breakpoint(
      ((org.eclipse.swt.widgets.Event)event.trigger).stateMask != 0,
      "Breakpoint %s %h",
      event,
      java.time.Instant.now())
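The finished utility might end up looking something like the following; this is one possible interpretation of the last two steps, not the book's definitive answer:

    package com.packtpub.e4.hello.ui.handlers;

    // Reusable breakpoint helper: logs a formatted message and tells the
    // conditional breakpoint whether to suspend.
    public class Utility {
      public static boolean breakpoint(boolean stop, String message,
          Object... args) {
        System.out.println(String.format(message, args));
        return stop; // the conditional breakpoint suspends when this is true
      }
    }

With this in place, the expression in the last step logs the event and a timestamp on every hit, but only suspends the debugger when a modifier key is pressed.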
Summary

In this article, we covered how to get started with Eclipse plug-in development, from downloading the right Eclipse package to generating a plug-in with a wizard. Specifically, we learned that:

- The Eclipse SDK and the Eclipse IDE for Eclipse Committers have the necessary plug-in development environment to get you started
- The plug-in creation wizard can be used to create a plug-in project, optionally using one of the example templates
- Testing an Eclipse plug-in launches a second copy of Eclipse with the plug-in installed and available for use
- Launching Eclipse in debug mode allows you to update code and stop execution at breakpoints defined via the editor

Now that we've learned how to get started with Eclipse plug-ins, we're ready to look at creating plug-ins that contribute to the IDE, starting with SWT and Views.

Resources for Article:

Further resources on this subject:
- Apache Maven and m2eclipse [article]
- Installing and Setting up JavaFX for NetBeans and Eclipse IDE [article]
- JBoss AS plug-in and the Eclipse Web Tools Platform [article]