
Mobile Phone Forensics: A First Step into Android Forensics

Packt
15 Oct 2015
7 min read
In this article by Michael Spreitzenbarth and Johann Uhrmann, authors of the book Mastering Python Forensics, we will see that even though the forensic analysis of standard computer hardware, such as hard disks, has developed into a stable discipline with a lot of reference work, the techniques used to analyze non-standard hardware or transient evidence are still debatable. Despite their increasing role in digital investigations, smartphones are still considered non-standard because of their heterogeneity.

In all investigations, it is necessary to follow the basic forensic principles. The two main principles are as follows:

- The greatest care must be taken to ensure that the evidence is manipulated or changed as little as possible.
- The course of a digital investigation must be understandable and open to scrutiny. At best, the results of the investigation must be reproducible by independent investigators.

The first principle is a challenge in the smartphone setting, as most smartphones employ specific operating systems and hardware protection methods that prevent unrestricted access to the data on the system.

The preservation of data from hard disks is, in most cases, a simple and well-known procedure. An investigator removes the hard disk from the computer or notebook, connects it to his workstation with the help of a write blocker (for example, Tableau's TK35), and starts analyzing it with well-known and certified software solutions. Compared to this, it becomes clear that there is no such common procedure in the smartphone world. Nearly every smartphone builds its storage in its own way, so investigators need a separate way to get the storage dump of each smartphone.

While it is more difficult to get the data from a smartphone, the diversity of that data means one can get much more of it. Besides the usual data (for example, pictures and documents), smartphones store data such as GPS coordinates or the position of the mobile cell that the smartphone was connected to before it was switched off. Considering the resulting opportunities, it turns out that this is worth the extra expense for an investigator.

In the following sections, we will show you how to start such an analysis on an Android phone.

Circumventing the screen lock

A good starting point for the analysis is circumventing the screen lock. As you may know, it is very easy to get around the pattern lock on an Android smartphone. In this brief article, we will describe the following two ways to get around it:

- On a rooted smartphone
- With the help of the JTAG interface

Some background information

The pattern lock is entered by the user by joining points on a 3×3 matrix in their chosen order. Since Android 2.3.3, this pattern involves a minimum of 4 points (on older Android versions, the minimum was 3 points), and each point can only be used once. The points of the matrix are registered in numbered order, starting from 0 in the upper left corner and ending with 8 in the bottom right corner. So the pattern of the lock screen in the following figure would be 0 – 3 – 6 – 7 – 8.

Android stores this pattern in a special file called gesture.key in the /data/system/ directory. As storing the pattern in plain text is not safe, Android only stores an unsalted SHA-1 hash of this pattern (refer to the following code snippet).
Accordingly, our pattern is stored as the following hash:

c8c0b24a15dc8bbfd411427973574695230458f0

The code snippet is as follows:

```java
private static byte[] patternToHash(List<LockPatternView.Cell> pattern) {
    if (pattern == null) {
        return null;
    }
    final int patternSize = pattern.size();
    byte[] res = new byte[patternSize];
    for (int i = 0; i < patternSize; i++) {
        LockPatternView.Cell cell = pattern.get(i);
        res[i] = (byte) (cell.getRow() * 3 + cell.getColumn());
    }
    try {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] hash = md.digest(res);
        return hash;
    } catch (NoSuchAlgorithmException nsa) {
        return res;
    }
}
```

Because the pattern has a finite and very small number of possible combinations, the use of an unsalted hash isn't very safe. It is possible to generate a dictionary (rainbow table) with all possible hashes and compare the stored hash with this dictionary within a few seconds. To provide some more safety, Android stores the gesture.key file in a restricted area of the filesystem that cannot be accessed by a normal user. If you have to get around it, there are two ways to choose from.

On a rooted smartphone

If you are dealing with a rooted smartphone and USB debugging is enabled, cracking the pattern lock is quite easy. You just have to dump the /data/system/gesture.key file and compare the bytes of this file with your dictionary. These steps can be automated with the help of the following Python script:

```python
#!/usr/bin/python
#
import os
import sys
import subprocess
import hashlib
import sqlite3
from binascii import hexlify

SQLITE_DB = "GestureRainbowTable.db"


def crack(backup_dir):
    # dump the system file containing the hash
    print "Dumping gesture.key ..."
    saltdb = subprocess.Popen(
        ['adb', 'pull', '/data/system/gesture.key', backup_dir],
        stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    # wait for the adb pull to finish before reading the pulled file
    saltdb.communicate()
    gesturehash = open(backup_dir + "/gesture.key", "rb").readline()
    lookuphash = hexlify(gesturehash).decode()
    print "HASH: \033[0;32m" + lookuphash + "\033[m"

    # look up the hash in the precomputed rainbow table
    conn = sqlite3.connect(SQLITE_DB)
    cur = conn.cursor()
    cur.execute("SELECT pattern FROM RainbowTable WHERE hash = ?", (lookuphash,))
    gesture = cur.fetchone()[0]
    return gesture


if __name__ == '__main__':
    # check if a device is connected and adb is running as root
    if subprocess.Popen(['adb', 'get-state'],
            stdout=subprocess.PIPE).communicate()[0].split("\n")[0] == "unknown":
        print "no device connected - exiting..."
        sys.exit(2)

    # create the output directory if it does not exist yet
    backup_dir = sys.argv[1]
    try:
        os.stat(backup_dir)
    except:
        os.mkdir(backup_dir)

    gesture = crack(backup_dir)
    print "Screenlock Gesture: \033[0;32m" + gesture + "\033[m"
```
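The script above assumes that a GestureRainbowTable.db dictionary already exists. Because every pattern is just a short permutation of the digits 0 to 8, such a dictionary can be generated in a few seconds. The following is a minimal sketch of how this could be done; the RainbowTable table and hash column match the SELECT statement in the script above, while the "0-3-6-7-8" text format of the pattern column is an assumption for this sketch and should be adapted if your table stores patterns differently. It also hashes a few point sequences that cannot actually be drawn on a 3×3 grid, which is harmless for lookup purposes.

```python
#!/usr/bin/python
# Build GestureRainbowTable.db with the hashes of all possible lock patterns so
# that the cracking script above can resolve a dumped gesture.key in one query.
# The "0-3-6-7-8" pattern text format is an assumption made for this sketch.
import hashlib
import sqlite3
from binascii import hexlify
from itertools import permutations

SQLITE_DB = "GestureRainbowTable.db"


def pattern_to_hash(points):
    # mirror of Android's patternToHash(): one byte per point, SHA-1 over the sequence
    return hexlify(hashlib.sha1(bytearray(points)).digest()).decode()


def build_table():
    conn = sqlite3.connect(SQLITE_DB)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS RainbowTable "
                "(pattern TEXT, hash TEXT PRIMARY KEY)")
    # each of the 9 points may be used at most once; lengths 3 to 9 also cover
    # devices running Android versions older than 2.3.3
    for length in range(3, 10):
        for combo in permutations(range(9), length):
            cur.execute("INSERT OR IGNORE INTO RainbowTable VALUES (?, ?)",
                        ("-".join(str(p) for p in combo), pattern_to_hash(combo)))
    conn.commit()
    conn.close()


if __name__ == '__main__':
    build_table()
```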
With the help of the JTAG interface

If you are dealing with a stock, or at least an unrooted, smartphone, the whole process is a bit more complicated. First of all, you need special hardware, such as a Riff-Box and a JIG adapter, or some soldering skills. After you have gained a physical dump of the complete memory chip, the chase for the pattern lock can start.

In our experiment, we used an HTC Wildfire with a custom ROM and Android 2.3.3 flashed on it, protected with the pattern lock that we just saw in this article. After looking at the dump, we noticed that every chunk has an exact size of 2048 bytes. Since we know that our gesture.key file holds 20 bytes of data (the SHA-1 hash), we searched for this combination of bytes and noticed that there is one chunk starting with these 20 bytes, followed by 2012 bytes of zeroes and 16 random bytes (this seems to be some metadata from the filesystem).

We did a similar search on several other smartphone dumps (mainly older Android versions on the same phone) and it turned out that this method is the easiest way to get the pattern lock without being root. In other words, we do the following (a short sketch of these steps follows after the list):

1. Try to get a physical dump of the complete memory.
2. Search for a chunk with a size of 2048 bytes, starting with 20 random bytes, followed by 2012 bytes of zeroes and 16 random bytes.
3. Extract the 20 random bytes at the beginning of the chunk.
4. Compare these 20 bytes with your dictionary.
5. Type in the corresponding pattern on the smartphone.
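The following Python sketch of steps 2 to 4 is an illustration written for this article rather than a listing from the book. It assumes the dump has been saved to a file and that the rainbow table from the earlier sketch is available; the 2048-byte chunk layout is taken from the observation described above.

```python
#!/usr/bin/python
# Hedged sketch of steps 2-4 above: scan a raw memory dump for 2048-byte chunks
# that look like gesture.key (20 bytes of data, 2012 zero bytes, 16 metadata
# bytes) and look the 20-byte hash up in GestureRainbowTable.db.
# The dump file path is passed as the first argument.
import sqlite3
import sys
from binascii import hexlify

SQLITE_DB = "GestureRainbowTable.db"
CHUNK_SIZE = 2048


def find_candidate_hashes(dump_path):
    candidates = []
    with open(dump_path, "rb") as dump:
        while True:
            chunk = dump.read(CHUNK_SIZE)
            if len(chunk) < CHUNK_SIZE:
                break
            # 20 bytes of data, followed by 2012 zero bytes, followed by 16 bytes
            if chunk[20:2032] == b"\x00" * 2012 and chunk[:20] != b"\x00" * 20:
                candidates.append(hexlify(chunk[:20]).decode())
    return candidates


def lookup(candidates):
    conn = sqlite3.connect(SQLITE_DB)
    cur = conn.cursor()
    for lookuphash in candidates:
        cur.execute("SELECT pattern FROM RainbowTable WHERE hash = ?", (lookuphash,))
        row = cur.fetchone()
        if row:
            print("HASH %s -> pattern %s" % (lookuphash, row[0]))


if __name__ == '__main__':
    lookup(find_candidate_hashes(sys.argv[1]))
```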
Summary

In this article, we demonstrated how to get around the screen lock on an Android smartphone if the user has chosen a pattern to protect the smartphone against unauthorized access. Circumventing this protection can help investigators during the analysis of the smartphone in question, as they are then able to navigate through the apps and the smartphone UI in order to search for interesting sources or to confirm their findings. This step is only the first part of an investigation; if you would like more details about further steps and investigations on different systems, the book Mastering Python Forensics could be a great place to start.

Resources for Article:

Further resources on this subject:

- Evidence Acquisition and Analysis from iCloud [article]
- BackTrack Forensics [article]
- Introduction to Mobile Forensics [article]


Mastering Ansible – Protecting Your Secrets with Ansible

Packt
14 Oct 2015
10 min read
In this article by Jesse Keating, author of the book Mastering Ansible, we will see how to encrypt data at rest using Ansible. Secrets are meant to stay secret. Whether they are login credentials to a cloud service or passwords to database resources, they are secret for a reason. Should they fall into the wrong hands, they can be used to discover trade secrets and private customer data, create infrastructure for nefarious purposes, or worse. All of which could cost you or your organization a lot of time and money and cause headaches! In this article, we cover how to keep your secrets safe with Ansible:

- Encrypting data at rest
- Protecting secrets while operating

Encrypting data at rest

As a configuration management system or an orchestration engine, Ansible has great power. In order to wield that power, it is necessary to entrust secret data to Ansible. An automated system that prompts the operator for passwords all the time is not very efficient. To maximize the power of Ansible, secret data has to be written to a file that Ansible can read and utilize.

This creates a risk, though! Your secrets are sitting there on your filesystem in plain text. This is a physical and a digital risk. Physically, the computer could be taken from you and pawed through for secret data. Digitally, any malicious software that can break the boundaries set upon it could read any data your user account has access to. If you utilize a source control system, the infrastructure that houses the repository is just as much at risk.

Thankfully, Ansible provides a facility to protect your data at rest. That facility is Vault, which allows for encrypting text files so that they are stored "at rest" in an encrypted format. Without the key or a significant amount of computing power, the data is indecipherable. The key lessons to learn while dealing with encrypting data at rest are:

- Valid encryption targets
- Creating new encrypted files
- Encrypting existing unencrypted files
- Editing encrypted files
- Changing the encryption password on files
- Decrypting encrypted files
- Running the ansible-playbook command to reference encrypted files

Things Vault can encrypt

Vault can be used to encrypt any structured data file used by Ansible. This is essentially any YAML (or JSON) file that Ansible uses during its operation. This can include:

- group_vars/ files
- host_vars/ files
- include_vars targets
- vars_files targets
- --extra-vars targets
- Role variables
- Role defaults
- Task files
- Handler files

If the file can be expressed in YAML and read by Ansible, it is a valid file to encrypt with Vault. Because the entire file will be unreadable at rest, care should be taken not to be overzealous in picking which files to encrypt. Any source control operations with the files will be done with the encrypted content, making it very difficult to peer review. As a best practice, the smallest amount of data possible should be encrypted; this may even mean moving some variables into a file all by themselves.

Creating new encrypted files

To create new files, Ansible provides a new program, ansible-vault. This program is used to create and interact with Vault encrypted files. The subroutine to create encrypted files is the create subroutine, shown in the following screenshot:

To create a new file, you'll need to know two things ahead of time. The first is the password Vault should use to encrypt the file, and the second is the file name.
Once provided with this information, ansible-vault will launch a text editor, whichever editor is defined in the EDITOR environment variable. Once you save the file and exit the editor, ansible-vault will use the supplied password as a key to encrypt the file with the AES256 cipher. All Vault encrypted files referenced by a playbook need to be encrypted with the same key, or ansible-playbook will be unable to read them.

The ansible-vault program will prompt for a password unless the path to a password file is provided as an argument. The password file can either be a plain text file with the password stored as a single line, or it can be an executable file that outputs the password as a single line to standard out. Let's walk through a few examples of creating encrypted files. First, we'll create one and be prompted for a password; then we will provide a password file; and finally we'll create an executable to deliver the password.

The password prompt

On opening, the editor asks for the passphrase, as shown in the following screenshot:

Once the passphrase is confirmed, our editor opens and we're able to put content into the file.

On my system, the configured editor is vim. Your system may be different, and you may need to set your preferred editor as the value of the EDITOR environment variable.

Now we save the file. If we try to read the contents, we'll see that they are in fact encrypted, with a small header hint for Ansible to use later.

The password file

In order to use ansible-vault with a password file, we first need to create the password file. Simply echoing a password into a file can do this. Then we can reference this file while calling ansible-vault to create another encrypted file. Just as with being prompted for a password, the editor will open and we can write out our data.

The password script

This last example uses a password script. This is useful for designing a system in which a password can be stored in a central credentials store and shared with contributors to the playbook tree. Each contributor could have their own password to the shared credentials store, from where the Vault password can be retrieved. Our example will be far simpler: just simple output to standard out with a password. This file will be saved as password.sh. The file needs to be marked as executable for Ansible to treat it as such, as shown in the following screenshot:
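The password script does not have to be a shell script; any executable that prints the passphrase as a single line to standard out will do. As a hedged illustration, here is a small Python variant that reads the passphrase from an environment variable; the MY_VAULT_PASS variable name and the password.py file name are just examples for this sketch, not something the book prescribes.

```python
#!/usr/bin/env python
# Vault password script: ansible-vault and ansible-playbook execute this file
# and read the passphrase from its standard output (a single line).
# MY_VAULT_PASS is an illustrative variable name; a real setup would more
# likely fetch the password from a shared credentials store.
import os
import sys

password = os.environ.get("MY_VAULT_PASS")
if not password:
    sys.stderr.write("MY_VAULT_PASS is not set\n")
    sys.exit(1)

sys.stdout.write(password + "\n")
```

After marking it as executable (chmod +x password.py), it can be passed to any of the subroutines in the same way as a plain password file, for example: ansible-vault create --vault-password-file ./password.py even_more_secrets.yaml.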
Encrypting existing files

The previous examples all dealt with creating new encrypted files using the create subroutine. But what if we want to take an established file and encrypt it? A subroutine exists for this as well. It is named encrypt and is outlined in the following screenshot:

As with create, encrypt expects a password (or password file) and the path to a file. In this case, however, the file must already exist. Let's demonstrate this by encrypting an existing file, a_vars_file.yaml. We can see the file contents before and after the call to encrypt; after the call, the contents are indeed encrypted. Unlike the create subroutine, encrypt can operate on multiple files, making it easy to protect all the important data in one action: simply list all the files to be encrypted, separated by spaces. Attempting to encrypt already encrypted files will result in an error.

Editing encrypted files

Once a file has been encrypted with ansible-vault, it cannot be edited directly. Opening the file in an editor would only show the encrypted data, and making any changes to it would damage the file so that Ansible would be unable to read the contents correctly. We need a subroutine that will first decrypt the contents of the file, allow us to edit these contents, and then encrypt the new contents before saving them back to the file. Such a subroutine exists and is called edit. Here is a screenshot showing the available switches:

All our familiar options are back: an optional password file/script and the file to edit. If we edit the file we just encrypted, we'll notice that ansible-vault opens our editor with a temporary file as the file path. The editor saves this temporary file, and ansible-vault then encrypts it and moves it to replace the original file, as shown in the following screenshot:

Password rotation for encrypted files

Over time, as contributors come and go, it is a good idea to rotate the password used to encrypt your secrets. Encryption is only as good as the protection of the password. ansible-vault provides a subroutine, named rekey, that allows us to change the password, as shown here:

The rekey subroutine operates much like the edit subroutine. It takes an optional password file/script and one or more files to rekey. Note that while you can supply a file/script for the decryption of the existing files, you cannot supply one for the new passphrase; you will be prompted to input it. Let's rekey our even_more_secrets.yaml file.

Remember that all the encrypted files need to have a matching key. Be sure to rekey all the files at the same time.

Decrypting encrypted files

If, at some point, the need to encrypt data files goes away, ansible-vault provides a subroutine that can be used to remove the encryption from one or more encrypted files. This subroutine is (unsurprisingly) named decrypt, as shown here:

Once again, we have an optional argument for a password file/script and then one or more file paths to decrypt. Let's decrypt the file we created earlier using our password file.

Executing ansible-playbook with Vault-encrypted files

To make use of our encrypted content, we need to be able to inform ansible-playbook how to access any encrypted data it might encounter. Unlike ansible-vault, which exists solely to deal with file encryption/decryption, ansible-playbook is more general purpose and will not assume it is dealing with encrypted data by default. There are two ways to indicate that encrypted data may be encountered. The first is the argument --ask-vault-pass, which will prompt for the vault password required to unlock any encountered encrypted files at the very beginning of a playbook execution. Ansible will hold this provided password in memory for the duration of the playbook execution. The second method is to reference a password file or script via the familiar --vault-password-file argument.

Let's create a simple playbook named show_me.yaml that will print out the value of the variable inside a_vars_file.yaml, which we encrypted in a previous example:

```yaml
---
- name: show me an encrypted var
  hosts: localhost
  gather_facts: false

  vars_files:
    - a_vars_file.yaml

  tasks:
    - name: print the variable
      debug:
        var: something
```

The output is as follows:

Summary

In this article, we learned that Ansible can deal with sensitive data. It is important to understand how this data is stored at rest and how it is treated when utilized. With a little care and attention, Ansible can keep your secrets secret.
Encrypting secrets with ansible-vault can protect them while dormant on your filesystem or in a shared source control repository. Preventing Ansible from logging task data can protect against leaking data to remote log files or on-screen displays.

Resources for Article:

Further resources on this subject:

- Blueprinting Your Infrastructure [article]
- Advanced Playbooks [article]
- Ansible – An Introduction [article]

How to build a cross-platform desktop application with Node.js and Electron

Mika Turunen
14 Oct 2015
9 min read
Do you want to make a desktop application, but you have only mastered web development so far? Or maybe you feel overwhelmed by all of the different APIs that different desktop platforms have to offer? Or maybe you want to write a beautiful application in HTML5 and JavaScript and have it working on the desktop? Maybe you want to port an existing web application to the desktop? Well, luckily for us, there are a number of alternatives, and we are going to look into Node.js and Electron to help us get our HTML5 and JavaScript running on the desktop side with no hiccups.

What are the different parts in an Electron application

Commonly, all of the different components in Electron run either in the main process (backend) or in the rendering process (frontend). The main process can communicate with different parts of the operating system if there's a need for that, and the rendering process mainly just focuses on showing the content, pretty much like in any HTML5 application you find on the Internet. The processes communicate with each other through IPC (inter-process communication), which in Node.js terms is just a super simple event emitter and nothing else. You can send events and listen for events.

You can get the complete source code for this post from here.

Let's start working on it

You need to have Node.js installed; you can install it from https://nodejs.org/. Now that you have Node.js installed, you can start focusing on creating the application. First of all, create an empty directory where you will be placing your code.

```
# Open up your favourite terminal, command-line tool or any other alternative as we'll be running quite a bit of commands

# Create the directory
mkdir /some/location/that/works/in/your/system

# Go into the directory
cd /some/location/that/works/in/your/system

# Now we need to initialize it for our Electron and Node work
npm init
```

NPM will start asking you questions about the application we are about to make. You can just hit Enter and not answer any of them if you feel like it. We can fill them in manually once we know a bit more about our application. Now we should have a directory structure with the following files in it:

- package.json

And that's it, nothing else. We'll start by creating two new files in your favorite text editor or IDE. The files are (leave them empty for now):

- main.js
- index.html

Drop all of the files into the same directory as the package.json for easier handling of everything for now. main.js will be our main process file, which is the connecting layer to the underlying desktop operating system for our Electron application.

At this point we need to install Electron as a dependency for our application, which is really easy. Just write:

```
npm install --save electron-prebuilt
```

Alternatively, if you cloned/downloaded the associated GitHub repository, you can just go into the directory and write:

```
npm install
```

This will install all dependencies from package.json, including electron-prebuilt. Now we have Electron's prebuilt binaries installed as a direct dependency of our application and we can run our application on our platform. It's wise to manually update the package.json file that the npm init command generated for us. Open up the package.json file and modify the scripts block to look like this (or if it's missing, create it):

```json
"main": "main.js",
"scripts": {
    "start": "electron ."
},
```
The whole package.json file should be roughly something like this (taken from the tutorial repo I linked earlier):

```json
{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "repository": { },
  "keywords": [ ],
  "author": "",
  "license": "MIT",
  "bugs": { },
  "homepage": "",
  "dependencies": {
    "electron-prebuilt": "^0.25.3"
  }
}
```

The main property in the file points to main.js, and the start property in the scripts section tells it to run the command "electron .", which essentially tells Electron to digest the current directory as an application. Electron hardwires the property main as the main process for the application. This means that main.js is now our main process, just like we wanted.

Main and rendering process

We need to write the main process JavaScript and the rendering process HTML to get our application to start. Let's start with the main process, main.js. You can also find all of the code below in the tutorial repository here. The code has been peppered with a good amount of comments to give a deeper understanding of what is going on in the code and what the different parts do in the context of Electron.

```javascript
// Loads the Electron specific app module that is not commonly available for node or io.js
var app = require("app");

// Inter process communication -- used to communicate from the main process (this)
// to the actual rendering process (index.html) -- not really used in this example
var ipc = require("ipc");

// Loads the Electron specific module for browser window handling
var BrowserWindow = require("browser-window");

// Report crashes to our server.
var crashReporter = require("crash-reporter");

// Keep a global reference to the window object; if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected
var mainWindow = null;

// Quit when all windows are closed.
app.on("window-all-closed", function() {
    // OS X specific check
    if (process.platform != "darwin") {
        app.quit();
    }
});

// This event will be called when Electron has finished initialization and is ready for creating browser windows.
app.on("ready", function() {
    crashReporter.start();

    // Create the browser window (where the application's visual parts will be)
    mainWindow = new BrowserWindow({
        width: 800,
        height: 600
    });

    // Build the file path to index.html and load it
    mainWindow.loadUrl("file://" + __dirname + "/index.html");

    // Emitted when the window is closed.
    // The function just dereferences mainWindow so garbage collection can pick it up
    mainWindow.on("closed", function() {
        mainWindow = null;
    });
});
```

You can now start the application, but it'll just show an empty window since we have nothing to render in the rendering process. Let's fix that by populating our index.html with some content.

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Hello Tutorial!</title>
  </head>
  <body>
    <h2>Tutorial</h2>
    We are using node.js <script>document.write(process.version)</script>
    and Electron <script>document.write(process.versions["electron"])</script>.
  </body>
</html>
```

Because this is an Electron application, we have access to the Node.js/io.js process and other content relating to the actual Node.js/io.js setup we have going. The line document.write(process.version) is actually a call to the Node.js process. This is one of the great things about Electron: we are essentially bridging the gap between desktop applications and HTML5 applications. Now to run the application:
```
npm start
```

There is a huge list of different desktop environment integration possibilities you can explore with Electron, and you can read more about them in the Electron documentation at http://electron.atom.io/. Obviously this is still far from a complete application, but this should give you an understanding of how to work with Electron, how it behaves, and what you can do with it. You can start using your favorite JavaScript/CSS frontend framework in index.html to build a great looking GUI for your new desktop application, and you can also use all Node.js specific NPM modules in the backend along with the desktop environment integration. Maybe we'll look into writing a great looking GUI for our application with some additional desktop environment integration in another post.

Packaging and distributing Electron applications

Applications can be packaged into distributable, operating system specific containers, for example .exe files, which allows them to run on different hardware. The packaging process is fairly simple and well documented in Electron's documentation; it is out of the scope of this post, but worth a look if you want to package your application. To understand more about the application distribution and packaging process, read Electron's official documentation on it here.

Electron and its current use

Electron is still really fresh and right out of GitHub's knowing hands, but it has already been adopted by quite a few companies, and there are a number of applications already built on top of it.

Companies using Electron:

- Slack
- Microsoft
- GitHub

Applications built with Electron or using Electron:

- Visual Studio Code - Microsoft's Visual Studio Code
- Hearthdash - a card tracking application for Hearthstone
- Monu - a process monitoring app
- Kart - a frontend for RetroArch
- Friends - P2P chat powered by the web

Final words on Electron

It's obvious that Electron is still taking its first baby steps, but it's hard to deny the fact that more and more user interfaces will be written in different web technologies with HTML5, and this is one of the great starts for it. It'll be interesting to see how the gap between desktop and web applications develops as time goes on, and people like you and me will be playing a key role in the development of future applications. With the help of technologies like Electron, desktop application development just got that much easier.

For more Node.js content, look no further than our dedicated page!

About the author

Mika Turunen is a software professional hailing from the frozen cold Finland. He spends a good part of his day playing with emerging web and cloud related technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology he is spending time with his two cats and growing his beard.

Monitoring Physical Network Bandwidth Using OpenStack Ceilometer

Packt
14 Oct 2015
9 min read
In this article by Chandan Dutta Chowdhury and Sriram Subramanian, authors of the book OpenStack Networking Cookbook, we will explore various means to monitor network resource utilization using Ceilometer.

Introduction

Due to the dynamic nature of virtual infrastructure and multitenancy, OpenStack administrators need to monitor the resources used by tenants. The resource utilization data can be used to bill the users of a public cloud and to debug infrastructure-related problems. Administrators can also use the utilization reports for better capacity planning. In this post, we will look at OpenStack Ceilometer to meter physical network resource utilization.

Ceilometer components

The OpenStack Ceilometer project provides telemetry services. It can collect statistics on resource utilization and also provide threshold monitoring and alarm services. The Ceilometer project is composed of the following components:

- ceilometer-agent-central: This component is a centralized agent that collects monitoring statistics by polling individual resources
- ceilometer-agent-notification: This agent runs on a central server and consumes notification events on the OpenStack message bus
- ceilometer-collector: This component is responsible for dispatching events to the data stores
- ceilometer-api: This component provides the REST API server that the Ceilometer clients interact with
- ceilometer-agent-compute: This agent runs on each compute node and collects resource utilization statistics for the local node
- ceilometer-alarm-notifier: This component runs on the central server and is responsible for sending alarms
- ceilometer-alarm-evaluator: This component monitors the resource utilization and evaluates the conditions to raise an alarm when the resource utilization crosses predefined thresholds

The following diagram describes how the Ceilometer components are installed on a typical OpenStack deployment:

Installation

The installation of any typical OpenStack service consists of the following steps:

1. Creating a catalog entry in Keystone for the service
2. Installing the service packages that provide a REST API server to access the service

Creating a catalog entry

The following steps describe the creation of a catalog entry for the Ceilometer service:

1. An OpenStack service requires a user associated with it for authorization and authentication. To create a service user, use the following command and provide the password when prompted:

```
openstack user create --password-prompt ceilometer
```

2. This service user will need to have administrative privileges. Use the following command to add an administrative role to the service user:

```
openstack role add --project service --user ceilometer admin
```

3. Next, we will create a service record for the metering service:

```
openstack service create --name ceilometer --description "Telemetry" metering
```

4. The final step in the creation of the service catalog is to associate this service with its API endpoints. These endpoints are REST URLs used to access the OpenStack service and provide access for public, administrative, and internal use:

```
openstack endpoint create --publicurl http://controller:8777 --internalurl http://controller:8777 --adminurl http://controller:8777 --region RegionOne metering
```

Installation of the service packages

Let's now install the packages required to provide the metering service.
On an Ubuntu system, use the following command to install the Ceilometer packages:

```
apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier python-ceilometerclient
```

Configuration

The configuration of an OpenStack service requires the following information:

- Service database-related details, such as the database server address, database name, and login credentials to access the database. Ceilometer uses MongoDB to store the resource monitoring data. A MongoDB database user with appropriate access for this service should be created. Conventionally, the user and database name match the name of the OpenStack service; for example, for the metering service, we will use ceilometer as both the database user and the database name.
- A Keystone user associated with the service (the one we created in the installation section).
- The credentials to connect to the OpenStack message bus, such as RabbitMQ.

All these details need to be provided in the service configuration file. The main configuration file for Ceilometer is /etc/ceilometer/ceilometer.conf. The following is a sample template for the Ceilometer configuration. The placeholders mentioned here must be replaced appropriately:

- CEILOMETER_PASS: This is the password for the Ceilometer service user
- RABBIT_PASS: This is the message queue password for the OpenStack user
- CEILOMETER_DBPASS: This is the MongoDB password for the Ceilometer database
- TELEMETRY_SECRET: This is the secret key used by Ceilometer to sign messages

Let's have a look at the following configuration:

```ini
[DEFAULT]
...
auth_strategy = keystone
rpc_backend = rabbit

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[database]
...
connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer

[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
os_region_name = RegionOne

[publisher]
...
telemetry_secret = TELEMETRY_SECRET
```

Starting the metering service

Once the configuration is done, make sure that all the Ceilometer components are restarted in order to use the updated configuration:

```
# service ceilometer-agent-central restart
# service ceilometer-agent-notification restart
# service ceilometer-api restart
# service ceilometer-collector restart
# service ceilometer-alarm-evaluator restart
# service ceilometer-alarm-notifier restart
```

The Ceilometer compute agent must be installed and configured on the Compute node to monitor OpenStack virtual resources. In this article, however, we will focus on using the SNMP daemon running on the Compute and Network nodes to monitor physical resource utilization data.

Physical Network monitoring

The SNMP daemon must be installed and configured on all the OpenStack Compute and Network nodes for which we intend to monitor the physical network bandwidth. The Ceilometer central agent polls the SNMP daemon on the OpenStack Compute and Network nodes to collect the physical resource utilization data.
The SNMP configuration

The following steps describe the installation and configuration of the SNMP daemon:

1. To configure the SNMP daemon on the OpenStack nodes, you will need to install the following packages:

```
apt-get install snmpd snmp
```

2. Update the SNMP configuration file, /etc/snmp/snmpd.conf, to bind the daemon on all the network interfaces:

```
agentAddress udp:161
```

3. Update the /etc/snmp/snmpd.conf file to create an SNMP community and a view, and to provide access to the required MIBs. The following configuration assumes that the OpenStack nodes where the Ceilometer central agent runs have addresses in the 192.168.0.0/24 subnet:

```
com2sec OS_net_sec 192.168.0.0/24 OS_community
group   OS_Group   any OS_net_sec
view    OS_view    included .1 80
access  OS_Group   "" any noauth 0 OS_view none none
```

4. Next, restart the SNMP daemon with the updated configuration:

```
service snmpd restart
```

5. You can verify that the SNMP data is accessible from the Ceilometer central agent by running the following snmpwalk command from the node that runs the central agent. In this example, 192.168.0.2 is the IP of the Compute node running the SNMP daemon, while the snmpwalk command itself is executed on the node running the central agent. You should see a list of SNMP OIDs and their values:

```
snmpwalk -mALL -v2c -cOS_community 192.168.0.2
```

The Ceilometer data source configuration

Once the SNMP configuration is completed on the Compute and Network nodes, a Ceilometer data source must be configured for these nodes. A pipeline defines the transformation of the data collected by Ceilometer. The pipeline definition consists of a data source and a sink. A source defines the start of the pipeline and is associated with meters, resources, and a collection interval. It also defines the associated sink. A sink defines the end of the transformation pipeline, the transformation applied to the data, and the method used to publish the transformed data. Sending a notification over the message bus is the commonly used publishing mechanism.

To use the data samples provided by the SNMP daemon on the Compute and Network nodes, we will configure a meter_snmp data source with the predefined meter_sink in the Ceilometer pipeline on the Central node. We will use the OpenStack Controller node to run the central agent. The data source will also map the central agent to the Compute and Network nodes.

In the /etc/ceilometer/pipeline.yaml file, add the following lines. This configuration will allow the central agent to poll the SNMP daemon on three OpenStack nodes with the IPs 192.168.0.2, 192.168.0.3, and 192.168.0.4:

```yaml
- name: meter_snmp
  interval: 60
  resources:
      - snmp://OS_community@192.168.0.2
      - snmp://OS_community@192.168.0.3
      - snmp://OS_community@192.168.0.4
  meters:
      - "hardware.cpu*"
      - "hardware.memory*"
      - "hardware.disk*"
      - "hardware.network*"
  sinks:
      - meter_sink
```

Restart the Ceilometer central agent in order to load the pipeline configuration change:

```
# service ceilometer-agent-central restart
```

Sample Network usage data

With the SNMP daemon configured, the data source in place, and the central agent reloaded, the statistics of the physical network utilization will be collected by Ceilometer. Use the following command to view the utilization data:

```
ceilometer statistics -m hardware.network.incoming.datagram -q resource=<node-ip>
```

You can use various meters, such as hardware.network.incoming.datagram and hardware.network.outgoing.datagram. The following image shows some sample data:
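The same statistics can also be pulled programmatically with the python-ceilometerclient package installed earlier. The following is a minimal sketch only; it assumes the v2 client API of the 2015-era client library, and the credentials, endpoint, and the resource_id query field should be checked against your own deployment before use.

```python
# Hedged sketch: query the SNMP-based hardware meters through the Ceilometer
# v2 API using python-ceilometerclient. The credentials and the 'resource_id'
# query field are assumptions; adjust them to match your deployment.
from ceilometerclient import client

cclient = client.get_client(
    2,
    os_username='ceilometer',
    os_password='CEILOMETER_PASS',
    os_tenant_name='service',
    os_auth_url='http://controller:5000/v2.0',
)

query = [{'field': 'resource_id', 'op': 'eq', 'value': '192.168.0.2'}]

# Per-node statistics for incoming datagrams (the CLI equivalent is shown above)
for stat in cclient.statistics.list(
        meter_name='hardware.network.incoming.datagram', q=query):
    print("%s - %s: min=%s max=%s avg=%s" % (
        stat.period_start, stat.period_end, stat.min, stat.max, stat.avg))

# All hardware meters reported for this node
for meter in cclient.meters.list(q=query):
    if meter.name.startswith('hardware.'):
        print("%s (%s)" % (meter.name, meter.unit))
```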
All the meters associated with the physical resource monitoring for a host can be viewed using the following command, where 192.168.0.2 is the node IP as defined in the pipeline SNMP source:

```
ceilometer meter-list | grep 192.168.0.2
```

The following is the output of the preceding command:

How does it work

The SNMP daemon runs on each Compute and Network node and provides a measure of the physical resource usage on the local node. The data provided by the SNMP daemon on the various Compute and Network nodes is collected by the Ceilometer central agent. The Ceilometer central agent must be told about the various nodes that are capable of providing an SNMP-based usage sample; this mapping is provided in the Ceilometer pipeline definition. A single central agent can collect the data from multiple nodes, or multiple central agents can be used to partition the data collection task, with each central agent collecting the data for a few nodes.

Summary

In this article, we learned about the Ceilometer components and their installation and configuration. We also learned about monitoring the physical network. You can also refer to the following books by Packt Publishing that can help you build your knowledge of OpenStack:

- OpenStack Cloud Computing Cookbook, Second Edition by Kevin Jackson and Cody Bunch
- Learning OpenStack Networking (Neutron) by James Denton
- Implementing Cloud Storage with OpenStack Swift by Amar Kapadia, Sreedhar Varma, and Kris Rajana
- VMware vCloud Director Cookbook by Daniel Langenhan

Resources for Article:

Further resources on this subject:

- Using the OpenStack Dashboard [article]
- The orchestration service for OpenStack [article]
- Setting up VPNaaS in OpenStack [article]

Optimizing NetScaler Traffic

Packt
14 Oct 2015
11 min read
In this article by Marius Sandbu, author of the book Implementing NetScaler VPX™ - Second Edition, we look at how to optimize NetScaler traffic. The purpose of NetScaler is to act as a logistics department: it serves content to different endpoints using different protocols across different types of media, and it can be either a physical device or a device running on top of a hypervisor within a private cloud infrastructure. Since many factors are at play here, there is room for tuning and improvement. Some of the topics we will go through in this article are as follows:

- Tuning for virtual environments
- Tuning TCP traffic

Tuning for virtual environments

When setting up a NetScaler in a virtual environment, we need to keep in mind that many factors influence how it will perform, for instance, the underlying CPU of the virtual host, NIC throughput and capabilities, vCPU overallocation, NIC teaming, MTU size, and so on. So, it is always important to remember the hardware requirements when setting up a NetScaler VPX on a virtualization host.

Another important factor to keep in mind when setting up a NetScaler VPX is the concept of packet engines. By default, when we set up or import a NetScaler, it is set up with two vCPUs. The first of these is dedicated to management purposes, and the second vCPU is dedicated to doing all the packet processing, such as content switching, SSL offloading, ICA proxy, and so on. The second vCPU might be seen as 100% utilized in the hypervisor's performance monitoring tools, but the correct way to check whether it is actually being utilized is the CLI command:

```
stat system
```

By default, VPX 10 and VPX 200 only support one packet engine. This is because, given their bandwidth limitations, they do not require more packet engine CPUs to process the packets. On the other hand, VPX 1000 and VPX 3000 support up to 3 packet engines. This is, in most cases, needed to process all the packets going through the system if the bandwidth is going to be utilized to its fullest. In order to add a new packet engine, we need to assign more vCPUs and more memory to the VPX. Packet engines also have the benefit of load balancing processing between them, so instead of having a single vCPU that is 100% utilized, we can even out the load between multiple vCPUs and get even better performance and bandwidth. The following table shows the different editions and their support for multiple packet engines:

License/Memory   2 GB   4 GB   6 GB   8 GB   10 GB   12 GB
VPX 10            1      1      1      1      1       1
VPX 200           1      1      1      1      1       1
VPX 1000          1      2      3      3      3       3
VPX 3000          1      2      3      3      3       3

It is important to remember that multiple PEs are only available on VMware, XenServer, and Hyper-V, but not on KVM.

If we plan on using NIC teaming on the underlying virtualization host, there are some important aspects to consider. Most vendors have guidelines that describe the kinds of load balancing techniques available in their hypervisor. For instance, Microsoft has a guide that describes their features. You can find the guide at http://www.microsoft.com/en-us/download/details.aspx?id=30160. One of the NIC teaming options, called Switch Independent Dynamic Mode, has an interesting side effect: it replaces the source MAC address of the virtual machine with that of the primary NIC on the host, hence we might experience packet loss on a VPX.
Therefore, it is recommended in most cases that we use LACP/LAG or, in the case of Hyper-V, the Hyper-V Port distribution feature instead. Note that features such as SR-IOV or PCI passthrough are not supported for NetScaler VPX.

NetScaler 11 also introduced support for jumbo frames for the VPX, which allows for a much higher payload in an Ethernet frame. Instead of the traditional 1500 bytes, we can scale up to 9000 bytes of payload. This allows for much lower overhead, since each frame carries more data. This requires that the underlying NIC on the hypervisor supports the feature and has it enabled, and in most cases this only works for communication with backend resources and not for users accessing public resources, because most routers and ISPs block such a high MTU. This feature can be configured at the interface level in NetScaler under System | Network | Interface; choose Select Interface and click on Edit. Here, we have an option called Maximum Transmission Unit, which can be adjusted up to 9,216 bytes. In this setup, NetScaler communicates with backend resources using jumbo frames and then adjusts the MTU when communicating back to clients. It can also communicate with jumbo frames in both directions, in case the NetScaler is set up as a backend load balancer. It is important to note that NetScaler only supports jumbo frames load balancing for the following protocols:

- TCP
- TCP-based protocols, like HTTP
- SIP
- RADIUS

TCP tuning

Much of the traffic going through NetScaler is based on the TCP protocol, whether it is ICA proxy, HTTP, and so on. TCP is a protocol that provides reliable, error-checked delivery of packets back and forth. This ensures that data is successfully transferred before being processed further. TCP has many features for adjusting bandwidth during transfer, congestion checking, adjusting segment sizing, and so on. We will delve a little into all of these features in this section.

We can adjust the way NetScaler uses TCP using TCP profiles; by default, all services and vServers created on the NetScaler use the default TCP profile nstcp_default_profile. Note that these profiles can be found by navigating to System | Profiles | TCP Profiles. Make sure not to alter the default TCP profile without properly consulting the network team, as this affects the way TCP works for all default services on the NetScaler.

The default profile has most of the different TCP features turned off; this is to ensure compatibility with most infrastructures, and the profile has not been adjusted much since it was first added to NetScaler. Citrix also ships a number of other profiles for different use cases, so we are going to look a bit closer at the options we have here. For instance, the profile nstcp_default_XA_XD_profile, which is intended for ICA proxy traffic, differs from the default profile in the following settings:

- Window Scaling
- Selective Acknowledgement
- Forward Acknowledgement
- Use of Nagle's algorithm

Window Scaling is a TCP option that allows the receiving point to accept more data than is allowed by the window size in the TCP RFC before getting an acknowledgement. By default, the window size is set to accept 65,536 bytes. With window scaling enabled, the window size is, in essence, bitwise shifted. This is an option that needs to be enabled on both endpoints in order to be used, and it is only sent in the initial three-way handshake.
Selective Acknowledgement (SACK) is a TCP option that allows for better handling of TCP retransmission. Consider a scenario with two hosts communicating where SACK is not enabled: one of the hosts suddenly drops out of the network and loses some packets, and when it comes back online it receives more packets from the other host. In this case, the first host will only ACK the last packet it got from the other host before it dropped out. With SACK enabled, it notifies the other host of the last packet it got before it dropped out as well as the packets it received after coming back online. This allows for faster recovery of the communication, since the other host does not need to resend all the packets.

Forward Acknowledgement (FACK) is a TCP option that works in conjunction with SACK and helps avoid TCP congestion by measuring the total number of data bytes outstanding in the network. Using the information from SACK, it can more precisely calculate how much data it can retransmit.

Nagle's algorithm is a TCP feature that tries to cope with the small packet problem. Applications like Telnet often send each keystroke in its own packet, creating multiple small packets containing only 1 byte of data, which results in a 41-byte packet for one keystroke. The algorithm works by combining a number of small outgoing messages into the same packet, thus avoiding overhead.

Since ICA is a protocol that operates with many small packets, which might create congestion, Nagle is enabled in the TCP profile. Also, since many users will be connecting using 3G or Wi-Fi, which might in some cases be unreliable when changing channels, we need options that allow clients to re-establish a connection quickly, which is why SACK and FACK are used. Note that Nagle might have a negative performance impact on applications that have their own buffering mechanism and operate inside a LAN.

If we take a look at another profile, such as nstcp_default_lan, we can see that FACK is disabled; this is because the resources needed to calculate the amount of outstanding data in a high-speed network might be too great.

Another important aspect of these profiles is the TCP congestion algorithm. For instance, nstcp_default_mobile uses the Westwood congestion algorithm because it is much better at handling large bandwidth-delay paths, such as wireless links. The following congestion algorithms are available in NetScaler:

- Default (based on TCP Reno)
- Westwood (based on TCP Westwood+)
- BIC
- CUBIC
- Nile (based on TCP Illinois)

It is worth noting that Westwood is aimed at 3G/4G connections or other slow wireless connections. BIC is aimed at high-bandwidth connections with high latency, such as WAN connections. CUBIC is almost like BIC, but not as aggressive when it comes to fast ramp-up and retransmissions; it is worth noting, however, that CUBIC is the default TCP algorithm in Linux kernels from 2.6.19 to 3.1. Nile is a new algorithm created by Citrix, based on TCP Illinois, and is targeted at high-speed, long-distance networks. It achieves higher throughput than standard TCP and is also compatible with standard TCP.

So, here we can customize the algorithm that is best suited for a service. For instance, if we have a vServer that serves content to mobile devices, we could use the nstcp_default_mobile TCP profile.

There are also some other parameters that are important to keep in mind while working with TCP profiles. One of these parameters is Multipath TCP. This is a feature that allows an endpoint to use multiple paths to reach a service. A typical example is a mobile device that has both WLAN and 3G capabilities; Multipath TCP allows the device to communicate with a service on a NetScaler using both channels at the same time. This requires that the device supports communication using both methods and that the service or application on the device supports Multipath TCP.

So let's take an example of how a TCP profile might look. Suppose we have a vServer on NetScaler that serves an application to mobile devices, meaning that the most common way users access this service is over 3G or Wi-Fi. The web service has its own buffering mechanism, meaning it tries not to send small packets over the link, and the application is Multipath TCP aware. In this scenario, we can leverage the nstcp_default_mobile profile, since it has most of the defaults for a mobile scenario, but we can also enable Multipath TCP, save this as a new profile, and bind it to the vServer.
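If you prefer to script such a change rather than click through the GUI, the NetScaler Nitro REST API can be used for the same task. The following Python sketch is purely illustrative and not taken from the book: the nstcpprofile resource name, the ws/sack/mptcp attribute names, the tcpprofilename binding attribute, and the credentials and vServer name are all assumptions that should be verified against the Nitro API documentation for your NetScaler version.

```python
# Hedged sketch: create a mobile-oriented TCP profile with Multipath TCP enabled
# via the NetScaler Nitro REST API and bind it to a load balancing vServer.
# NSIP, credentials, the vServer name, and the resource/attribute names below
# are assumptions for this example; verify them against your Nitro documentation.
import requests

NSIP = "192.168.1.10"  # NetScaler management IP (placeholder)
BASE = "http://%s/nitro/v1/config" % NSIP
HEADERS = {
    "X-NITRO-USER": "nsroot",  # placeholder credentials
    "X-NITRO-PASS": "nsroot",
    "Content-Type": "application/json",
}

# Create a TCP profile with window scaling, SACK, and Multipath TCP enabled
profile = {
    "nstcpprofile": {
        "name": "tcp_mobile_mptcp",
        "ws": "ENABLED",
        "sack": "ENABLED",
        "mptcp": "ENABLED",
    }
}
requests.post(BASE + "/nstcpprofile", json=profile, headers=HEADERS).raise_for_status()

# Bind the new profile to an existing load balancing vServer
binding = {"lbvserver": {"name": "lb_vip_mobile", "tcpprofilename": "tcp_mobile_mptcp"}}
requests.put(BASE + "/lbvserver", json=binding, headers=HEADERS).raise_for_status()
```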
In order to bind a TCP profile to a vServer in the GUI, we have to go to the particular vServer | Edit | Profiles | TCP Profiles, as shown in the following screenshot:

Note that AOL did a presentation on their own TCP customization on NetScaler; the presentation can be found at http://www.slideshare.net/masonke/net-scaler-tcpperformancetuningintheaolnetwork. It is also important to note that TCP tuning should always be done in cooperation with the network team.

Summary

In this article, we learned about tuning for virtual environments and TCP tuning.

Resources for Article:

Further resources on this subject:

- Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
- XenMobile™ Solutions Bundle [article]
- Load balancing MSSQL [article]

Attracting Leads and Building Your List

Packt
14 Oct 2015
9 min read
In this article by Paul Sokol, author of the book Infusionsoft Cookbook, we will learn how to create a Contact Us form and how to build a lead magnet delivery. Infusionsoft invented a holistic business strategy and customer experience journey framework named Lifecycle Marketing. There are three phases in Lifecycle Marketing: Attract, Sell, and Wow. This article concerns itself with different tactics to attract and capture leads. Any business can use these recipes in one way or another. How you use them is up to you. Be creative!

Creating a Contact Us form

Every website needs to have some method for people to make general inquiries. This is particularly important for service-based businesses that operate locally. If a website is missing a simple Contact Us form, that means good leads from our hard-earned traffic are slipping away. Fixing this hole in our online presence creates another lead channel for the business.

Getting ready

We need to be editing a new campaign and have some manner of getting a form on our site (either ourselves or via a webmaster).

How to do it...

1. Drag out a new traffic source, a web form goal, and a sequence. Connect them together as shown in the following image and rename all elements for visual clarity.
2. Double-click on the web form goal to edit its content.
3. Add the following four fields to the form, as shown in the following screenshot:
   - First Name
   - Last Name
   - Email
   - Phone (can be left as optional)
4. Create a custom Text Area field for inquiry comments. Add this custom field to the form using the Other snippet and leave it as optional.
5. Click on the Submit button to change the call to action. Change the Button Label to Please Contact Me, select Center alignment, and click on Save.
6. Add a Title snippet above all the fields and provide some instruction for the visitor.
7. Click on the Thank-you Page tab at the top-left side of the page.
8. Remove all elements and replace them with a single Title snippet containing a confirmation message for the visitor.
9. Click on the Draft button in the upper-right side of the page to change the form to Ready.
10. Click on Back to Campaign in the upper-left side of the page and open the connected sequence.
11. Drag out a new Task step. Connect and rename it appropriately.
12. Double-click on the Task step and configure it accordingly. Don't forget to merge in any appropriate information or instructions for the end user.
13. Click on the Draft button in the upper-right corner of the page to change the task to Ready.
14. Click on Back to Sequence in the upper-left corner of the page.
15. Click on the Draft button in the upper-right corner of the page to change the sequence to Ready.
16. Click on Back to Campaign in the upper-left corner of the page and publish the campaign.
17. Place the Contact Us form on our website.

How it works...

When a website visitor fills out the form, a task is created for someone to follow up with that visitor.

There's more...

For a better experience, add a request received e-mail to the post-form sequence to establish an inbox relationship. Be sure to respect e-mail preferences, as this kind of form submission isn't providing direct consent to be marketed to.

This recipe, out of the box, creates a dead end after the form submission. It is recommended to drive traffic from the thank you page somewhere else to capitalize on visitor momentum, because visitors are very engaged after submitting a form.
For example, we could point people to follow us on a particular social network, an FAQ page on our site, or our blog. We can merge any captured information onto the thank you page. Use this to create a personalized experience for your brand voice: We can add/remove form fields based on our needs. Just remember that a Contact Us form is for general inquiries and should be kept simple to reduce conversion friction; the fewer fields the better. If we want to segment inquiries based on their type, we can use a radio button to segment inquiry types without sacrificing a custom field because the form's radio buttons can be used within a decision node directly coming out of the form. See Also For a template similar to this recipe, download the Automate Contact Requests campaign from the Marketplace. Building a lead magnet delivery A lead magnet is exactly what it sounds like: it is something designed to attract new leads like a magnet. Offering some digital resource in exchange for contact information is a common example of a lead magnet. A lead magnet can take many different forms, such as: PDF E-book Slideshow Audio file This is by no means an exhaustive list either. Automating the delivery and follow-up of a lead magnet is a simple and very powerful way to save time and get organized. This recipe shows how to build a mechanism for capturing interested leads, delivering an online lead magnet via e-mail, and following up with people who download it. Getting ready We need to have the lead magnet hosted somewhere publicly that is accessible via a URL and be editing a new campaign. How to do it... Drag out a new traffic source, web form goal, link click goal, and two sequences. Connect them as shown in the following image and rename all elements for visual clarity: Create a campaign link and set it as the public download URL for the lead magnet. Double-click on the web form goal to edit its content. Design the form to include:     A Title snippet with the lead magnet's name and a call to action     A first name and e-mail field     A call to action The form should look as follows: Set a confirmation message driving the visitor to their e-mail on the thank you page: Mark the form as Ready, go Back to Campaign, and open the first sequence. Drag out a new Email step, connect, and rename it appropriately: Double-click on the Email step and write a simple delivery message. Make sure the download link(s) in the e-mail are using the campaign link. For best results, thank the person and tease some of the information contained in the lead magnet: Mark the e-mail as Ready and go Back to Sequence. Mark the Sequence as Ready to go Back to Campaign. Double-click on the link click goal. Check the download link within the e-mail and go Back to Campaign: Open the post-link click goal sequence. Drag out a Delay Timer, an Email step, and connect them accordingly. Configure the Delay Timer to wait 1 day then run in the morning and rename the Email step: Double-click on the Email step and write a simple download follow-up. Make sure it furthers the sales conversation, feels personal, and gives a clear next step: Mark the e-mail as Ready and go Back to Sequence. Mark the sequence as Ready and go Back to Campaign; publish the campaign. Place the lead magnet request form on our website. Promote this new offering across social media to drive some initial traffic. How it works... When a visitor fills out the lead magnet request form, Infusionsoft immediately sends them an e-mail with a download link for the lead magnet. 
Then, it waits until that person clicks on the download link. When this happens, Infusionsoft waits 1 day then sends a follow-up e-mail addressing the download behavior. There's more... If the lead magnet is less than 10 MB, we can upload it to Infusionsoft's file box and grab a hosted URL from there. If the lead magnet is more than 10 MB, use a cloud-based file-sharing service that offers public URLs such as Dropbox, Google Drive, or Box. Leveraging a campaign link ensures updating the resource is easy; especially if the link is used in multiple places. We can also use a campaign merge field for the lead magnet title to ensure scalability and easy duplication of this campaign. It is important the word Email is present in the form's Submit button. This primes them for inbox engagement and creates clear expectations for what will occur after they request the lead magnet. The download follow-up should get a conversation going and feel really personal. This tactic can bubble up hot clients; people appreciate it when others pay attention to them. For a more personal experience, the lead magnet delivery e-mail(s) can come from the company and the follow-up can come directly from an individual. Not everyone is going to download the lead magnet right away. Add extra reminder e-mails into the mix, one at three days and then one at a week, to ensure those who are genuinely interested don't slip through the cracks. Add a second form on the backend that collects addresses to ship a physical copy if appropriate. This would work well for a physical print of an e-book, a burned CD of an audio file, or a DVD of video content. This builds your direct mail database and helps further segment those who are most engaged and trusting. We can also leverage a second form to collect other information like a phone number or e-mail subscription preferences. Adding an image of the lead magnet to the page containing the request web form can boost conversions. Even if there is never a physical version, there are lots of tools out there to create a digital image of an e-book, CD, report, and more. This recipe is using a web form. We can also leverage a formal landing page at the beginning if desired. Although we can tag those who request the lead magnet, we don't have to because a Campaign Goal Completion report can show us all the people who have submitted the form. We would only need to tag them in instances where the goal completion needs to be universally searchable (for instance, doing an order search via a goal completion tag). Summary In this article, we learned how to make a Contact Us form. We also discussed one of the phases of Lifecycle Marketing (Attract) and learned how to build a lead magnet delivery. Resources for Article: Further resources on this subject: Asynchronous Communication between Components [article] Introducing JAX-RS API [article] The API in Detail [article]

Introducing Test-driven Machine Learning

Packt
14 Oct 2015
19 min read
In this article by Justin Bozonier, the author of the book Test Driven Machine Learning, we will see how to develop complex software (sometimes rooted in randomness) in small, controlled steps also it will guide you on how to begin developing solutions to machine learning problems using test-driven development (from here, this will be written as TDD). Mastering TDD is not something the book will achieve. Instead, the book will help you begin your journey and expose you to guiding principles, which you can use to creatively solve challenges as you encounter them. We will answer the following three questions in this article: What are TDD and behavior-driven development (BDD)? How do we apply these concepts to machine learning, and making inferences and predictions? How does this work in practice? (For more resources related to this topic, see here.) After having answers to these questions, we will be ready to move onto tackling real problems. The book is about applying these concepts to solve machine learning problems. This article is the largest theoretical explanation that we will have with the remainder of the theory being described by example. Due to the focus on application, you will learn much more than what you can learn about the theory of TDD and BDD. To read more about the theory and ideals, search the internet for articles written by the following: Kent Beck—The father of TDD Dan North—The father of BDD Martin Fowler—The father of refactoring, he has also created a large knowledge base, on these topics James Shore—one of the author of The Art of Agile Development, has a deep theoretical understanding of TDD, and explains the practical value of it quite well These concepts are incredibly simple and yet can take a lifetime to master. When applied to machine learning, we must find new ways to control and/or measure the random processes inherent in the algorithm. This will come up in this article as well as others. In the next section, we will develop a foundation for TDD and begin to explore its application. Test-driven development Kent Beck wrote in his seminal book on the topic that TDD consists of only two specific rules, which are as follows: Don't write a line of new code unless you first have a failing automated test Eliminate duplication This as he noted fairly quickly leads us to a mantra, really the mantra of TDD: Red, Green, Refactor. If this is a bit abstract, let me restate it that TDD is a software development process that enables a programmer to write code that specifies the intended behavior before writing any software to actually implement the behavior. The key value of TDD is that at each step of the way, you have working software as well as an itemized set of specifications. TDD is a software development process that requires the following: Writing code to detect the intended behavioral change. Rapid iteration cycle that produces working software after each iteration. It clearly defines what a bug is. If a test is not failing but a bug is found, it is not a bug. It is a new feature. Another point that Kent makes is that ultimately, this technique is meant to reduce fear in the development process. Each test is a checkpoint along the way to your goal. If you stray too far from the path and wind up in trouble, you can simply delete any tests that shouldn't apply, and then work your code back to a state where the rest of your tests pass. There's a lot of trial and error inherent in TDD, but the same matter applies to machine learning. 
The software that you design using TDD will also be modular enough to be able to have different components swapped in and out of your pipeline. You might be thinking that just thinking through test cases is equivalent to TDD. If you are like the most people, what you write is different from what you might verbally say, and very different from what you think. By writing the intent of our code before we write our code, it applies a pressure to our software design that prevents you from writing "just in case" code. By this I mean the code that we write just because we aren't sure if there will be a problem. Using TDD, we think of a test case, prove that it isn't supported currently, and then fix it. If we can't think of a test case, we don't add code. TDD can and does operate at many different levels of the software under development. Tests can be written against functions and methods, entire classes, programs, web services, neural networks, random forests, and whole machine learning pipelines. At each level, the tests are written from the perspective of the prospective client. How does this relate to machine learning? Lets take a step back and reframe what I just said. In the context of machine learning, tests can be written against functions, methods, classes, mathematical implementations, and the entire machine learning algorithms. TDD can even be used to explore technique and methods in a very directed and focused manner, much like you might use a REPL (an interactive shell where you can try out snippets of code) or the interactive (I)Python session. The TDD cycle The TDD cycle consists of writing a small function in the code that attempts to do something that we haven't programmed yet. These small test methods will have three main sections; the first section is where we set up our objects or test data; another section is where we invoke the code that we're testing; and the last section is where we validate that what happened is what we thought would happen. You will write all sorts of lazy code to get your tests to pass. If you are doing it right, then someone who is watching you should be appalled at your laziness and tiny steps. After the test goes green, you have an opportunity to refactor your code to your heart's content. In this context, refactor refers to changing how your code is written, but not changing how it behaves. Lets examine more deeply the three steps of TDD: Red, Green, and Refactor. Red First, create a failing test. Of course, this implies that you know what failure looks like in order to write the test. At the highest level in machine learning, this might be a baseline test where baseline is a better than random test. It might even be predicts random things, or even simpler always predicts the same thing. Is this terrible? Perhaps, it is to some who are enamored with the elegance and artistic beauty of his/her code. Is it a good place to start, though? Absolutely. A common issue that I have seen in machine learning is spending so much time up front, implementing the one true algorithm that hardly anything ever gets done. Getting to outperform pure randomness, though, is a useful change that can start making your business money as soon as it's deployed. Green After you have established a failing test, you can start working to get it green. If you start with a very high-level test, you may find that it helps to conceptually break that test up into multiple failing tests that are the lower-level concerns. 
I'll dive deeper into this later on in this article but for now, just know that you want to get your test passing as soon as possible; lie, cheat, and steal to get there. I promise that cheating actually makes your software's test suite that much stronger. Resist the urge to write the software in an ideal fashion. Just slap something together. You will be able to fix the issues in the next step. Refactor You got your test to pass through all the manners of hackery. Now, you get to refactor your code. Note that it is not to be interpreted loosely. Refactor specifically means to change your software without affecting its behavior. If you add the if clauses, or any other special handling, you are no longer refactoring. Then you write the software without tests. One way where you will know for sure that you are no longer refactoring is that you've broken previously passing tests. If this happens, we back up our changes until our tests pass again. It may not be obvious but this isn't all that it takes for you to know that you haven't changed behavior. Read Refactoring: Improving the Design of Existing Code, Martin Fowler for you to understand how much you should really care for refactoring. By the way of his illustration in this book, refactoring code becomes a set of forms and movements not unlike karate katas. This is a lot of general theory, but what does a test actually look like? How does this process flow in a real problem? Behavior-driven development BDD is the addition of business concerns to the technical concerns more typical of TDD. This came about as people became more experienced with TDD. They started noticing some patterns in the challenges that they were facing. One especially influential person, Dan North, proposed some specific language and structure to ease some of these issues. Some issues he noticed were the following: People had a hard time understanding what they should test next. Deciding what to name a test could be difficult. How much to test in a single test always seemed arbitrary. Now that we have some context, we can define what exactly BDD is. Simply put, it's about writing our tests in such a way that they will tell us the kind of behavior change they affect. A good litmus test might be asking oneself if the test you are writing would be worth explaining to a business stakeholder. How this solves the previous may not be completely obvious, but it may help to illustrate what this looks like in practice. It follows a structure of given, when, then. Committing to this style completely can require specific frameworks or a lot of testing ceremony. As a result, I loosely follow this in my tests as you will see soon. Here's a concrete example of a test description written in this style Given an empty dataset when the classifier is trained, it should throw an invalid operation exception. This sentence probably seems like a small enough unit of work to tackle, but notice that it's also a piece of work that any business user, who is familiar with the domain that you're working in, would understand and have an opinion on. You can read more about Dan North's point of view in this article on his website at dannorth.net/introducing-bdd/. The BDD adherents tend to use specialized tools to make the language and test result reports be as accessible to business stakeholders as possible. In my experience and from my discussions with others, this extra elegance is typically used so little that it doesn't seem worthwhile. 
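To make that style concrete without any special BDD framework, here is a plain, nose-style sketch of the test described above — given an empty dataset, when the classifier is trained, then it should raise an error. The class and exception names are hypothetical, not the book's code:

class EmptyDatasetError(Exception):
    pass

class StubClassifier:
    """A throwaway classifier used only to illustrate the given/when/then shape."""
    def train(self, dataset):
        if not dataset:
            raise EmptyDatasetError("cannot train on an empty dataset")

def given_an_empty_dataset_when_the_classifier_is_trained_test():
    # given
    classifier = StubClassifier()
    empty_dataset = []
    # when / then
    try:
        classifier.train(empty_dataset)
        assert False, "Then it should raise an error for an empty dataset."
    except EmptyDatasetError:
        pass  # this is the behavior we wanted

The test name and the two comments carry the given/when/then structure; no extra tooling is required to keep the description readable to a business stakeholder.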
The approach you will learn in the book will take a simplicity first approach to make it as easy as possible for someone with zero background to get up to speed. With this in mind, lets work through an example. Our first test Let's start with an example of what a test looks like in Python. The main reason for using this is that while it is a bit of a pain to install a library, this library, in particular, will make everything that we do much simpler. The default unit test solution in Python requires a heavier set up. On top of this, by using nose, we can always mix in tests that use the built-in solution where we find that we need the extra features. First, install it like this: pip install nose If you have never used pip before, then it is time for you to know that it is a very simple way to install new Python libraries. Now, as a hello world style example, lets pretend that we're building a class that will guess a number using the previous guesses to inform it. This is the first simplest example to get us writing some code. We will use the TDD cycle that we discussed previously, and write our first test in painstaking detail. After we get through our first test and have something concrete to discuss, we will talk about the anatomy of the test that we wrote. First, we must write a failing test. The simplest failing test that I can think of is the following: def given_no_information_when_asked_to_guess_test(): number_guesser = NumberGuesser() result = number_guesser.guess() assert result is None, "Then it should provide no result." The context for assert is in the test name. Reading the test name and then the assert name should do a pretty good job of describing what is being tested. Notice that in my test, I instantiate a NumberGuesser object. You're not missing any steps; this class doesn't exist yet. This seems roughly like how I'd want to use it. So, it's a great place to start with. Since it doesn't exist, wouldn't you expect this test to fail? Lets test this hypothesis. To run the test, first make sure your test file is saved so that it ends in _tests.py. From the directory with the previous code, just run the following: nosetests When I do this, I get the following result: Here's a lot going on here, but the most informative part is near the end. The message is saying that NumberGuesser does not exist yet, which is exactly what I expected since we haven't actually written the code yet. Throughout the book, we'll reduce the detail of the stack traces that we show. For now, we'll keep things detailed to make sure that we're on the same page. At this point, we're in a red state in the TDD cycle. Use the following steps to create our first successful test: Now, create the following class in a file named NumberGuesser.py: class NumberGuesser: """Guesses numbers based on the history of your input"" Import the new class at the top of my test file with a simple import NumberGuesser statement. I rerun nosetests, and get the following: TypeError: 'module' object is not callable Oh whoops! I guess that's not the right way to import the class. This is another very tiny step, but what is important is that we are making forward progress through constant communication with our tests. We are going through extreme detail because I can't stress this point enough; bear with me for the time being. 
Change the import statement to the following: from NumberGuesser import NumberGuesser Rerun nosetests and you will see the following: AttributeError: NumberGuesser instance has no attribute 'guess' The error message has changed, and is leading to the next thing that needs to be changed. From here, just implement what we think we need for the test to pass: class NumberGuesser: """Guesses numbers based on the history of your input""" def guess(self): return None On rerunning the nosetests, we'll get the following result: That's it! Our first successful test! Some of these steps seem so tiny so as to not being worthwhile. Indeed, overtime, you may decide that you prefer to work on a different level of detail. For the sake of argument, we'll be keeping our steps pretty small if only to illustrate just how much TDD keeps us on track and guides us on what to do next. We all know how to write the code in very large, uncontrolled steps. Learning to code surgically requires intentional practice, and is worth doing explicitly. Lets take a step back and look at what this first round of testing took. Anatomy of a test Starting from a higher level, notice how I had a dialog with Python. I just wrote the test and Python complained that the class that I was testing didn't exist. Next, I created the class, but then Python complained that I didn't import it correctly. So then, I imported it correctly, and Python complained that my guess method didn't exist. In response, I implemented the way that my test expected, and Python stopped complaining. This is the spirit of TDD. You have a conversation between you and your system. You can work in steps as little or as large as you're comfortable with. What I did previously could've been entirely skipped over, and the Python class could have been written and imported correctly the first time. The longer you go without talking to the system, the more likely you are to stray from the path to getting things working as simply as possible. Lets zoom in a little deeper and dissect this simple test to see what makes it tick. Here is the same test, but I've commented it, and broken it into sections that you will see recurring in every test that you write: def given_no_information_when_asked_to_guess_test(): # given number_guesser = NumberGuesser() # when guessed_number = number_guesser.guess() # then assert guessed_number is None, 'there should be no guess.' Given This section sets up the context for the test. In the previous test, you acquired that I didn't provide any prior information to the object. In many of our machine learning tests, this will be the most complex portion of our test. We will be importing certain sets of data, sometimes making a few specific issues in the data and testing our software to handle the details that we would expect. When you think about this section of your tests, try to frame it as Given this scenario… In our test, we might say Given no prior information for NumberGuesser… When This should be one of the simplest aspects of our test. Once you've set up the context, there should be a simple action that triggers the behavior that you want to test. When you think about this section of your tests, try to frame it as When this happens… In our test we might say When NumberGuesser guesses a number… Then This section of our test will check on the state of our variables and any return result if applicable. 
Again, this section should also be fairly straightforward, as there should be only a single action that causes a change in your object under test. The reason for this is that if it takes two actions to form a test, then it is very likely that we will just want to combine the two into a single action that we can describe in terms that are meaningful in our domain. A key example may be loading the training data from a file and training a classifier. If we find ourselves doing this a lot, then why not just create a method that loads data from a file for us? In the book, you will find examples where we'll have helper functions that help us determine whether our results have changed in certain ways. Typically, we should view these helper functions as code smells. Remember that our tests are the first applications of our software. Anything that we have to build in addition to our code, to understand the results, is something that we should probably (there are exceptions to every rule) just include in the code we are testing.

Given, When, Then is not a strong requirement of TDD, because our previous definition of TDD only consisted of two things (write a failing test before any new code, and eliminate duplication). It's a small thing to be passionate about, and if it doesn't speak to you, just translate it back into Arrange, Act, Assert in your head. At the very least, consider it as well as why these specific, very deliberate words are used.

Applied to machine learning

At this point, you may be wondering how TDD will be used in machine learning, and whether we use it on regression or classification problems. In every machine learning algorithm, there exists a way to quantify the quality of what you're doing. In linear regression, it's your adjusted R2 value; in classification problems, it's an ROC curve (and the area beneath it) or a confusion matrix, and more. All of these are testable quantities. Of course, none of these quantities has a built-in way of saying that the algorithm is good enough. We can get around this by starting our work on every problem by first building up a completely naïve and ignorant algorithm. The scores that we get for this will basically represent plain, old, random chance. Once we have built an algorithm that can beat our random chance scores, we just start iterating, attempting to beat the next highest score that we achieve. Benchmarking algorithms is an entire field in its own right that can be explored more deeply. In the book, we will implement a naïve algorithm to get a random chance score, and we will build up a small test suite that we can then use to pit this model against another. This will allow us to have a conversation with our machine learning models in the same manner as we had with Python earlier.

For a professional machine learning developer, it's quite likely that an ideal metric to test is a profitability model that compares risk (monetary exposure) to expected value (profit). This can help us keep a balanced view of how much error and what kind of error we can tolerate. In machine learning, we will never have a perfect model, and we could search for the rest of our lives for the best model. By finding a way to work our financial assumptions into the model, we will have an improved ability to decide between competing models.
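As a minimal sketch of what such a baseline test might look like, consider the following nose-style test. The models, data, and accuracy helper are hypothetical stand-ins, not the book's code; the point is only the shape of the comparison:

from collections import Counter

class NaiveModel:
    """Always predicts the most common label seen during training."""
    def fit(self, labels):
        self.prediction = Counter(labels).most_common(1)[0][0]

    def predict(self, features):
        return self.prediction

class CandidateModel:
    """Stand-in for a 'real' model: predicts 1 whenever the single feature is positive."""
    def fit(self, labels):
        pass

    def predict(self, features):
        return 1 if features[0] > 0 else 0

def candidate_beats_naive_baseline_test():
    # given a tiny labelled dataset and both models trained on it
    data = [((-2,), 0), ((-1,), 0), ((1,), 1), ((2,), 1), ((3,), 1)]
    labels = [label for _, label in data]
    naive, candidate = NaiveModel(), CandidateModel()
    naive.fit(labels)
    candidate.fit(labels)

    # when both models score the same data
    def accuracy(model):
        hits = sum(1 for features, label in data if model.predict(features) == label)
        return hits / float(len(data))

    # then the candidate should beat the naive, random-chance-like score
    assert accuracy(candidate) > accuracy(naive), "Then it should outperform the baseline."

The naive model gives us our baseline score for free; every later model simply has to beat that score for the suite to stay green.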
Summary

In this article, you were introduced to TDD as well as BDD. With these concepts introduced, you have a basic foundation with which to approach machine learning. We saw that specifying behavior in the form of sentences makes for an easier-to-read set of specifications for your software. Building off of that foundation, we started to delve into testing at a higher level. We did this by establishing concepts that we can use to quantify classifiers: the ROC curve and the AUC metric. Now that we've seen that different models can be quantified, it follows that they can be compared. Putting all of this together, we have everything we need to explore machine learning with a test-driven methodology.

Resources for Article:

Further resources on this subject: Optimization in Python [article] How to do Machine Learning with Python [article] Modeling complex functions with artificial neural networks [article]

CSS3 – Selectors and nth Rules

Packt
14 Oct 2015
9 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3 Second Edition, we'll look in detail at pseudo classes, selectors such as the :last-child and nth-child, the nth rules and nth-based selection in responsive web design. CSS3 structural pseudo-classes CSS3 gives us more power to select elements based upon where they sit in the structure of the DOM. Let's consider a common design treatment; we're working on the navigation bar for a larger viewport and we want to have all but the last link over on the left. Historically, we would have needed to solve this problem by adding a class name to the last link so that we could select it, like this: <nav class="nav-Wrapper"> <a href="/home" class="nav-Link">Home</a> <a href="/About" class="nav-Link">About</a> <a href="/Films" class="nav-Link">Films</a> <a href="/Forum" class="nav-Link">Forum</a> <a href="/Contact-Us" class="nav-Link nav-LinkLast">Contact Us</a> </nav> This in itself can be problematic. For example, sometimes, just getting a content management system to add a class to a final list item can be frustratingly difficult. Thankfully, in those eventualities, it's no longer a concern. We can solve this problem and many more with CSS3 structural pseudo-classes. The :last-child selector CSS 2.1 already had a selector applicable for the first item in a list: div:first-child { /* Styles */ } However, CSS3 adds a selector that can also match the last: div:last-child { /* Styles */ } Let's look how that selector could fix our prior problem: @media (min-width: 60rem) { .nav-Wrapper { display: flex; } .nav-Link:last-child { margin-left: auto; } } There are also useful selectors for when something is the only item: :only-child and the only item of a type: :only-of-type. The nth-child selectors The nth-child selectors let us solve even more difficult problems. With the same markup as before, let's consider how nth-child selectors allow us to select any link(s) within the list. Firstly, what about selecting every other list item? We could select the odd ones like this: .nav-Link:nth-child(odd) { /* Styles */ } Or, if you wanted to select the even ones: .nav-Link:nth-child(even) { /* Styles */ } Understanding what nth rules do For the uninitiated, nth-based selectors can look pretty intimidating. However, once you've mastered the logic and syntax you'll be amazed what you can do with them. Let's take a look. CSS3 gives us incredible flexibility with a few nth-based rules: nth-child(n) nth-last-child(n) nth-of-type(n) nth-last-of-type(n) We've seen that we can use (odd) or (even) values already in an nth-based expression but the (n) parameter can be used in another couple of ways: As an integer; for example, :nth-child(2) would select the 
second item As a numeric expression; for example, :nth-child(3n+1) would start at 1 and then select every third element The integer based property is easy enough to understand, just enter the element number you want to select. The numeric expression version of the selector is the part that can be a little baffling for mere mortals. If math is easy for you, I apologize for this next section. For everyone else, let's break it down. Breaking down the math Let's consider 10 spans on a page: <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> By default they will be styled like this: span { height: 2rem; width: 2rem; background-color: blue; display: inline-block; } As you might imagine, this gives us 10 squares in a line: OK, let's look at how we can select different ones with nth-based selections. For practicality, when considering the expression within the parenthesis, I start from the right. So, for example, if I want to figure out what (2n+3) will select, I start with the right-most number (the three here indicates the third item from the left) and know it will select every second element from that point on. So adding this rule: span:nth-child(2n+3) { color: #f90; border-radius: 50%; } Which results in this in the browser: As you can see, our nth selector targets the third list item and then every subsequent second one after that too (if there were 100 list items, it would continue selecting every second one). How about selecting everything from the second item onwards? Well, although you could write :nth-child(1n+2), you don't actually need the first number 1 as unless otherwise stated, n is equal to 1. We can therefore just write :nth-child(n+2). Likewise, if we wanted to select every third element, rather than write :nth-child(3n+3), we can just write :nth-child(3n) as every third item would begin at the third item anyway, without needing to explicitly state it. The expression can also use negative numbers, for example, :nth-child(3n-2) starts at -2 and then selects every third item. You can also change the direction. By default, once the first part of the selection is found, the subsequent ones go down the elements in the DOM (and therefore from left to right in our example). However, you can reverse that with a minus. For example: span:nth-child(-2n+3) { background-color: #f90; border-radius: 50%; } This example finds the third item again, but then goes in the opposite direction to select every two elements (up the DOM tree and therefore from right to left in our example): Hopefully, the nth-based expressions are making perfect sense now? The nth-child and nth-last-child differ in that the nth-last-child variant works from the opposite end of the document tree. For example, :nth-last-child(-n+3) starts at 3 from the end and then selects all the items after it. Here's what that rule gives us in the browser: Finally, let's consider :nth-of-type and :nth-last-of-type. While the previous examples count any children regardless of type (always remember the nth-child selector targets all children at the same DOM level, regardless of classes), :nth-of-type and :nth-last-of-type let you be specific about the type of item you want to select. 
Consider the following markup <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> If we used the selector: .span-class:nth-of-type(-2n+3) { background-color: #f90; border-radius: 50%; } Even though all the elements have the same span-class, we will only actually be targeting the span elements (as they are the first type selected). Here is what gets selected: We will see how CSS4 selectors can solve this issue shortly. CSS3 doesn't count like JavaScript and jQuery! If you're used to using JavaScript and jQuery you'll know that it counts from 0 upwards (zero index based). For example, if selecting an element in JavaScript or jQuery, an integer value of 1 would actually be the second element. CSS3 however, starts at 1 so that a value of 1 is the first item it matches. nth-based selection in responsive web designs Just to close out this little section I want to illustrate a real life responsive web design problem and how we can use nth-based selection to solve it. Let's consider how a horizontal scrolling panel might look in a situation where horizontal scrolling isn't possible. So, using the same markup, let's turn the top 10 grossing films of 2014 into a grid. For some viewports the grid will only be two items wide, as the viewport increases we show three items and at larger sizes still we show four. Here is the problem though. Regardless of the viewport size, we want to prevent any items on the bottom row having a border on the bottom. Here is how it looks with four items wide: See that pesky border below the bottom two items? That's what we need to remove. However, I want a robust solution so that if there were another item on the bottom row, the border would also be removed on that too. Now, because there are a different number of items on each row at different viewports, we will also need to change the nth-based selection at different viewports. For the sake of brevity, I'll show you the selection that matches four items per row (the larger of the viewports). You can view the code sample to see the amended selection at the different viewports. @media (min-width: 55rem) { .Item { width: 25%; } /* Get me every fourth item and of those, only ones that are in the last four items */ .Item:nth-child(4n+1):nth-last-child(-n+4), /* Now get me every one after that same collection too. */ .Item:nth-child(4n+1):nth-last-child(-n+4) ~ .Item { border-bottom: 0; } } You'll notice here that we are chaining the nth-based pseudo-class selectors. It's important to understand that the first doesn't filter the selection for the next, rather the element has to match each of the selections. For our preceding example, the first element has to be the first item of four and also be one of the last four. Nice! Thanks to nth-based selections we have a defensive set of rules to remove the bottom border regardless of the viewport size or number of items we are showing. To learn how to build websites with a responsive and mobile first methodology, allowing a website to display effortlessly on all devices, take a look at Responsive Web Design with HTML5 and CSS3 -Second Edition.

Getting Places

Packt
13 Oct 2015
8 min read
In this article by Nafiul Islam, the author of Mastering Pycharm, we'll learn all about navigation. It is divided into three parts. The first part is called Omni, which deals with getting to anywhere from any place. The second is called Macro, which deals with navigating to places of significance. The third and final part is about moving within a file and it is called Micro. By the end of this article, you should be able to navigate freely and quickly within PyCharm, and use the right tool for the job to do so. Veteran PyCharm users may not find their favorite navigation tool mentioned or explained. This is because the methods of navigation described throughout this article will lead readers to discover their own tools that they prefer over others. (For more resources related to this topic, see here.) Omni In this section, we will discuss the tools that PyCharm provides for a user to go from anywhere to any place. You could be in your project directory one second, the next, you could be inside the Python standard library or a class in your file. These tools are generally slow or at least slower than more precise tools of navigation provided. Back and Forward The Back and Forward actions allow you to move your cursor back to the place where it was previously for more than a few seconds or where you've made edits. This information persists throughout sessions, so even if you exit the IDE, you can still get back to the positions that you were in before you quit. This falls into the Omni category because these two actions could potentially get you from any place within a file to any place within a file in your directory (that you have been to) to even parts of the standard library that you've looked into as well as your third-party Python packages. The Back and Forward actions are perhaps two of my most used navigation actions, and you can use Keymap. Or, one can simply click on the Navigate menu to see the keyboard shortcuts: Macro The difference between Macro and Omni is subtle. Omni allows you to go to the exact location of a place, even a place of no particular significance (say, the third line of a documentation string) in any file. Macro, on the other hand, allows you to navigate anywhere of significance, such as a function definition, class declaration, or particular class method. Go to definition or navigate to declaration Go to definition is the old name for Navigate to Declaration in PyCharm. This action, like the one previously discussed, could lead you anywhere—a class inside your project or a third party library function. What this action does is allow you to go to the source file declaration of a module, package, class, function, and so on. Keymap is once again useful in finding the shortcut for this particular action. Using this action will move your cursor to the file where the class or function is declared, may it be in your project or elsewhere. Just place your cursor on the function or class and invoke the action. Your cursor will now be directly where the function or class was declared. There is, however, a slight problem with this. If one tries to go to the declaration of a .so object, such as the datetime module or the select module, what one will encounter is a stub file (discussed in detail later). These are helper files that allow PyCharm to give you the code completion that it does. Modules that are .so files are indicated by a terminal icon, as shown here: Search Everywhere The action speaks for itself. You search for classes, files, methods, and even actions. 
Universally invoked using double Shift (pressing Shift twice in quick succession), this nifty action looks similar to any other search bar. Search Everywhere searches only inside your project, by default; however, one can also use it to search non-project items as well. Not using this option leads to faster search and a lower memory footprint. Search Everywhere is a gateway to other search actions available in PyCharm. In the preceding screenshot, one can see that Search Everywhere has separate parts, such as Recent Files and Classes. Each of these parts has a shortcut next to their section name. If you find yourself using Search Everywhere for Classes all the time, you might start using the Navigate Class action instead which is much faster. The Switcher tool The Switcher tool allows you to quickly navigate through your currently open tabs, recently opened files as well as all of your panels. This tool is essential since you always navigate between tabs. A star to the left indicates open tabs; everything else is a recently opened or edited file. If you just have one file open, Switcher will show more of your recently opened files. It's really handy this way since almost always the files that you want to go to are options in Switcher. The Project panel The Project panel is what I use to see the structure of my project as well as search for files that I can't find with Switcher. This panel is by far the most used panel of all, and for good reason. The Project panel also supports search; just open it up and start typing to find your file. However, the Project panel can give you even more of an understanding of what your code looks similar to if you have Show Members enabled. Once this is enabled, you can see the classes as well as the declared methods inside your files. Note that search works just like before, meaning that your search is limited to only the files/objects that you can see; if you collapse everything, you won't be able to search either your files or the classes and methods in them. Micro Micro deals with getting places within a file. These tools are perhaps what I end up using the most in my development. The Structure panel The Structure panel gives you a bird's eye view of the file that you are currently have your cursor on. This panel is indispensable when trying to understand a project that one is not familiar with. The yellow arrow indicates the option to show inherited fields and methods. The red arrow indicates the option to show field names, meaning if that it is turned off, you will only see properties and methods. The orange arrow indicates the option to scroll to and from the source. If both are turned on (scroll to and scroll from), where your cursor is will be synchronized with what method, field, or property is highlighted in the structure panel. Inherited fields are grayed out in the display. Ace Jump This is my favorite navigation plugin, and was made by John Lindquist who is a developer at JetBrains (creators of PyCharm). Ace Jump is inspired from the Emacs mode with the same name. It allows you to jump from one place to another within the same file. Before one can use Ace Jump, one has to install the plugin for it. Ace Jump is usually invoked using Ctrl or command + ; (semicolon). You can search for Ace Jump in Keymap as well, and is called Ace Jump. Once invoked, you get a small box in which you can input a letter. Choose a letter from the word that you want to navigate to, and you will see letters on that letter pop up immediately. 
If we were to hit D, the cursor would move to the position indicated by D. This might seem long winded, but it actually leads to really fast navigation. If we wanted to select the word indicated by the letter, then we'd invoke Ace Jump twice before entering a letter. This turns the Ace Jump box red. Upon hitting B, the named parameter rounding will be selected. Often, we don't want to go to a word, but rather the beginning or the end of a line. In order to do this, just hit invoke Ace Jump and then the left arrow for line beginnings or the right arrow for line endings. In this case, we'd just hit V to jump to the beginning of the line that starts with num_type. This is an example, where we hit left arrow instead of the right one, and we get line-ending options. Summary In this article, I discussed some of the best tools for navigation. This is by no means an exhaustive list. However, these tools will serve as a gateway to more precise tools available for navigation in PyCharm. I generally use Ace Jump, Back, Forward, and Switcher the most when I write code. The Project panel is always open for me, with the most used files having their classes and methods expanded for quick search. Resources for Article: Further resources on this subject: Enhancing Your Blog with Advanced Features [article] Adding a developer with Django forms [article] Deployment and Post Deployment [article]

Transactions and Operators

Packt
13 Oct 2015
14 min read
In this article by Emilien Kenler and Federico Razzoli, authors of the book MariaDB Essentials, transactions and operators are explained in brief.

(For more resources related to this topic, see here.)

Understanding transactions

A transaction is a sequence of SQL statements that are grouped into a single logical operation. Its purpose is to guarantee the integrity of data. If a transaction fails, no change will be applied to the databases. If a transaction succeeds, all the statements will succeed. Take a look at the following example:

START TRANSACTION;
SELECT quantity FROM product WHERE id = 42;
UPDATE product SET quantity = quantity - 10 WHERE id = 42;
UPDATE customer SET money = money - (SELECT price FROM product WHERE id = 42) WHERE id = 512;
INSERT INTO product_order (product_id, quantity, customer_id) VALUES (42, 10, 512);
COMMIT;

We haven't yet discussed some of the statements used in this example. However, they are not important to understand transactions. This sequence of statements occurs when a customer (whose id is 512) orders a product (whose id is 42). As a consequence, we need to execute the following suboperations in our database:

Check whether the desired quantity of the product is available. If not, we should not proceed.
Decrease the available quantity of items for the product that is being bought.
Decrease the amount of money in the online account of our customer.
Register the order so that the product is delivered to our customer.

These suboperations form a more complex operation. When a session is executing this operation, we do not want other connections to interfere. Consider the following scenario:

Connection A checks how many products with the ID 42 are available. Only one is available, but it is enough.
Immediately after, connection B checks the availability of the same product. It finds that one is available.
Connection A decreases the quantity of the product. Now, it is 0.
Connection B decreases the same number. Now, it is -1.
Both connections create an order.

Two persons will pay for the same product; however, only one is available. This is something we definitely want to avoid. However, there is another situation that we want to avoid. Imagine that the server crashes immediately after the customer's money is deducted. The order will not be written to the database, so the customer will end up paying for something he will not receive. Fortunately, transactions prevent both these situations. They protect our database writes in two ways:

During a transaction, relevant data is locked or copied. In both these cases, two connections will not be able to modify the same rows at the same time.
The writes will not be made effective until the COMMIT command is issued. This means that if the server crashes during the transaction, all the suboperations will be rolled back. We will not have inconsistent data (such as a payment for a product that will not be delivered).

In this example, the transaction starts when we issue the START TRANSACTION command. Then, any number of operations can be performed. The COMMIT command makes the changes effective. This does not mean that if a statement fails with an error, the transaction is always aborted. In many cases, the application will receive an error and will be free to decide whether the transaction should be aborted or not. To abort the current transaction, an application can execute the ROLLBACK command.
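As a short sketch of ROLLBACK in action, reusing the product table from the example above purely for illustration, an application that notices a problem part-way through can simply abandon everything it has done since START TRANSACTION:

START TRANSACTION;
UPDATE product SET quantity = quantity - 10 WHERE id = 42;
-- the application now checks the new quantity, finds it would go negative,
-- and abandons the whole operation instead of committing it
ROLLBACK;
-- the UPDATE is undone; the row is exactly as it was before the transaction
SELECT quantity FROM product WHERE id = 42;

None of the statements inside the transaction become visible to other connections, and nothing is written permanently.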
A transaction can consist of only one statement. This perfectly makes sense because the server could crash in the middle of the statement's execution.

The autocommit mode

In many cases, we don't want to group multiple statements in a transaction. When a transaction consists of only one statement, sending the START TRANSACTION and COMMIT statements can be annoying. For this reason, MariaDB has the autocommit mode. By default, the autocommit mode is ON. Unless a START TRANSACTION command is explicitly used, the autocommit mode causes an implicit commit after each statement. Thus, every statement is executed in a separate transaction by default. When the autocommit mode is OFF, a new transaction implicitly starts after each commit, and the COMMIT command needs to be issued explicitly. To turn autocommit ON or OFF, we can use the @@autocommit server variable as follows:

MariaDB [mwa]> SET @@autocommit = OFF;
Query OK, 0 rows affected (0.00 sec)

MariaDB [mwa]> SELECT @@autocommit;
+--------------+
| @@autocommit |
+--------------+
|            0 |
+--------------+
1 row in set (0.00 sec)

Transactions' limitations in MariaDB

Transaction handling is not implemented in the core of MariaDB; instead, it is left to the storage engines. Many storage engines, such as MyISAM or MEMORY, do not implement it at all. Some of the transactional storage engines are:

InnoDB
XtraDB
TokuDB

In a sense, Aria tables are partially transactional. Although Aria ignores commands such as START TRANSACTION, COMMIT, and ROLLBACK, each statement is somewhat a transaction. In fact, if it writes, modifies, or deletes multiple rows, the operation completely succeeds or fails, which is similar to a transaction.

Only statements that modify data can be used in a transaction. Statements that modify a table structure (such as ALTER TABLE) implicitly commit the current transaction.

Sometimes, we may not be sure whether a transaction is active or not. Usually, this happens because we are not sure whether autocommit is set to ON, or because we are not sure whether the latest statement implicitly committed a transaction. In these cases, the @@in_transaction variable can help us. Its value is 1 if a transaction is active and 0 if it is not. Here is an example:

MariaDB [mwa]> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

MariaDB [mwa]> SELECT @@in_transaction;
+------------------+
| @@in_transaction |
+------------------+
|                1 |
+------------------+
1 row in set (0.00 sec)

MariaDB [mwa]> DROP TABLE IF EXISTS t;
Query OK, 0 rows affected, 1 warning (0.00 sec)

MariaDB [mwa]> SELECT @@in_transaction;
+------------------+
| @@in_transaction |
+------------------+
|                0 |
+------------------+
1 row in set (0.00 sec)

InnoDB is optimized to execute a huge number of short transactions. If our databases are busy and performance is important to us, we should try to avoid big transactions in terms of the number of statements and execution time. This is particularly true if we have several concurrent connections that read the same tables.

Working with operators

In our examples, we have used several operators, such as equals (=), less-than and greater-than (<, >), and so on. Now, it is time to discuss operators in general and list the most important ones. In general, an operator is a sign that takes one or more operands and returns a result. Several groups of operators exist in MariaDB. In this article, we will discuss the main types:

Comparison operators
String operators
Logical operators
Arithmetic operators

A small example that combines several of these families is shown below, before we look at each type in turn.
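The product table and its name column here are reused from the earlier example purely as a hypothetical illustration:

SELECT id, name, price * 1.2 AS price_with_tax      -- arithmetic operator
FROM product
WHERE price BETWEEN 100 AND 200                     -- comparison operator
  AND name LIKE 'USB%'                              -- string operator: starts with 'USB'
  AND (quantity > 0 OR id IN (42, 512));            -- logical operators combining comparisons

Each piece of the WHERE clause returns 1, 0, or NULL, and the logical operators combine those truth values to decide which rows are shown.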
Comparison operators

A comparison operator checks whether there is a certain relation between its operands. If the relationship exists, the operator returns 1; otherwise, it returns 0. For example, let's take the equality operator, which is probably the most used:

1 = 1 -- returns 1: the equality relationship exists
1 = 0 -- returns 0: no equality relationship here

In MariaDB, 1 and 0 are used in many contexts to indicate whether something is true or false. In fact, MariaDB does not have a Boolean data type, so TRUE and FALSE are merely used as aliases for 1 and 0:

TRUE = 1 -- returns 1
FALSE = 0 -- returns 1
TRUE = FALSE -- returns 0

In a WHERE clause, a result of 0 or NULL prevents a row from being shown. All numeric results other than 0, including negative numbers, are regarded as true in this context. Non-numeric values other than NULL need to be converted to numbers in order to be evaluated by the WHERE clause. Non-numeric strings are converted to 0, whereas numeric strings are treated as numbers. Dates are converted to nonzero numbers. Consider the following example:

WHERE 1 -- is redundant; it shows all the rows
WHERE 0 -- prevents all the rows from being shown

Now, let's take a look at the following MariaDB comparison operators:

=  Equality. Example: A = B
!=  Inequality. Example: A != B
<>  Synonym for !=. Example: A <> B
<  Less than. Example: A < B
>  Greater than. Example: A > B
<=  Less than or equal to. Example: A <= B
>=  Greater than or equal to. Example: A >= B
IS NULL  The operand is NULL. Example: A IS NULL
IS NOT NULL  The operand is not NULL. Example: A IS NOT NULL
<=>  The operands are equal, or both are NULL. Example: A <=> B
BETWEEN ... AND  The left operand is within a range of values. Example: A BETWEEN B AND C
NOT BETWEEN ... AND  The left operand is outside the specified range. Example: A NOT BETWEEN B AND C
IN  The left operand is one of the items in a given list. Example: A IN (B, C, D)
NOT IN  The left operand is not in the given list. Example: A NOT IN (B, C, D)

Here are a couple of examples:

SELECT id FROM product WHERE price BETWEEN 100 AND 200;
DELETE FROM product WHERE id IN (100, 101, 102);

Special attention should be paid to NULL values. Almost all the preceding operators return NULL if any of their operands is NULL. The reason is quite clear: as NULL represents an unknown value, any operation involving a NULL operand returns an unknown result. However, there are some operators specifically designed to work with NULL values. IS NULL and IS NOT NULL check whether the operand is NULL. The <=> operator is a shortcut for the following code:

a = b OR (a IS NULL AND b IS NULL)

String operators

MariaDB supports certain comparison operators that are specifically designed to work with string values. This does not mean that the other operators do not work well with strings. For example, A = B works perfectly if A and B are strings. However, some particular comparisons only make sense with text values. Let's take a look at them.

The LIKE operator and its variants

This operator is often used to check whether a string starts with a given sequence of characters, ends with that sequence, or contains the sequence. More generally, LIKE checks whether a string follows a given pattern.
Its syntax is:

<string_value> LIKE <pattern>

The pattern is a string that can contain the following wildcard characters:

_ (underscore): this matches any single character
% (percent): this matches any sequence of 0 or more characters

There is also a way to include these characters without their special meaning: the escaped sequences \_ and \% represent the literal _ and % characters, respectively. For example, take a look at the following expressions:

my_text LIKE 'h_'
my_text LIKE 'h%'

The first expression returns 1 for 'hi', 'ha', or 'ho', but not for 'hey'. The second expression returns 1 for all of these strings, including 'hey'. By default, LIKE is case insensitive, meaning that 'abc' LIKE 'ABC' returns 1. Thus, it can be used to perform a case-insensitive equality check. To make LIKE case sensitive, the BINARY keyword can be used:

my_text LIKE BINARY your_text

The complement of LIKE is NOT LIKE, as shown in the following code:

<string_value> NOT LIKE <pattern>

Here are the most common uses for LIKE:

my_text LIKE 'my%'  -- does my_text start with 'my'?
my_text LIKE '%my'  -- does my_text end with 'my'?
my_text LIKE '%my%' -- does my_text contain 'my'?

More complex uses are possible for LIKE. For example, the following expression can be used to check whether mail is a valid e-mail address:

mail LIKE '_%@_%.__%'

The preceding code snippet checks whether mail contains at least one character, a '@' character, at least one character, a dot, and at least two characters, in this order. In most cases, an invalid e-mail address will not pass this test.

Using regular expressions with the REGEXP operator and its variants

Regular expressions are string patterns that contain metacharacters with special meanings, used to perform match operations and determine whether a given string matches a given pattern. The REGEXP operator is somewhat similar to LIKE: it checks whether a string matches a given pattern. However, REGEXP uses regular expressions with the syntax defined by the POSIX standard. Basically, this means that:

Many developers, but not all, already know the syntax
REGEXP uses a very expressive syntax, so the patterns can be much more complex and detailed
REGEXP is much slower than LIKE, so LIKE should be preferred when possible

The regular expression syntax is a complex topic that cannot be covered in this article. Developers can learn about regular expressions at www.regular-expressions.info. The complement of REGEXP is NOT REGEXP.

Logical operators

Logical operators can be used to combine truth expressions into a compound expression that can be true, false, or NULL. Depending on the truth values of its operands, a logical operator returns 1 or 0. MariaDB supports the following logical operators: NOT, AND, OR, and XOR.

The NOT operator

NOT is the only logical operator that takes one operand. It inverts its truth value. If the operand is true, NOT returns 0, and if the operand is false, NOT returns 1. If the operand is NULL, NOT returns NULL. Here is an example:

NOT 1          -- returns 0
NOT 0          -- returns 1
NOT 1 = 1      -- returns 0
NOT 1 = NULL   -- returns NULL
NOT 1 <=> NULL -- returns 0

The AND operator

AND returns 1 if both of its operands are true and 0 in all other cases. Here is an example:

1 AND 1 -- returns 1
0 AND 1 -- returns 0
0 AND 0 -- returns 0

The OR operator

OR returns 1 if at least one of its operands is true, or 0 if both operands are false. Here is an example:

1 OR 1 -- returns 1
0 OR 1 -- returns 1
0 OR 0 -- returns 0

The XOR operator

XOR stands for eXclusive OR.
It is the least used logical operator. It returns 1 if exactly one of its operands is true, and 0 if both operands are true or both are false. Take a look at the following example:

1 XOR 1 -- returns 0
1 XOR 0 -- returns 1
0 XOR 1 -- returns 1
0 XOR 0 -- returns 0

A XOR B is the equivalent of the following expression:

(A OR B) AND NOT (A AND B)

Or:

(NOT A AND B) OR (A AND NOT B)

Arithmetic operators

MariaDB supports the operators that are necessary to execute all the basic arithmetic operations. The supported arithmetic operators are:

+ for addition
- for subtraction
* for multiplication
/ for division

Remember that, depending on the MariaDB configuration, a division by 0 either raises an error or returns NULL. In addition, two more operators are useful for divisions:

DIV: This returns the integer part of a division, without any decimal part or remainder
MOD or %: This returns the remainder of a division

Here is an example:

MariaDB [(none)]> SELECT 20 DIV 3 AS int_part, 20 MOD 3 AS modulus;
+----------+---------+
| int_part | modulus |
+----------+---------+
|        6 |       2 |
+----------+---------+
1 row in set (0.00 sec)

Operator precedence

MariaDB does not blindly evaluate expressions from left to right. Every operator has a given precedence. An operator that is evaluated before another one is said to have a higher precedence. In general, arithmetic and string operators have a higher precedence than logical operators. The precedence of the arithmetic operators reflects their precedence in common mathematical expressions. It is very important to remember the precedence of the logical operators (from the highest to the lowest):

NOT
AND
XOR
OR

MariaDB supports many operators, and we did not discuss all of them. Also, the exact precedence can vary slightly depending on the MariaDB configuration. The complete precedence list can be found in the MariaDB KnowledgeBase, at https://mariadb.com/kb/en/mariadb/documentation/functions-and-operators/operator-precedence/. Parentheses can be used to force MariaDB to follow a certain order. They are also useful when we do not remember the exact precedence of the operators that we use, as shown in the following code:

(NOT (a AND b)) OR c OR d

Summary

In this article, you learned about basic transactions and operators.

Resources for Article:

Further resources on this subject:

Set Up MariaDB [Article]
Installing MariaDB on Windows and Mac OS X [Article]
Building a Web Application with PHP and MariaDB – Introduction to caching [Article]
Using Protocols and Protocol Extensions

Packt
13 Oct 2015
22 min read
In this article by Jon Hoffman, the author of Mastering Swift 2, we'll see how protocols are used as a type, how we can implement polymorphism in Swift using protocols, how to use protocol extensions, and why we would want to use protocol extensions. (For more resources related to this topic, see here.) While watching the presentations from WWDC 2015 about protocol extensions and protocol-oriented programming, I will admit that I was very skeptical. I have worked with object-oriented programming for so long that I was unsure if this new programming paradigm would solve all of the problems that Apple was claiming it would. Since I am not one to let my skepticism get in the way of trying something new, I set up a new project that mirrored the one I was currently working on, but wrote the code using Apple's recommendations for protocol-oriented programming and used protocol extensions extensively in the code. I can honestly say that I was amazed with how much cleaner the new project was compared to the original one. I believe that protocol extensions is going to be one of those defining features that set one programming language apart from the rest. I also believe that many major languages will soon have similar features. Protocol extensions are the backbone for Apple's new protocol-oriented programming paradigm and is arguably one of the most important additions to the Swift programming language. With protocol extensions, we are able to provide method and property implementations to any type that conforms to a protocol. To really understand how useful protocols and protocol extensions are, let's get a better understanding of protocols. While classes, structs, and enums can all conform to protocols in Swift, for this article, we will be focusing on classes and structs. Enums are used when we need to represent a finite number of cases and while there are valid use cases where we would have an enum conform to a protocol, they are very rare in my experience. Just remember that anywhere that we refer to a class or struct, we can also use an enum. Let's begin exploring protocols by seeing how they are full-fledged types in Swift. Protocol as a type Even though no functionality is implemented in a protocol, they are still considered a full-fledged type in the Swift programming language and can be used like any other type. What this means is we can use protocols as a parameter type or a return type in a function. We can also use them as the type for variables, constants, and collections. Let's take a look at some examples. For these few examples, we will use the PersonProtocol protocol: protocol PersonProtocol { var firstName: String {get set} var lastName: String {get set} var birthDate: NSDate {get set} var profession: String {get} init (firstName: String, lastName: String, birthDate: NSDate) } In this first example, we will see how we would use protocols as a parameter type or return type in functions, methods, or initializers: func updatePerson(person: PersonProtocol) -> PersonProtocol { // Code to update person goes here return person } In this example, the updatePerson() function accepts one parameter of the PersonProtocol protocol type and then returns a value of the PersonProtocol protocol type. Now let's see how we can use protocols as a type for constants, variables, or properties: var myPerson: PersonProtocol In this example, we create a variable of the PersonProtocol protocol type that is named myPerson. 
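The examples in the rest of this section use two types named SwiftProgrammer and FootballPlayer that conform to PersonProtocol. Their declarations are not shown in the article, so here is one minimal sketch of what they might look like; the profession strings, and the choice of making one a struct and the other a final class, are assumptions made only for illustration:

import Foundation

struct SwiftProgrammer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: NSDate
    var profession: String { return "Swift Programmer" }

    init(firstName: String, lastName: String, birthDate: NSDate) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}

// A final class can satisfy the protocol's initializer requirement
// without marking the initializer as required.
final class FootballPlayer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: NSDate
    var profession: String { return "Football Player" }

    init(firstName: String, lastName: String, birthDate: NSDate) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}

With declarations along these lines, the myPerson and people examples that follow compile as written, regardless of which type is a struct and which is a class.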
We can also use protocols as the item type to store in collection such as arrays, dictionaries, or sets: var people: [PersonProtocol] = [] In this final example, we create an array of PersonProtocol protocol types. As we can see from these three examples, even though the PersonProtocol protocol does not implement any functionality, we can still use protocols when we need to specify a type. We cannot, however, create an instance of a protocol. This is because no functionality is implemented in a protocol. As an example, if we tried to create an instance of the PersonProtocol protocol, we would be receiving the error: protocol type 'PersonProtocol' cannot be instantiated error, as shown in the following example: var test = PersonProtocol(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) We can use the instance of any class or struct that conforms to our protocol anywhere that the protocol type is required. As an example, if we defined a variable to be of the PersonProtocol protocol type, we could then populate that variable with any class or struct that conforms to the PersonProtocol protocol. For this example, let's assume that we have two types named SwiftProgrammer and FootballPlayer, which conform to the PersonProtocol protocol: var myPerson: PersonProtocol myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) print("(myPerson.firstName) (myPerson.lastName)") myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) print("(myPerson.firstName) (myPerson.lastName)") In this example, we start off by creating the myPerson variable of the PersonProtocol protocol type. We then set the variable with an instance of the SwiftProgrammer type and print out the first and last names. Next, we set the myPerson variable to an instance of the FootballPlayer type and print out the first and last names again. One thing to note is that Swift does not care if the instance is a class or struct. It only matters that the type conforms to the PersonProtocol protocol type. Therefore, if our SwiftProgrammer type was a struct and the FootballPlayer type was a class, our previous example would be perfectly valid. As we saw earlier, we can use our PersonProtocol protocol as the type for an array. This means that we can populate the array with instances of any type that conforms to the PersonProtocol protocol. Once again, it does not matter if the type is a class or a struct as long as it conforms to the PersonProtocol protocol. Here is an example of this: var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) var people: [PersonProtocol] = [] people.append(programmer) people.append(player) In this example, we create an instance of the SwiftProgrammer type and an instance of the FootballPlayer type. We then add both instances to the people array. Polymorphism with protocols What we were seeing in the previous examples is a form of Polymorphism. The word polymorphism comes from the Greek roots Poly, meaning many and morphe, meaning form. In programming languages, polymorphism is a single interface to multiple types (many forms). In the previous example, the single interface was the PersonProtocol protocol and the multiple types were any type that conforms to that protocol. Polymorphism gives us the ability to interact with multiple types in a uniform manner. 
To illustrate this, we can extend our previous example where we created an array of the PersonProtocol types and loop through the array. We can then access each item in the array using the properties and methods define in the PersonProtocol protocol, regardless of the actual type. Let's see an example of this: for person in people { print("(person.firstName) (person.lastName): (person.profession)") } If we ran this example, the output would look similar to this: Jon Hoffman: Swift Programmer Dan Marino: Football Player We have mentioned a few times in this article that when we define the type of a variable, constant, collection type, and so on to be a protocol type, we can then use the instance of any type that conforms to that protocol. This is a very important concept to understand and it is what makes protocols and protocol extensions so powerful. When we use a protocol to access instances, as shown in the previous example, we are limited to using only properties and methods that are defined in the protocol. If we want to use properties or methods that are specific to the individual types, we would need to cast the instance to that type. Type casting with protocols Type casting is a way to check the type of the instance and/or to treat the instance as a specified type. In Swift, we use the is keyword to check if an instance is a specific type and the as keyword to treat the instance as a specific type. To start with, let's see how we would check the instance type using the is keyword. The following example shows how would we do this: for person in people { if person is SwiftProgrammer { print("(person.firstName) is a Swift Programmer") } } In this example, we use the if conditional statement to check whether each element in the people array is an instance of the SwiftProgrammer type and if so, we print that the person is a Swift programmer to the console. While this is a good method to check whether we have an instance of a specific class or struct, it is not very efficient if we wanted to check for multiple types. It is a lot more efficient to use the switch statement, as shown in the next example, if we want to check for multiple types: for person in people { switch (person) { case is SwiftProgrammer: print("(person.firstName) is a Swift Programmer") case is FootballPlayer: print("(person.firstName) is a Football Player") default: print("(person.firstName) is an unknown type") } } In the previous example, we showed how to use the switch statement to check the instance type for each element of the array. To do this check, we use the is keyword in each of the case statements in an attempt to match the instance type. We can also use the where statement with the is keyword to filter the array, as shown in the following example: for person in people where person is SwiftProgrammer { print("(person.firstName) is a Swift Programmer") } Now let's look at how we can cast an instance of a class or struct to a specific type. To do this, we can use the as keyword. Since the cast can fail if the instance is not of the specified type, the as keyword comes in two forms: as? and as!. With the as? form, if the casting fails, it returns a nil, and with the as! form, if the casting fails, we get a runtime error; therefore, it is recommended to use the as? form unless we are absolutely sure of the instance type or we perform a check of the instance type prior to doing the cast. Let's look at how we would use the as? 
keyword to cast an instance of a class or struct to a specified type: for person in people { if let p = person as? SwiftProgrammer { print("(person.firstName) is a Swift Programmer") } } Since the as? keyword returns an optional, we can use optional binding to perform the cast, as shown in this example. If we are sure of the instance type, we can use the as! keyword. The following example shows how to use the as! keyword when we filter the results of the array to only return instances of the SwiftProgrammer type: for person in people where person is SwiftProgrammer { let p = person as! SwiftProgrammer } Now that we have covered the basics of protocols, that is, how polymorphism works and type casting, let's dive into one of the most exciting new features of Swift protocol extensions. Protocol extensions Protocol extensions allow us to extend a protocol to provide method and property implementations to conforming types. It also allows us to provide common implementations to all the confirming types eliminating the need to provide an implementation in each individual type or the need to create a class hierarchy. While protocol extensions may not seem too exciting, once you see how powerful they really are, they will transform the way you think about and write code. Let's begin by looking at how we would use protocol extension with a very simplistic example. We will start off by defining a protocol called DogProtocol as follows: protocol DogProtocol { var name: String {get set} var color: String {get set} } With this protocol, we are saying that any type that conforms to the DogProtocol protocol, must have the two properties of the String type, namely, name and color. Now let's define the three types that conform to this protocol. We will name these types JackRussel, WhiteLab, and Mutt as follows: struct JackRussel: DogProtocol { var name: String var color: String } class WhiteLab: DogProtocol { var name: String var color: String init(name: String, color: String) { self.name = name self.color = color } } struct Mutt: DogProtocol { var name: String var color: String } We purposely created the JackRussel and Mutt types as structs and the WhiteLab type as a class to show the differences between how the two types are set up and to illustrate how they are treated in the same way when it comes to protocols and protocol extensions. The biggest difference that we can see in this example is the struct types provide a default initiator, but in the class, we must provide the initiator to populate the properties. Now let's say that we want to provide a method named speak to each type that conforms to the DogProtocol protocol. Prior to protocol extensions, we would start off by adding the method definition to the protocol, as shown in the following code: protocol DogProtocol { var name: String {get set} var color: String {get set} func speak() -> String } Once the method is defined in the protocol, we would then need to provide an implementation of the method in every type that conform to the protocol. Depending on the number of types that conformed to this protocol, this could take a bit of time to implement. 
The following code sample shows how we might implement this method: struct JackRussel: DogProtocol { var name: String var color: String func speak() -> String { return "Woof Woof" } } class WhiteLab: DogProtocol { var name: String var color: String init(name: String, color: String) { self.name = name self.color = color } func speak() -> String { return "Woof Woof" } } struct Mutt: DogProtocol { var name: String var color: String func speak() -> String { return "Woof Woof" } } While this method works, it is not very efficient because anytime we update the protocol, we would need to update all the types that conform to it and we may be duplicating a lot of code, as shown in this example. Another concern is, if we need to change the default behavior of the speak() method, we would have to go in each implementation and change the speak() method. This is where protocol extensions come in. With protocol extensions, we could take the speak() method definition out of the protocol itself and define it with the default behavior, in protocol extension. The following code shows how we would define the protocol and the protocol extension: protocol DogProtocol { var name: String {get set} var color: String {get set} } extension DogProtocol { func speak() -> String { return "Woof Woof" } } We begin by defining DogProtocol with the original two properties. We then create a protocol extension that extends DogProtocol and contains the default implementation of the speak() method. With this code, there is no need to provide an implementation of the speak() method in each of the types that conform to DogProtocol because they automatically receive the implementation as part of the protocol. Let's see how this works by setting our three types that conform to DogProtocol back to their original implementations and they should receive the speak() method from the protocol extension: struct JackRussel: DogProtocol { var name: String var color: String } class WhiteLab: DogProtocol { var name: String var color: String init(name: String, color: String) { self.name = name self.color = color } } struct Mutt: DogProtocol { var name: String var color: String } We can now use each of the types as shown in the following code: let dash = JackRussel(name: "Dash", color: "Brown and White") let lily = WhiteLab(name: "Lily", color: "White") let buddy = Mutt(name: "Buddy", color: "Brown") let dSpeak = dash.speak() // returns "woof woof" let lSpeak = lily.speak() // returns "woof woof" let bSpeak = buddy.speak() // returns "woof woof" As we can see in this example, by adding the speak() method to the DogProtocol protocol extension, we are automatically adding that method to all the types that conform to DogProtocol. The speak() method in the DogProtocol protocol extension can be considered a default implementation of the speak() method because we are able to override it in the type implementations. As an example, we could override the speak() method in the Mutt struct, as shown in the following code: struct Mutt: DogProtocol { var name: String var color: String func speak() -> String { return "I am hungry" } } When we call the speak() method for an instance of the Mutt type, it will return the string, "I am hungry". Now that we have seen how we would use protocols and protocol extensions, let's look at a more real-world example. In numerous apps, across multiple platforms (iOS, Android, and Windows), I have had the requirement to validate user input as it is entered. 
This validation can be done very easily with regular expressions; however, we do not want various regular expressions littered through out our code. It is very easy to solve this problem by creating different classes or structs that contains the validation code; however, we would have to organize these classes to make them easy to use and maintain. Prior to protocol extensions in Swift, I would use protocols to define the validation requirements and then create a struct that would conform to the protocol for each validation that I needed. Let's take a look at this preprotocol extension method. A regular expression is a sequence of characters that define a particular pattern. This pattern can then be used to search a string to see whether the string matches the pattern or contains a match of the pattern. Most major programming languages contain a regular expression parser, and if you are not familiar with regular expressions, it may be worth to learn more about them. The following code shows the TextValidationProtocol protocol that defines the requirements for any type that we want to use for text validation: protocol TextValidationProtocol { var regExMatchingString: String {get} var regExFindMatchString: String {get} var validationMessage: String {get} func validateString(str: String) -> Bool func getMatchingString(str: String) -> String? } In this protocol, we define three properties and two methods that any type that conforms to TextValidationProtocol must implement. The three properties are: regExMatchingString: This is a regular expression string used to verify that the input string contains only valid characters. regExFindMatchString: This is a regular expression string used to retrieve a new string from the input string that contains only valid characters. This regular expression is generally used when we need to validate the input real time, as the user enters information, because it will find the longest matching prefix of the input string. validationMessage: This is the error message to display if the input string contains non-valid characters. The two methods for this protocol are as follows: validateString: This method will return true if the input string contains only valid characters. The regExMatchingString property will be used in this method to perform the match. getMatchingString: This method will return a new string that contains only valid characters. This method is generally used when we need to validate the input real time as the user enters information because it will find the longest matching prefix of the input string. We will use the regExFindMatchString property in this method to retrieve the new string. Now let's see how we would create a struct that conforms to this protocol. The following struct would be used to verify that the input string contains only alpha characters: struct AlphaValidation1: TextValidationProtocol { static let sharedInstance = AlphaValidation1() private init(){} let regExFindMatchString = "^[a-zA-Z]{0,10}" let validationMessage = "Can only contain Alpha characters" var regExMatchingString: String { get { return regExFindMatchString + "$" } } func validateString(str: String) -> Bool { if let _ = str.rangeOfString(regExMatchingString, options: .RegularExpressionSearch) { return true } else { return false } } func getMatchingString(str: String) -> String? 
{ if let newMatch = str.rangeOfString(regExFindMatchString, options: .RegularExpressionSearch) { return str.substringWithRange(newMatch) } else { return nil } } } In this implementation, the regExFindMatchString and validationMessage properties are stored properties, and the regExMatchingString property is a computed property. We also implement the validateString() and getMatchingString() methods within the struct. Normally, we would have several different types that conform to TextValidationProtocol where each one would validate a different type of input. As we can see from the AlphaValidation1 struct, there is a bit of code involved with each validation type. A lot of the code would also be duplicated in each type. The code for both methods (validateString() and getMatchingString()) and the regExMatchingString property would be duplicated in every validation class. This is not ideal, but if we wanted to avoid creating a class hierarchy with a super class that contains the duplicate code (I personally prefer using value types over classes), we would have no other choice. Now let's see how we would implement this using protocol extensions. With protocol extensions we need to think about the code a little differently. The big difference is, we do not need, nor want to define everything in the protocol. With standard protocols or when we use class hierarchy, all the methods and properties that you would want to access using the generic superclass or protocol would have to be defined within the superclass or protocol. With protocol extensions, it is preferred for us not to define a property or method in the protocol if we are going to be defining it within the protocol extension. Therefore, when we rewrite our text validation types with protocol extensions, TextValidationProtocol would be greatly simplified to look similar to this: protocol TextValidationProtocol { var regExFindMatchString: String {get} var validationMessage: String {get} } In original TextValidationProtocol, we defined three properties and two methods. As we can see in this new protocol, we are only defining two properties. Now that we have our TextValidationProtocol defined, let's create the protocol extension for it: extension TextValidationProtocol { var regExMatchingString: String { get { return regExFindMatchString + "$" } } func validateString(str: String) -> Bool { if let _ = str.rangeOfString(regExMatchingString, options: .RegularExpressionSearch) { return true } else { return false } } func getMatchingString(str: String) -> String? { if let newMatch = str.rangeOfString(regExFindMatchString, options: .RegularExpressionSearch) { return str.substringWithRange(newMatch) } else { return nil } } } In the TextValidationProtocol protocol extension, we define the two methods and the third property that were defined in original TextValidationProtocol, but were not defined in the new one. Now that we have created our protocol and protocol extension, we are able to define our text validation types. 
In the following code, we define three structs that we will use to validate text when a users types it in: struct AlphaValidation: TextValidationProtocol { static let sharedInstance = AlphaValidation() private init(){} let regExFindMatchString = "^[a-zA-Z]{0,10}" let validationMessage = "Can only contain Alpha characters" } struct AlphaNumericValidation: TextValidationProtocol { static let sharedInstance = AlphaNumericValidation() private init(){} let regExFindMatchString = "^[a-zA-Z0-9]{0,15}" let validationMessage = "Can only contain Alpha Numeric characters" } struct DisplayNameValidation: TextValidationProtocol { static let sharedInstance = DisplayNameValidation() private init(){} let regExFindMatchString = "^[\s?[a-zA-Z0-9\-_\s]]{0,15}" let validationMessage = "Display Name can contain only contain Alphanumeric Characters" } In each one of the text validation structs, we create a static constant and a private initiator so that we can use the struct as a singleton. After we define the singleton pattern, all we do in each type is set the values for the regExFindMatchString and validationMessage properties. Now we have not duplicated the code virtually because even if we could, we would not want to define the singleton code in the protocol extension because we would not want to force that pattern on all the conforming types. To use the text validation classes, we would want to create a dictionary object that would map the UITextField objects to the validation class to use it like this: var validators = [UITextField: TextValidationProtocol]() We could then populate the validators dictionary as shown here: validators[alphaTextField] = AlphaValidation.sharedInstance validators[alphaNumericTextField] = AlphaNumericValidation.sharedInstance validators[displayNameTextField] = DisplayNameValidation.sharedInstance We can now set the EditingChanged event of the text fields to a single method named keyPressed(). To set the edition changed event for each field, we would add the following code to the viewDidLoad() method of our view controller: alphaTextField.addTarget(self, action:Selector("keyPressed:"), forControlEvents: UIControlEvents.EditingChanged) alphaNumericTextField.addTarget(self, action: Selector("keyPressed:"), forControlEvents: UIControlEvents.EditingChanged) displayNameTextField.addTarget(self, action: Selector("keyPressed:"), forControlEvents: UIControlEvents.EditingChanged) Now lets create the keyPressed() method that each text field calls when a user types a character into the field: @IBAction func keyPressed(textField: UITextField) { if let validator = validators[textField] where !validator.validateString(textField.text!) { textField.text = validator.getMatchingString(textField.text!) messageLabel?.text = validator.validationMessage } } In this method, we use the if let validator = validators[textField] statement to retrieve the validator for the particular text field and then we use the where !validator.validateString(textField.text!) statement to validate the string that the user has entered. If the string fails validation, we use the getMatchingString() method to update the text in the text field by removing all the characters from the input string, starting with the first invalid character and then displaying the error message from the text validation class. If the string passes validation, the text in the text field is left unchanged. Summary In this article, we saw that protocols are treated as full-fledged types by Swift. 
We also saw how polymorphism can be implemented in Swift with protocols. We concluded this article with an in-depth look at protocol extensions and saw how we would use them in Swift. Protocols and protocol extensions are the backbone of Apple's new protocol-oriented programming paradigm. This new programming model has the potential to change the way we write and think about code. While we did not specifically cover protocol-oriented programming in this article, the topics covered here give us the solid grounding in protocols and protocol extensions that is needed to learn this new programming model.

Resources for Article:

Further resources on this subject:

Using OpenStack Swift [Article]
Dragging a CCNode in Cocos2D-Swift [Article]
Installing OpenStack Swift [Article]
Templates for Web Pages

Packt
13 Oct 2015
13 min read
In this article, by Kai Nacke, author of the book D Web Development, we will learn that every website has some recurring elements, often called a theme. Templates are an easy way to define these elements only once and then reuse them. A template engine is included in vibe.dwith the so-called Diet templates. The template syntax is based on the Jade templates (http://jade-lang.com/), which you might already know about. In this article, you will learn the following: Why are templates useful Key concepts of Diet templates: inheritance, include and blocks How to use filters and how to create your own filter (For more resources related to this topic, see here.) Using templates Let's take a look at the simple HTML5 page with a header, footer, navigation bar and some content in the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Demo site</title> <link rel="stylesheet" type="text/css" href="demo.css" /> </head> <body> <header> Header </header> <nav> <ul> <li><a href="link1">Link 1</a></li> <li><a href="link2">Link 2</a></li> <li><a href="link3">Link 3</a></li> </ul> </nav> <article> <h1>Title</h1> <p>Some content here.</p> </article> <footer> Footer </footer> </body> </html> The formatting is done with a CSS file, as shown in the following: body { font-size: 1em; color: black; background-color: white; font-family: Arial; } header { display: block; font-size: 200%; font-weight: bolder; text-align: center; } footer { clear: both; display: block; text-align: center; } nav { display: block; float: left; width: 25%; } article { display: block; float: left; } Despite being simple, this page has elements that you often find on websites. If you create a website with more than one page, then you will use this structure on every page in order to provide a consistent user interface. From the 2nd page, you will violate the Don't Repeat Yourself(DRY) principle: the header and footer are the elements with fixed content. The content of the navigation bar is also fixed but not every item is always displayed. Only the real content of the page (in the article block) changes with every page. Templates solve this problem. A common approach is to define a base template with the structure. For each page, you will define a template that inherits from the base template and adds the new content. Creating your first template In the following sections, you will create a Diet template from the HTML page using different techniques. Turning the HTML page into a Diet template Let's start with a one-to-one translation of the HTML page into a Diet template. The syntax is based on the Jade templates. It looks similar to the following: doctype html html head meta(charset='utf-8') title Demo site link(rel='stylesheet', type='text/css', href='demo.css') body header | Header nav ul li a(href='link1') Link 1 li a(href='link2') Link 2 li a(href='link3') Link 3 article h1 Title p Some content here. footer | Footer The template resembles the HTML page. Here are the basic syntax rules for a template: The first word on a line is an HTML tag Attributes of an HTML tag are written as a comma-separated list surrounded by parenthesis A tag may be followed by plain text that may contain the HTML code Plain text on a new line starts with the pipe symbol Nesting of elements is done by increasing the indentation. If you want to see the result of this template, save the code as index.dt and put it together with the demo.css CSS file in the views folder. The Jade templates have a special syntax for the nested elements. 
The list item/anchor pair from the preceding code could be written in one line, as follows: li: a(href='link1') Link1 This syntax is currently not supported by vibe.d. Now, you need to create a small application to see the result of the template by following the given steps: Create a new project template with dub, using the following command: $ dub init template vibe.d Save the template as the views/index.dt file. Copy the demo.css CSS file in the public folder. Change the generated source/app.d application to the following: import vibe.d; shared static this() { auto router = new URLRouter; router.get("/", staticTemplate!"index.dt"); router.get("*", serveStaticFiles("public/")); auto settings = new HTTPServerSettings; settings.port = 8080; settings.bindAddresses = ["::1", "127.0.0.1"]; listenHTTP(settings, router); logInfo("Please open http://127.0.0.1:8080/ in your browser."); } Run dub inside the project folder to start the application and then browse to http://127.0.0.1:8080/ to see the resulting page. The application uses a new URLRouter class. This class is used to map a URL to a web page. With the router.get("/", staticTemplate!"index.dt");statement, every request for the base URL is responded with rendering of the index.dt template. The router.get("*", serveStaticFiles("public/")); statement uses a wild card to serve all other requests as static files that are stored in the public folder. Adding inheritance Up to now, the template is only a one-to-one translation of the HTML page. The next step is to split the file into two, layout.dt and index.dt. The layout.dtfile defines the general structure of a page while index.dt inherits from this file and adds new content. The key to template inheritance is the definition of a block. A block has a name and contains some template code. A child template may replace the block, append, or prepend the content to a block. In the following layout.dt file, four blocks are defined: header, navigation, content and footer. For all the blocks, except content, a default text is defined, as follows: doctype html html head meta(charset='utf-8') title Demo site link(rel='stylesheet', type='text/css', href='demo.css') body block header header Header block navigation nav ul li <a href="link1">Link 1</a> li <a href="link2">Link 2</a> li <a href="link3">Link 3</a> block content block footer footer Footer The template in the index.dt file inherits this layout and replaces the block content, as shown here: extends layout block content article h1 Title p Some content here. You can put both the files into the views folder and run dub again. The rendered page in your browser still looks the same. You can now add more pages and reuse the layout. It is also possible to change the common elements that you defined in the header, footer and navigation blocks. There is no restriction on the level of inheritance. This allows you to construct very sophisticated template systems. Using include Inheritance is not the only way to avoid repetition of template code. With the include keyword, you insert the content of another file. This allows you to put the reusable template code in separate files. 
As an example, just put the following navigation in a separate navigation.dtfile: nav     ul         li <a href="link1">Link 1</a>         li <a href="link2">Link 2</a>         li <a href="link3">Link 3</a> The index.dt file uses the include keyword to insert the navigation.dt file, as follows: doctype html html     head        meta(charset='utf-8')         title Demo site         link(rel='stylesheet', type='text/css', href='demo.css')     body         header Header         include navigation         article             h1 Title             p Some content here.         footer Footer Just as with the inheritance example, you can put both the files into the views folder and run dub again. The rendered page in your browser still looks the same. The Jade templates allow you to apply a filter to the included content. This is not yet implemented. Integrating other languages with blocks and filters So far, the templates only used the HTML content. However, a web application usually builds on a bunch of languages, most often integrated in a single document, as follows: CSS styles inside the style element JavaScript code inside the script element Content in a simplified markup language such as Markdown Diet templates have two mechanisms that are used for integration of other languages. If a tag is followed by a dot, then the block is treated as plain text. For example, the following template code: p. Some text And some more text It translates into the following: <p>     Some text     And some more text </p> The same can also be used for scripts and styles. For example, you can use the following script tag with the JavaScript code in it: script(type='text/javascript').     console.log('D is awesome') It translates to the following: <script type="text/javascript"> console.log('D is awesome') </script> An alternative is to use a filter. You specify a filter with a colon that is followed by the filter name. The script example can be written with a filter, as shown in the following: :javascript     console.log('D is awesome') This is translated to the following: <script type="text/javascript">     //<![CDATA[     console.log('D is aewsome')     //]]> </script> The following filters are provided by vibe.d: javascript for JavaScript code css for CSS styles markdown for content written in Markdown syntax htmlescapeto escape HTML symbols The css filter works in the same way as the javascript filter. The markdown filter accepts the text written in the Markdown syntax and translates it into HTML. Markdown is a simplified markup language for web authors. The syntax is available on the internet at http://daringfireball.net/projects/markdown/syntax. Here is our template, this time using the markdown filter for the navigation and the article content: doctype html html     head         meta(charset='utf-8')         title Demo site         link(rel='stylesheet', type='text/css', href='demo.css')     body        header Header         nav             :markdown                 - [Link 1](link1)                 - [Link 2](link2)                 - [Link 3](link3)         article             :markdown                 Title                 =====                 Some content here.         footer Footer The rendered HTML page is still the same. The advantage is that you have less to type, which is good if you produce a lot of content. The disadvantage is that you have to remember yet another syntax. A normal plain text block can contain HTML tags, as follows: p. 
Click this <a href="link">link</a> This is rendered as the following: <p>     Click this <a href="link">link</a> </p> There are situations where you want to even treat the HTML tags as plain text, for example, if you want to explain HTML syntax. In this case, you use the htmlescape filter, as follows: p     :htlmescape         Link syntax: <a href="url target">text to display</a> This is rendered as the following: <p>     Link syntax: &lt;a href="url target"&gt;text to display&lt;/a&gt; </p> You can also add your owns filters. The registerDietTextFilter()function is provided by vibe.d to register the new filters. This function takes the name of the filter and a pointer to the filter function. The filter function is called with the text to filter and the indentation level. It returns the filtered text. For example, you can use this functionality for pretty printing of D code, as follows: Create a new project with dub,using the following command: $ dub init filter vibe.d Create the index.dttemplate file in the viewsfolder. Use the new dcodefilter to format the D code, as shown in the following: doctype html head title Code filter example :css .keyword { color: #0000ff; font-weight: bold; } body p You can create your own functions. :dcode T max(T)(T a, T b) { if (a > b) return a; return b; } Implement the filter function in the app.dfile in the sourcefolder. The filter function outputs the text inside a <pre> tag. Identified keywords are put inside the <span class="keyword"> element to allow custom formatting. The whole application is as follows: import vibe.d; string filterDCode(string text, size_t indent) { import std.regex; import std.array; auto dst = appender!string; filterHTMLEscape(dst, text, HTMLEscapeFlags.escapeQuotes); auto regex = regex(r"(^|s)(if|return)(;|s)"); text = replaceAll(dst.data, regex, "$1<span class="keyword">$2</span>$3"); auto lines = splitLines(text); string indent_string = "n"; while (indent-- > 0) indent_string ~= "t"; string ret = indent_string ~ "<pre>"; foreach (ln; lines) ret ~= indent_string ~ ln; ret ~= indent_string ~ "</pre>"; return ret; } shared static this() { registerDietTextFilter("dcode", &filterDCode); auto settings = new HTTPServerSettings; settings.port = 8080; settings.bindAddresses = ["::1", "127.0.0.1"]; listenHTTP(settings, staticTemplate!"index.dt"); logInfo("Please open http://127.0.0.1:8080/ in your browser."); } Compile and run this application to see that the keywords are bold and blue. Summary In this article, we have seen how to create a Diet template using different techniques such as translating the HTML page into a Diet template, adding inheritance, using include, and integrating other languages with blocks and filters. Resources for Article: Further resources on this subject: MODx Web Development: Creating Lists [Article] MODx 2.0: Web Development Basics [Article] Ruby with MongoDB for Web Development [Article]
Using the Tiled map editor

Packt
13 Oct 2015
5 min read
LibGDX is a game framework and not a game engine. This is why it doesn't have any editor to place the game objects or to make levels. Tiled is a 2D level/map editor well-suited for this purpose. In this article by Indraneel Potnis, the author of LibGDX Cross-platform Development Blueprints, we will learn how to draw objects and create animations. LibGDX has an excellent support for rendering and reading maps/levels made through Tiled. (For more resources related to this topic, see here.) Drawing objects Sometimes, simple tiles may not satisfy your requirements. You might need to create objects with complex shapes. You can define these shape outlines in the editor easily. The first thing you need to do is create an object layer. Go to Layer | Add Object Layer: You will notice that a new layer has been added to the Layers pane called Object Layer 1. You can rename it if you like: With this layer selected, you can see the object toolbar getting enabled: You can draw basic shapes, such as a rectangle or an ellipse/circle: You can also draw a polygon and a polyline by selecting the appropriate options from the toolbar. Once you have added all the edges, click on the right mouse button to stop drawing the current object: Once the polygon/polyline is drawn, you can edit it by selecting the Edit Polygons option from the toolbar: After this, select the area that encompasses your polygon in order to change to the edit mode. You can edit your polygons/polylines now: You can also add custom properties to your polygons by right-clicking on them and selecting Object Properties: You can then add custom properties as mentioned previously: You can also add tiles as an object. Click on the Insert Tile icon in the toolbar: Once you select this, you can insert tiles as objects into the map. You will observe that the tiles can be placed anywhere now, irrespective of the grid boundaries: To select and move multiple objects, you can select the Select Objects option from the toolbar: You can then select the area that encompasses the objects. Once they are selected, you can move them by dragging them with your mouse cursor: You can also rotate the object by dragging the indicators at the corners after they are selected: Tile animations and images Tiled allows you to create animations in the editor. Let's make an animated shining crystal. First, we will need an animation sheet of the crystal. I am using this one, which is 16 x 16 pixels per crystal: The next thing we need to do is add this sheet as a tileset to the editor and name it crystals. After you add the tileset, you can see a new tab in the Tilesets pane: Go to View | Tile Animation Editor to open the animation editor: A new window will open that will allow you to edit the animations: On the right-hand side, you will see the individual animation frames that make up the animation. This is the animation tileset, which we added. Hold Ctrl on your keyboard, and select all of them with your mouse. Then, drag them to the left window: The numbers beside the images indicate the amount of time each image will be displayed in milliseconds. The images are displayed in this order and repeat continuously. In this example, every image will be shown for 100ms or 1/10th of a second. In the bottom-left corner, you can preview the animation you just created. Click on the Close button. You can now see something like this in the Tilesets pane: The first tile represents the animation, which we just created. Select it, and you can draw the animation anywhere in the map. 
You can see the animation playing within the map: Lastly, we can also add images to our map. To use them, we need to add an image layer to our map. Go to Layer | Add Image Layer. You will notice that a new layer has been added to the Layers pane. Rename it House: To use an image, we need to set the image's path as a property for this layer. In the Properties pane, you will find a property called Image. There is a file picker next to it where you can select the image you want: Once you set the image, you can use it to draw on the map:   Summary In this article, we learned about a tool called Tiled, and we also learned how to draw various objects and make tile animations and add images. Carry on with LibGDX Cross-platform Development Blueprints to learn how to develop great games, such as Monty Hall Simulation, Whack a Mole, Bounce the Ball, and many more. You can also take a look at the vast array of LibGDX titles from Packt Publishing, a few among these are as follows: Learning Libgdx Game Development, Andreas Oehlke LibGDX Game Development By Example, James Cook LibGDX Game Development Essentials, Juwal Bose Resources for Article:   Further resources on this subject: Getting to Know LibGDX [article] Using Google's offerings [article] Animations in Cocos2d-x [article]
Distributing your HTML5 Game

Marco Stagni
12 Oct 2015
11 min read
So, you've just completed your awesome project, and your game is ready to be played: it features tons of awesome characters and effects, a beautiful plot, an incredible soundtrack, and it sparkles at night. The only problem you're facing is: how should I distribute my game? How do I reach my players? Fortunately, this is an awesome world, and there are plenty of opportunities for you to reach millions of players. But before you are even able to reach the world, you need to package your game. When it comes to "packaging" your game, I will cover both integrating it into a web page and packaging it as an actual standalone application. In the first case, the only thing you need to do is host your HTML page wherever you want, making it accessible to everybody in the world. Then, you will spread the word through every channel you know, and hopefully your website will be reached by a lot of interested people. If you used an HTML5 game engine (something like this), then you should have plenty of tools to easily embed your game inside an HTML page. Even if you're using a much more evolved game engine, such as Unity3D, you will always be able to incorporate the game into your website (of course, you need to make sure that your chosen game engine supports this feature). In the Unity3D case, the user is prompted to download (or give access to, if already downloaded) the Unity3D Web Plugin, which allows the user to play the game with no problems at all. If you don't own a website, or you don't want to use your personal web space to distribute your product, there are a lot of available alternatives out there.

Using an external service

So, as I said, there are a lot of services that give you the ability to publish and distribute your game, both as a standalone application and as a "normal" HTML page. I will introduce and explain the most famous ones, since they're the most important platforms, and they're full of potential players.

Itch.io

Itch.io is a world-famous platform and marketplace for indie games. It has a complete web interface, and unlike Steam, itch.io doesn't provide a standalone client; users download game files directly from the platform. It accepts any kind of game: mobile, desktop, and web games. Itch.io can be considered a game marketplace, since every developer is able to set a minimum price for his/her product: every player has the ability to choose the amount they think the game deserves. This can be seen as a downside of the platform, but you have to consider that you can still set a minimum price, and giving players this freedom helps spread the word about your game more easily. Every developer can freely upload his/her content with no restrictions (except the usual ones, such as those concerning copyright infringement), with no entrance fee, and with a fast profile creation procedure. Itch.io takes a 10% fee on developers' revenue (which is an absolutely amazing ratio) and allows developers to self-manage their products, even allowing pre-orders if your game is still in development. Another awesome feature is that you don't need an account to play hosted games. This makes Itch.io a very interesting platform for those interested in selling a game. Another alternative, very similar to Itch.io, is gamejolt.com. GameJolt has basically the same features as Itch.io, although players are forced to create an account in order to play. An interesting feature of Game Jolt is that it shares advertisement revenue with developers.
Gog.com Gog.com is a more restrictive platform, since it only accepts a few games each month, but if you're one of the lucky guys whose work has been accepted, you will get a lot of promotion from the platform itself: your new release will be publicly shown on social media channels, and a great number of players will surely reach your game. This platform tries to help developers build their own community with on-site forums: every game has its own space inside the platform, where players and developers can share thoughts and discuss topics such as new releases or bugs to be fixed. Another interesting feature is that Gog.com helps you reach the end of your development path: they give you the possibility of an advance in royalties, to help your game reach its final stage. Finally, we can talk about the most famous and popular game distribution service of all time, Steam. Steam Steam is famous. Almost very player in the world knows it, and even if you don't download anything from the platform, its name is always recognizable. Steam provides a huge platform where developers and teams from all over the world can upload their works (both games and softwares), to get an income. Steam splits the revenue with a 70/30 model, which is considered a pretty fair ratio. The actual process of game uploading is called Steam GreenLight. Its main feature is being the most famous and biggest game marketplace of the world, and it provides its own client, available for Mac, Windows and Linux. The only difference from itch.io, gog.com and gamejolt.com is that only standalone applications are submittable. Building a standalone application Since this is still a beautiful world with a lot of possibilities, we can now build our own standalone game in a very easy way. At the end of the process I'm about to explain, you will obtain a single program that will contain your game, and you will be able to submit it to Steam or your favorite marketplace. The technology we're about to use is called "node-webkit", and you can find a lot of useful resources almost everywhere on the web about it. What is node-webkit? Node-webkit is like a browser that only loads your HTML content. You can think about it as a very small and super simplified version of Google Chrome, with the same amazing features. What makes node-webkit an incredible piece of software is that it's based on Chromium and Webkit, and seamlessly fully integrates Node.js, so you can access Node.js while interacting with your browser (this means that almost every node.js module you already use is available for you in node-webkit). What is more important for you, is that node-webkit is cross-platform: this means you can build your game for Windows, OS X and Linux very easily. Creating your first node-webkit application So, you now know that node-webkit is a powerful tool, and you will certainly be able to deploy your game on Steam GreenLight. The question is, how do you build a game? I will now show you what is a simple node-webkit application structure, and how to build your game for each platform. Project structure Node-webkit requires a very simple structure. The only requirements are a "package.json" file and your "index.html". So, if your project structure contains a index.html file, with all of its static assets (scripts, stylesheets, images), you have to provide a "package.json" near the index page. 
A very basic structure for the package.json file should be something like this:

    {
        "name": "My Awesome Game",
        "description": "My very first game, built with amazing software",
        "author": "Walter White",
        "main": "index.html",
        "version": "1.0.0",
        "nodejs": true,
        "window": {
            "title": "Awesomeness 3D",
            "width": 1280,
            "height": 720,
            "toolbar": false,
            "position": "center"
        }
    }

It's very easy to understand what this package.json file says: it tells node-webkit how to render our window, which file is the main one, and it provides a few fields describing our application (its name, its author, its description, and so on). The node-webkit wiki provides a very detailed description of the right way to write your package.json file. While you're still in debugging mode, setting the "toolbar" option to true is very handy: you will gain access to the Chrome Developer Tools. So, now you have your completed game. To start the packaging process, you have to zip all your files inside a .nw package, like this:

    zip -r -X app.nw . -x *.git*

This will recursively zip your game inside a package called "app.nw", excluding all the git-related files (I assume you're using git; if you're not, you can simplify the command and run zip -r -X app.nw .). We now need to download the node-webkit executable in order to create our own executable. Every platform has its own steps, and each one is explained separately.

OS X

Since the OS X version of node-webkit is an .app package, the steps you need to follow are very simple. First of all, right-click the .app file and select Show Package Contents. What you have to do now is copy and paste your "app.nw" package inside Contents/Resources. If you then launch the .app file, your game will start immediately. You're now free to customize the application you created: you can change the Info.plist file to match the properties of your game. Be careful about what you change, or you might be forced to start from the beginning. You can use your favorite text editor. You can also update the application icon by replacing the nw.icns file inside Contents/Resources. Your game is now ready to run on Mac OS X.

Linux

Linux is also very easy to set up. First of all, after you download the Linux version of node-webkit, unzip all of the files and copy your node-webkit package (your "app.nw") into the root of the folder you just unzipped. Then, simply run these commands:

    cat nw app.nw > app
    chmod +x app

And launch it using:

    ./app

To distribute your Linux package, zip the executable along with a few required files:

    nw.pak
    libffmpegsumo.so
    credits.html

The last one is legally required because of the multiple third-party open source libraries used by node-webkit.

Windows

The last one is Windows. First of all, download the Windows version of node-webkit and unzip it. Copy your app.nw file into the folder that contains the nw.exe file. Using the Windows command-line tool (CMD), run this command:

    copy /b nw.exe+app.nw app.exe

You can now launch the resulting .exe file to see your game running. To distribute your game in the easiest way, zip your executable with a few required files:

    nw.pak
    ffmpegsumo.dll
    icudtl.dat
    libEGL.dll
    libGLESv2.dll

Your Windows package is ready to be distributed. To create a Windows installer, you can use a powerful tool called WixEdit.
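Before wrapping up, it's worth seeing what the Node.js integration mentioned earlier actually buys you inside the packaged game: with "nodejs": true in package.json as above, regular Node modules can be required straight from your page scripts. This is only a minimal sketch; the save-file idea and the file name are my own illustration, not part of node-webkit or any engine:

    // Anywhere in your game's JavaScript, when running inside node-webkit:
    var fs = require('fs');      // standard Node.js modules, available in the page
    var path = require('path');

    // Keep a tiny save file in the game's working directory (illustrative only)
    var saveFile = path.join(process.cwd(), 'savegame.json');

    function saveProgress(data) {
        fs.writeFileSync(saveFile, JSON.stringify(data));
    }

    function loadProgress() {
        if (fs.existsSync(saveFile)) {
            return JSON.parse(fs.readFileSync(saveFile, 'utf8'));
        }
        return null;
    }

    saveProgress({ level: 3, score: 1280 });
    console.log(loadProgress()); // -> { level: 3, score: 1280 }

None of this would work in a plain browser page, which is exactly why node-webkit is interesting for desktop builds.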
Conclusions

Now you have your own executable, ready to be downloaded and played by millions of players. Your game is available for almost every platform, both online and offline (I won't cover the details of how to prepare your game for mobile; since it's a pretty broad topic, it will probably need another post). It has always been completely up to you to decide where your game will live, but now you have plenty of choices: you can upload your HTML content to an online platform or to your own website, or pack it into a standalone application and distribute it via any channel you know to whoever you desire. Your other option is to upload it to a marketplace like Steam. This is pretty much everything you need to know in order to get started. The purpose of this post was to give you enough information to help you decide which distribution channel is the most suitable for you: there are still a lot of alternatives, and the ones described here are probably the most famous. I hope your games will reach and engage every player in this world, and I wish you all the success you deserve.

About the Author

Marco Stagni is an Italian frontend and mobile developer with a Bachelor's degree in Computer Engineering. He's completely in love with JavaScript, and he's trying to push his knowledge of the language in every possible direction. After a few years as a frontend and Android developer, working with both Italian startups and web agencies, he's now deepening his knowledge of game programming. His efforts are currently aimed at the completion of his biggest project: a JavaScript game engine built on top of THREE.js and Physijs (the project is still in alpha, but it is already downloadable via http://npmjs.org/package/wage). You can also follow him on Twitter @marcoponds or on GitHub at http://github.com/marco-ponds. Marco is a big NBA fan.
Packt
12 Oct 2015
9 min read

Running Firefox OS Simulators with WebIDE

In this article by Tanay Pant, the author of the book Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics:

Getting to know WebIDE
Installing the Firefox OS Simulator
Installing and creating new apps with WebIDE
Using developer tools inside WebIDE
Uninstalling applications in Firefox OS

(For more resources related to this topic, see here.)

Introducing WebIDE

It is now time to have a peek at Firefox OS. You can test your applications in two ways: either by running them on a real device or by running them in the Firefox OS Simulator. Let's go ahead with the latter option, since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/. WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications/manifest using the built-in source editor. After you install Firefox, open WebIDE. You can open it by navigating to Tools | Web Developer | WebIDE. Let's now take a look at the following screenshot of WebIDE: You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components. It presents a list of Firefox OS simulators. We will install both the latest stable and the latest unstable version of Firefox OS, because we will need both of them to test our applications in the future. After you successfully install both simulators, click on Select Runtime. This will now show both OS versions listed, as shown in the following screenshot: Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS, and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these:

Installing and creating new apps using WebIDE

To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of hosted apps as websites that are served from a web server and stored online on the server itself, but that can still use appcache and IndexedDB to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format, and they can be thought of as the source code of the website bundled and distributed in a ZIP file. Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter a project name, and click on OK. After completing this, the WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop.
Now, click on the Open button and, finally, click on the OK button. This will prepare your app and show details such as the Title, Icon, Description, Location, and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel. In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions. It also contains the app.js file, which is the engine of the application and will contain its functionality; index.html, which will contain the markup of the application; and finally, the manifest.webapp file, which contains crucial information and various permissions for the application. If you click on any filename, you will notice that the file opens in an in-browser editor, where you can edit the files to make changes to your application and save them from here itself. Let's make some edits in the application, in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and the Firefox Marketplace. The manifest file is in JSON format. I went ahead and edited the developer information in the application as well, to include my name and my website. After saving all the files, you will notice that the information about the app in the WebIDE has changed! It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE (hovering over it shows the prompt Install and Run). Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application!

Using developer tools inside WebIDE

WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you used to get the app installed and running. The icon says Debug App when you hover the cursor over it. Click on this to reveal the developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input to console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox. You may have noticed in the preceding illustration that I sent a command via the console, alert('Hello Firefox');, and it simultaneously executed the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get a hang of the debugging capabilities if you haven't done so already!
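For reference, after the edits described above, the relevant part of app.js might look roughly like this. The HelloWorld template ships its own markup and code, so treat this as a sketch rather than the exact file; in particular, the element id is illustrative:

    // app.js - the "engine" of the Hello Firefox application
    window.addEventListener('load', function () {
        // This is the line whose output shows up in the WebIDE Console
        console.log('Hello Firefox');

        // Update the greeting defined in index.html (id chosen for illustration)
        var message = document.getElementById('message');
        if (message) {
            message.textContent = 'Hello Firefox';
        }
    });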
One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps due to Content Security Policy (CSP) restrictions. CSP restrictions cover remote scripts, inline scripts, javascript: URIs, the Function constructor, dynamic code execution, and plugins such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and the eval() operator are not allowed for security reasons; they produce a 400 error and security errors, respectively, when used. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted.

Browsing other runtime applications

You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS (or Gaia, to be precise). For example, the following is an illustration that shows how to open them: you can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps, as shown in the preceding illustration. I clicked on the Camera application, and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, you can refer to the Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application looks as follows in the App Launcher of the operating system:

Uninstalling applications in Firefox OS

You can remove the project from WebIDE by clicking on the Remove Project button on the home page of the application. However, this will not uninstall the application from the Firefox OS Simulator. The uninstallation system of the operating system is quite similar to iOS. You just have to double tap in OS X to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This will take you back to the Edit screen, where you can click on Done to get back to the home screen.

Summary

In this article, you learned about WebIDE: how to install the Firefox OS Simulator in WebIDE, how to use Firefox OS and install applications in it, and how to create a skeleton application using WebIDE. You then learned how to use the developer tools for applications that run in the simulator and how to browse the other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system.

Resources for Article:

Further resources on this subject:
Learning Node.js for Mobile Application Development [Article]
Introducing Web Application Development in Rails [Article]
One-page Application Development [Article]