Unboxing Docker
Packt
22 Jan 2015
10 min read
In this article by Shrikrishna Holla, author of the book Orchestrating Docker, you will learn how to install Docker on various systems, both in development and in production. For Linux-based systems, since a kernel is already available, installation is as simple as the apt-get install or yum install commands. However, to run Docker on non-Linux operating systems such as OSX and Windows, you will need to install a helper application developed by Docker Inc., called Boot2Docker. This installs a lightweight Linux VM on VirtualBox, which makes Docker available through port 2375, assigned by the Internet Assigned Numbers Authority (IANA). By the end of this article, you will have installed Docker on your system, be it in development or production, and verified it.

This article explains:

- Introducing Docker
- Installing Docker in Ubuntu (14.04 and 12.04)
- Installing Docker in Mac OSX and Windows

(For more resources related to this topic, see here.)

Docker was developed by dotCloud Inc. (currently Docker Inc.) as the framework on which they built their Platform as a Service (PaaS). When the company found increasing developer interest in the technology, it released Docker as open source and has since announced that it will focus completely on the development of the Docker technology, which is good news, as it means continual support and improvement for the platform.

There have been many tools and technologies aimed at making distributed applications possible, even easy to set up, but none of them have as wide an appeal as Docker does, primarily because of its cross-platform nature and its friendliness towards both system administrators and developers. It is possible to set up Docker in any OS, be it Windows, OSX, or Linux, and Docker containers work the same way everywhere. This is extremely powerful, as it enables a write-once-run-anywhere workflow. Docker containers are guaranteed to run the same way, be it on your development desktop, a bare-metal server, a virtual machine, a data center, or the cloud.
No longer do you have the situation where a program runs on the developer's laptop but not on the server. The nature of the workflow that comes with Docker is such that developers can concentrate completely on building applications and getting them running inside containers, whereas sysadmins can work on running the containers in deployment. This separation of roles, and the presence of a single underlying tool to enable it, simplifies the management of code and the deployment process.

But don't virtual machines already provide all of these features? Virtual Machines (VMs) are fully virtualized. This means that they share minimal resources amongst themselves, and each VM has its own set of resources allocated to it. While this allows fine-grained configuration of the individual VMs, minimal sharing also translates into greater resource usage, redundant running processes (an entire operating system needs to run!), and hence a performance overhead. Docker, on the other hand, builds on a container technology that isolates a process and makes it believe that it is running on a standalone operating system. The process still runs in the same operating system as its host, sharing its kernel. Docker uses a layered copy-on-write filesystem, Another Union File System (AUFS), which shares common portions of the operating system between containers. Greater sharing, of course, can only mean less isolation, but vast improvements in Linux resource-management solutions such as namespaces and cgroups have allowed Docker to achieve VM-like sandboxing of processes while maintaining a very small resource footprint.

Installing Docker

Docker is available in the standard repositories of most major Linux distributions. We will be looking at the installation procedures for Docker in Ubuntu 14.04 and 12.04 (Trusty and Precise), Mac OSX, and Windows.
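The layered, copy-on-write sharing described above can be illustrated with a small conceptual sketch in Python (this is an analogy for how a union filesystem layers work, not AUFS itself): base layers are shared read-only between containers, and each container writes only to its own private top layer.

```python
from collections import ChainMap

# Shared read-only base layer, analogous to an image layer under AUFS.
base_layer = {"/etc/os-release": "Ubuntu 14.04", "/bin/sh": "<binary>"}

def new_container():
    # Each container stacks an empty writable layer on top of the shared
    # base: writes land in the top layer, reads fall through to the base.
    return ChainMap({}, base_layer)

a = new_container()
b = new_container()

# Copy-on-write: writing in container 'a' never touches the shared base,
# and container 'b' does not see the change.
a["/tmp/data"] = "only in container a"

print(a["/etc/os-release"])   # shared read from the base layer
print("/tmp/data" in b)       # False: b's writable layer is independent
```

Both containers read the same base entries without duplicating them, which is the source of the small footprint the text describes.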
If you are currently using an operating system not listed above, you can look up the instructions for your operating system at https://docs.docker.com/installation/#installation.

Installing Docker in Ubuntu

Docker is supported by Ubuntu from Ubuntu 12.04 onwards. Remember that you still need a 64-bit operating system to run Docker. Let's take a look at the installation instructions for Ubuntu 14.04.

Installing Docker in Ubuntu Trusty 14.04 LTS

Docker is available as a package in the Ubuntu Trusty release's software repositories under the name docker.io:

$ sudo apt-get update
$ sudo apt-get -y install docker.io

That's it! You have now installed Docker onto your system. However, since the command has been renamed docker.io, you will have to run all Docker commands with docker.io instead of docker. The package is named docker.io because it conflicts with another KDE3/GNOME2 package called docker. If you would rather run commands as docker, you can create a symbolic link in the /usr/local/bin directory. The second command adds autocomplete rules to bash:

$ sudo ln -s /usr/bin/docker.io /usr/local/bin/docker
$ sudo sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io

Installing Docker in Ubuntu Precise 12.04 LTS

Ubuntu 12.04 comes with an older kernel (3.2), which is incompatible with some of the dependencies of Docker, so we will have to upgrade it:

$ sudo apt-get update
$ sudo apt-get -y install linux-image-generic-lts-raring linux-headers-generic-lts-raring
$ sudo reboot

The kernel that we just installed comes with AUFS built in, which is also a Docker requirement. Now let's wrap up the installation:

$ curl -s https://get.docker.io/ubuntu/ | sudo sh

This is a curl script for easy installation.
Looking at the individual pieces of this script will help us understand the process better. First, the script checks whether our Advanced Package Tool (APT) system can deal with https URLs, and installs apt-transport-https if it cannot:

# Check that HTTPS transport is available to APT
if [ ! -e /usr/lib/apt/methods/https ]; then
  apt-get update
  apt-get install -y apt-transport-https
fi

Then it adds the Docker repository key to our local keychain:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

You may receive a warning that the package isn't trusted. Answer yes to continue the installation. Finally, it adds the Docker repository to the APT sources list, then updates and installs the lxc-docker package:

$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

Docker versions before 0.9 had a hard dependency on LXC (Linux Containers) and hence couldn't be installed on VMs hosted on OpenVZ. But since 0.9, the execution driver has been decoupled from the Docker core, which allows us to use one of numerous isolation tools such as LXC, OpenVZ, systemd-nspawn, libvirt-lxc, libvirt-sandbox, qemu/kvm, BSD Jails, Solaris Zones, and even chroot! However, Docker comes by default with an execution driver for its own containerization engine, called libcontainer, a pure Go library that can access the kernel's container APIs directly, without any other dependencies. To use any other containerization engine, say LXC, you can use the -e flag, like so:

$ docker -d -e lxc

Now that we have Docker installed, we can get going at full steam! There is one problem, though: software repositories like APT are usually behind the times and often carry older versions. Docker is a fast-moving project and a lot has changed in the last few versions, so it is always recommended to have the latest version installed.
Upgrading Docker

You can upgrade Docker as and when it is updated in the APT repositories. An alternative (and better) method is to build from source. It is recommended to upgrade to the newest stable version, as newer versions might contain critical security updates and bug fixes. Also, the examples in this book assume a Docker version greater than 1.0, whereas Ubuntu's standard repositories package a much older version.

Mac OSX and Windows

Docker depends on the Linux kernel, so we need to run Linux in a VM and install and use Docker through it. Boot2Docker is a helper application built by Docker Inc. that installs a VM containing a lightweight Linux distribution made specifically to run Docker containers. It also comes with a client that provides the same Application Program Interface (API) as that of Docker, but interfaces with the docker daemon running in the VM, allowing us to run commands from within the OSX/Windows terminal. To install Boot2Docker, carry out the following steps:

1. Download the latest release of Boot2Docker for your operating system from http://boot2docker.io/. The installation image is shown as follows.
2. Run the installer, which will install VirtualBox and the Boot2Docker management tool.
3. Run Boot2Docker. The first run will ask you for a Secure Shell (SSH) key passphrase. Subsequent runs of the script will connect you to a shell session in the virtual machine; if needed, they will initialize a new VM and start it.

Alternatively, to run Boot2Docker, you can use the boot2docker terminal command:

$ boot2docker init # First run
$ boot2docker start
$ export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375

You will have to run boot2docker init only once. It will ask you for an SSH key passphrase, which is subsequently used by boot2docker ssh to authenticate SSH access. Once you have initialized Boot2Docker, you can subsequently use it with the boot2docker start and boot2docker stop commands.
DOCKER_HOST is an environment variable that, when set, indicates to the Docker client the location of the docker daemon. A port-forwarding rule is set to the Boot2Docker VM's port 2375 (where the docker daemon runs). You will have to set this variable in every terminal shell in which you want to use Docker.

Bash allows you to insert commands by enclosing subcommands within `` or $(). These are evaluated first, and the result is substituted into the outer command.

If you are the kind who loves to poke around, the Boot2Docker default user is docker and the password is tcuser.

The Boot2Docker management tool provides several commands:

$ boot2docker
Usage: boot2docker [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|delete|download|version} [<args>]

When using Boot2Docker, the DOCKER_HOST environment variable has to be available in the terminal session for Docker commands to work. So, if you are getting the error Post http:///var/run/docker.sock/v1.12/containers/create: dial unix /var/run/docker.sock: no such file or directory, it means that the environment variable is not assigned. It is easy to forget to set this environment variable when you open a new terminal. For OSX users, to make things easy, add the following line to your .bashrc or .bash_profile file:

alias setdockerhost='export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375'

Now, whenever you open a new terminal or get the above error, just run the following command:

$ setdockerhost

This image shows how the terminal screen looks when you have logged into the Boot2Docker VM.

Summary

I hope you got hooked on Docker. In this article, you learned some history and some basics of Docker and how it works. We saw how it differs from, and improves upon, virtual machines. Then we proceeded to install Docker on our development setup, be it Ubuntu, Mac, or Windows.
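As a quick illustration of what a client has to do with this variable, here is a minimal Python sketch (the function name and the parsing are illustrative, not Docker's actual client code) that splits a DOCKER_HOST value of the form tcp://<ip>:<port> into a host and port, defaulting to the IANA-assigned port 2375 mentioned earlier:

```python
from urllib.parse import urlparse

def parse_docker_host(value):
    """Split a DOCKER_HOST value like 'tcp://192.168.59.103:2375'
    into a (host, port) pair, defaulting the port to 2375."""
    parsed = urlparse(value)
    if parsed.scheme != "tcp":
        raise ValueError("expected a tcp:// URL, got %r" % value)
    return parsed.hostname, parsed.port or 2375

host, port = parse_docker_host("tcp://192.168.59.103:2375")
print(host, port)  # 192.168.59.103 2375
```

If the variable is unset or malformed, a real client falls back to the local Unix socket, which is exactly why the "dial unix /var/run/docker.sock" error above appears when you forget to export DOCKER_HOST.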
Now you can pat yourself on the back and proceed with Docker.

Resources for Article:

Further resources on this subject:
- Managing Heroku from the Command Line
- Target Exploitation
- Wireless and Mobile Hacks

Let's Get Started with Active Directory
Packt
22 Jan 2015
11 min read
In this article by Uma Yellapragada, author of the book Active Directory with PowerShell, we will see how PowerShell cmdlets and modules are used for managing Active Directory. (For more resources related to this topic, see here.)

Welcome to managing Active Directory using PowerShell. There are a lot of good books from Packt Publishing that you might want to refer to in order to improve your PowerShell skills. Assuming that you know the basics of PowerShell, this book further helps you to manage Active Directory using PowerShell. Do not worry if you are not familiar with PowerShell; you can still make use of the content in this book, because most of the one-liners quoted in it are self-explanatory.

This chapter will take you through some of the essential tools required for managing Active Directory using PowerShell:

- The Microsoft Active Directory PowerShell module
- The Quest Active Directory PowerShell module
- Native PowerShell cmdlets

Details of how to get, install, and configure these tools are also provided in this chapter. The content in this book relies completely on these tools to query Active Directory, so it is important to install and configure them before you proceed with further chapters.

Though you can install and use these tools on legacy operating systems such as Windows XP, Windows 7, Windows Server 2003, Windows Server 2003 R2, Windows Server 2008, Windows Server 2008 R2, and so on, we will focus mostly on using them on the latest versions of operating systems, such as Windows 8.1 and Windows Server 2012 R2. Most of the operations performed on Windows 8.1 and Windows Server 2012 R2 work on their predecessors; any noticeable differences will be highlighted as far as possible. Another reason for using the latest versions of operating systems for demonstration is the feature list that they provide. When the Microsoft Active Directory PowerShell module was initially introduced with Windows Server 2008 R2, it came with 76 cmdlets.
In Windows Server 2012, the number of cmdlets increased from 76 to 135, and the Windows Server 2012 R2 release has 147 Active Directory cmdlets. Looking at this pattern, it is clear that Microsoft is focusing on bringing more and more functionality into the Active Directory PowerShell module with each new release, which means the range of actions we can perform with it keeps increasing. For these reasons, Windows 8.1 and Windows Server 2012 R2 are used for demonstration, so that you can learn more about managing Active Directory using PowerShell.

To see how many cmdlets a module has, use the following commands once you have the Active Directory PowerShell module installed using the approach discussed later in this chapter. First, import the Active Directory module in a PowerShell window (you will see a progress bar, as shown in the following screenshot):

Import-Module ActiveDirectory

Once the module is imported, run the following command to verify how many cmdlets the Active Directory module has:

(Get-Command -Module ActiveDirectory).Count

As you can see in the following screenshot, there are 147 cmdlets available in the Active Directory module on a Windows Server 2012 R2 server.

Ways to automate Active Directory operations

Active Directory operations can be automated in different ways. You can use C#, VB, command-line tools (such as dsquery), VBScript, PowerShell, Perl, and so on. Since this book focuses on PowerShell, let's examine the methodologies that are widely used to automate Active Directory operations using PowerShell. There are three ways to manage Active Directory with PowerShell, and each has its own advantages and operating environments:

- The Microsoft Active Directory module
- The Quest Active Directory PowerShell cmdlets
- The native method of PowerShell

Let's dig into each of these and understand a bit more about how to install, configure, and use them.
The Microsoft Active Directory module

As the name indicates, this PowerShell module is developed and supported by Microsoft itself. It contains a group of cmdlets that you can use to manage Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS). The Microsoft Active Directory module was introduced with the Windows Server 2008 R2 operating system, and you need at least this version of the OS to make use of it. The module comes as an optional feature on Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2, and it is installed by default when you install the AD DS or AD LDS server roles, or when you promote servers to domain controllers. You can have this module installed on Windows 7 or Windows 8 by installing the Remote Server Administration Tools (RSAT) feature.

This module works by querying Active Directory through a service called Active Directory Web Services (ADWS), which is available in Windows Server 2008 R2 or later operating systems. This means your domain should have at least one domain controller running Windows Server 2008 R2 or above for the module to work. Don't get disappointed if none of your domain controllers are upgraded to Windows Server 2008 R2: Microsoft has released a component called Active Directory Management Gateway Service, which runs like the Windows Server 2008 R2 ADWS service and provides the same functionality on Windows Server 2003 or Windows Server 2008 domain controllers. You can read more about ADWS and the gateway service functionality at http://technet.microsoft.com/en-us/library/dd391908(v=ws.10).aspx.

Installing the Active Directory module

As mentioned earlier, if you promote a Windows Server 2008 R2 or later operating system to a domain controller, there is no need to install this module explicitly; it comes with the domain controller installation process.
Installing the Active Directory module on Windows 7, Windows 8, and Windows 8.1 is a two-step process. First, we install the Remote Server Administration Tools (RSAT) kit for the respective operating system; then, as a second step, we enable the Active Directory module, which is part of RSAT.

Installing the Remote Server Administration Tools kit

First, download the RSAT package from one of the following links, based on your operating system, and install it with administrative privileges:

- RSAT for Windows 8.1: http://www.microsoft.com/en-us/download/details.aspx?id=39296
- RSAT for Windows 8: http://www.microsoft.com/en-us/download/details.aspx?id=28972
- RSAT for Windows 7 with SP1: http://www.microsoft.com/en-us/download/details.aspx?id=7887

Installing the Active Directory module

Once the RSAT package is installed, you need to enable Remote Server Administration Tools | Role Administration Tools | AD DS and AD LDS Tools | Active Directory module for Windows PowerShell via the Turn Windows features on or off wizard, which you will find in the Control Panel of the Windows 7 or Windows 8 operating systems.

To install the Active Directory module on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 member servers, there is no need to install additional components. The module is already part of the available features, and it's just a matter of adding the feature to the operating system. This can be done using PowerShell or the regular GUI approach. If you want to enable this feature using PowerShell in the aforementioned server operating systems, use the following commands:

Import-Module ServerManager
Add-WindowsFeature RSAT-AD-PowerShell

The RSAT package comes with the build on Windows Server 2008 R2 and Windows Server 2012, so there is no need to install RSAT explicitly. The Server Manager PowerShell module in these operating systems contains the Add-WindowsFeature cmdlet, which is used for installing features.
In this case, we are installing the Active Directory module for Windows PowerShell feature in the AD DS and AD LDS tools. If you want to perform this installation on remote servers, you can use the PSRemoting feature of PowerShell; this is the best approach if you want to deploy the Active Directory module on all the servers in your environment. The Active Directory module for Windows PowerShell can be installed using the GUI as well: use Server Manager to add Active Directory module for Windows PowerShell via the Add Roles and Features Wizard, as shown in the following screenshot.

Testing the functionality

After installation, you can verify the functionality of the Active Directory module by importing it and running a few basic cmdlets. A cmdlet is a simple command that is used in the Windows PowerShell environment. You can read more about cmdlets at http://msdn.microsoft.com/en-us/library/ms714395(v=vs.85).aspx. Your installation is successful if you see your domain information after running the Get-ADDomain cmdlet, as shown in the following:

Import-Module ActiveDirectory
Get-ADDomain

One good thing about PowerShell is that you can avoid the hassle of typing whole commands in the PowerShell window by using the Tab Expansion feature. You can type part of a command and press the Tab key to autocomplete it; if multiple commands (or cmdlets) match the string you typed, press Tab multiple times to select the one you need. This is pretty handy, because some of the cmdlets in Active Directory are considerably long and it can get really frustrating to type them. Refer to the TechNet page at http://technet.microsoft.com/en-us/library/dd315316.aspx to understand how you can use this feature of PowerShell.

Quest Active Directory PowerShell cmdlets

Previously, you learned that the Microsoft Active Directory (MS AD) module was introduced with Windows Server 2008 R2.
So, how did system administrators manage their Active Directory environments before the introduction of the MS AD module? Quest Active Directory PowerShell cmdlets were available at that time to simplify AD operations. This Quest module has a bunch of cmdlets to perform various operations in Active Directory. Even after Microsoft released its Active Directory module, many people still use the Quest AD cmdlets because of their simplicity and the wide variety of management options they provide. The Quest AD module is part of the Quest ActiveRoles Server product, which is used for managing Active Directory objects. It is also referred to as ActiveRoles Management Shell for Active Directory because it is an integral part of the ActiveRoles product.

Installing Quest

Quest Software (now acquired by Dell) allows you to download ActiveRoles Management Shell for free, and you can download a copy from https://support.software.dell.com/download-install-detail/5024645. You will find two versions of the Quest AD Management Shell on the download page; be sure to download the latest one, v1.6.0.

While trying to install the MSI, you might get a prompt saying Microsoft .NET Framework 3.5 Service Pack 1 or later is required. You will experience this even if you have .NET Framework 4.0 installed on your computer; it seems the MSI specifically looks for .NET 3.5 SP1. So, ensure that you have .NET Framework 3.5 SP1 installed before you start installing the Quest AD Management Shell MSI. You might want to refer to the TechNet article at http://technet.microsoft.com/en-us/library/dn482071.aspx to understand the .NET Framework 3.5 installation process on Windows Server 2012 R2.

After the MSI installation completes, you can start using this module in two ways: you can either search in Program Files for the application named ActiveRoles Management Shell for Active Directory, or you can add the Quest snap-in to a regular PowerShell window.
It's preferable to add the snap-in directly to an existing PowerShell window rather than opening a new Quest AD Shell when you want to manage Active Directory using the Quest cmdlets. Also, if you are authoring any scripts based on Quest AD cmdlets, it is best to add the snap-in in your code rather than asking the script's users to run it from a Quest AD Shell window. The Quest AD snap-in can be added to an existing PowerShell window using the following command:

Add-PSSnapin Quest.ActiveRoles.ADManagement

After adding the snap-in, you can list the cmdlets it provides using the following command:

Get-Command -Module Quest.ActiveRoles.ADManagement

Get-Command is the cmdlet used to list the cmdlets or functions inside a given module or snap-in after importing it. This version (v1.6.0) of the Quest AD Shell has 95 cmdlets. Unlike the Microsoft Active Directory module, the number of cmdlets does not change from one operating system to another in the Quest AD Shell; the list of cmdlets is the same irrespective of the operating system where the tool is installed. One advantage of the Quest AD Shell is that it doesn't need Active Directory Web Services, which is mandatory for the Microsoft Active Directory module. The Quest AD Shell also works with Windows Server 2003-based domain controllers without the need to install the Active Directory Management Gateway Service.

Testing the functionality

Open a new PowerShell window and try the following commands. The Get-QADRootDSE cmdlet should return your current domain information. All the Quest AD Shell cmdlets have the word QAD prefixed to the noun:

Add-PSSnapin -Name Quest.ActiveRoles.ADManagement
Get-QADRootDSE

Summary

In this article, we reviewed the ways to automate Active Directory operations and the tools required to do so: the Microsoft Active Directory module, the Quest Active Directory module, and the Remote Server Administration Tools, along with the cmdlets used to verify their functionality.

Resources for Article:

Further resources on this subject:
- So, what is PowerShell 3.0 WMI?
- Unleashing Your Development Skills with PowerShell
- How to use PowerShell Web Access to manage Windows Server

Sentiment Analysis of Twitter Data - Part 1
Janu Verma
21 Jan 2015
4 min read
Twitter represents a fundamentally new instrument for making social measurements. Millions of people voluntarily express their opinions on any topic imaginable, and this data source is incredibly valuable for both research and business. There have been numerous studies of this data addressing sociological, political, economic, and network-analytical questions.

We can tap the vast amount of data from Twitter to generate “public opinion” on certain topics by aggregating the individual tweet results over time. Sentiment analysis aims to determine how a certain person or group reacts to a specific topic. Traditionally, we would run surveys to gather data and do statistical analysis. With Twitter, it works by extracting tweets containing references to the desired topic, computing the sentiment polarity and strength of each tweet, and then aggregating the results for all such tweets. Companies use this information to gather public opinion on their products and services and make data-informed decisions. We can also track changes in users’ opinions towards a topic over time, allowing us to identify the events that caused these changes.

One of the first studies of Twitter data for sentiment examined public perception of Obama’s performance as President. Another (fun) example could be to explore the variation in sentiment regarding the TV series “Game of Thrones”: the unpredictable episode “The Rains of Castamere” resulted in a lot of negative tweets and a peak in the sentiment score. Also, we can look at the geocoded information in tweets and analyze the relation between location and mood; for example, people in California may be happy about event X, while New Yorkers didn’t like it much.

Sentiment analysis employs natural language processing (NLP), text mining, and computational linguistics to extract subjective information from textual data.

Applications

Sentiment analysis techniques find applications in technology, finance, and research.
Some important applications of sentiment analysis are:

- predicting stocks
- computing movie ratings
- discerning product satisfaction
- analyzing political or apolitical campaigns

Techniques

There are broadly two categories of sentiment analysis:

- Lexical methods: These techniques employ dictionaries of words annotated with their semantic polarity and sentiment strength. This is then used to calculate a score for the polarity and/or sentiment of the document. Usually this method gives high precision but low recall.
- Machine learning methods: Such techniques require creating a model by training a classifier with labeled examples. This means that you must first gather a dataset with examples for the positive, negative, and neutral classes, extract features from the examples, and then train the algorithm based on those examples. These methods are used mainly for computing the polarity of the document.

The choice of method depends heavily upon the application, the domain, and the language. Using lexicon-based techniques with large dictionaries enables you to achieve very good results; nevertheless, these techniques require a lexicon, which is not available in all languages. On the other hand, machine learning based techniques can deliver good results, but they require training on labeled data.

Here are some examples of companies that use sentiment analysis. AlchemyAPI, based in Denver, is a really cool company that provides resources to do sentiment analysis for an entity on a document or webpage. The Stock Sonar uses sentiment analysis of unstructured text to determine whether online press is being positive or negative towards businesses, by identifying lexical sentiment as well as business events.

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics.
His background is in mathematics and machine learning, and he leverages tools from these areas to answer questions in biology. Janu holds a Masters in Theoretical Physics from the University of Cambridge in the UK, and dropped out of the mathematics PhD program at Kansas State University after 3 years. He also writes about data science, machine learning, and mathematics at Random Inferences.

Until Sunday 24th January, you can save 50% on our leading Machine Learning titles as we celebrate Machine Learning week. From Python to Spark, and from R to Java, we've got a range of tools and languages covered so you can explore Machine Learning from a range of different perspectives. You can also pick up a free Machine Learning eBook every day this week from our Free Learning page – don’t miss out!
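Returning to the techniques section above: the lexical method and the over-time aggregation it feeds can be sketched in a few lines of Python. The tiny word list and integer weights here are purely illustrative, standing in for a real annotated lexicon:

```python
from collections import defaultdict

# Toy lexicon: real systems use large dictionaries annotated with
# polarity and strength (these words and weights are illustrative).
LEXICON = {"love": 2, "great": 1, "good": 1,
           "bad": -1, "awful": -2, "hate": -2}

def tweet_polarity(text):
    """Sum the sentiment weights of known words; >0 positive, <0 negative.
    Unknown words score 0, which is why lexical methods have low recall."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

def daily_average(tweets):
    """Aggregate (date, text) pairs into an average polarity per date."""
    totals, counts = defaultdict(int), defaultdict(int)
    for date, text in tweets:
        totals[date] += tweet_polarity(text)
        counts[date] += 1
    return {d: totals[d] / counts[d] for d in totals}

tweets = [
    ("2013-06-02", "i love this show"),
    ("2013-06-02", "that episode was awful i hate it"),
    ("2013-06-03", "great episode"),
]
print(daily_average(tweets))  # {'2013-06-02': -1.0, '2013-06-03': 1.0}
```

A dip in the per-day average, like the one for 2013-06-02 here, is the kind of signal the "Rains of Castamere" example describes at scale.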

Dragging a CCNode in Cocos2D-Swift
Packt
21 Jan 2015
6 min read
In this article by Ben Trengrove, author of the book Cocos2D Game Development Essentials, we will see how to update our sprite's position according to the touch movement. (For more resources related to this topic, see here.)

Very often in development with Cocos2D you will want the ability to drag a node around the screen. It is not a built-in behavior, but it can be easily coded. To do it, you will need to track the touch information; using this information, you will move the sprite to the updated position any time the touch moves. Let's get started.

Add a new Boolean property to your private interface:

@interface HelloWorldScene ()
@property (nonatomic, assign) BOOL dragging;
@end

Now, add the following code to the touchBegan method:

-(void) touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
  CGPoint touchLoc = [touch locationInNode:self];
  if (CGRectContainsPoint(_sprite.boundingBox, touchLoc)) {
    self.dragging = YES;
    NSLog(@"Start dragging");
  }
}

Add a touchMoved method with the following code:

- (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
  CGPoint touchLoc = [touch locationInNode:self];
  if (self.dragging) {
    _sprite.position = touchLoc;
  }
}

What is being done in these methods is this: first, you check whether the initial touch was inside the sprite. If it was, we set a Boolean to say that the user is dragging the node; they have, in effect, picked up the node. Next, in the touchMoved method, it is as simple as: if the user did touch down on the node and moved, set the new position of the node to the touch location. Next we just have to implement letting go of the sprite. This is done in touchEnded. Implement the touchEnded method as follows:

- (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
  self.dragging = NO;
}

Now, if you build and run the app, you will be able to drag the sprite around. There is one small problem, however: if you don't grab the sprite at its center, you will see that the node snaps its center to the touch.
What you really want is for the sprite to move relative to where on the node it was touched. You will make this adjustment now. To make this fix, you are going to calculate the offset of the initial touch from the node's origin. This offset will be stored and applied to the final position of the node in touchMoved. Add another property to your private interface:

    @property (nonatomic, assign) CGPoint dragOffset;

Modify your touchBegan method to the following:

    -(void) touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
        CGPoint touchLoc = [touch locationInNode:self];
        CGPoint touchOffset = [touch locationInNode:_sprite];
        if (CGRectContainsPoint(_sprite.boundingBox, touchLoc)) {
            self.dragging = YES;
            NSLog(@"Start dragging");
            self.dragOffset = touchOffset;
        }
    }

Notice that using the locationInNode: method, you can calculate the position of the touch relative to the node. This information is only useful if the touch was indeed inside the node, so you only store it in that case. Now, modify your touchMoved method to the following:

    - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
        CGPoint touchLoc = [touch locationInNode:self];
        // Check if we are already dragging
        if (self.dragging) {
            CGPoint offsetPosition = ccpSub(touchLoc, self.dragOffset);
            // Calculate an offset to account for the anchor point
            CGPoint anchorPointOffset = CGPointMake(_sprite.anchorPoint.x * _sprite.boundingBox.size.width,
                                                    _sprite.anchorPoint.y * _sprite.boundingBox.size.height);
            // Add the offset and anchor point adjustment together to get the final position
            CGPoint positionWithAnchorPoint = ccpAdd(offsetPosition, anchorPointOffset);
            _sprite.position = positionWithAnchorPoint;
        }
    }

The offset position is calculated by subtracting the stored drag offset from the touch location using the Cocos2d convenience function ccpSub, which subtracts one point from another. Using the anchor point and size of the sprite, an adjustment is calculated to account for different anchor points.
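Although the scene code is Objective-C, the arithmetic in touchMoved is easy to sanity-check in a few lines of plain JavaScript. This is only an illustrative sketch: sub and add stand in for Cocos2d's ccpSub and ccpAdd, and the sample numbers are made up.

```javascript
// Sketch of the drag arithmetic: final position = (touch - dragOffset)
// plus an anchor-point compensation. sub/add mimic ccpSub/ccpAdd.
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y }; }
function add(a, b) { return { x: a.x + b.x, y: a.y + b.y }; }

function dragPosition(touchLoc, dragOffset, anchorPoint, size) {
    var offsetPosition = sub(touchLoc, dragOffset);
    // Compensate for the node's anchor point (0.5/0.5 means centred)
    var anchorOffset = {
        x: anchorPoint.x * size.width,
        y: anchorPoint.y * size.height
    };
    return add(offsetPosition, anchorOffset);
}

// A 72x72 sprite grabbed 10 points in from its bottom-left corner:
console.log(dragPosition({ x: 100, y: 100 }, { x: 10, y: 10 },
                         { x: 0.5, y: 0.5 }, { width: 72, height: 72 }));
// → { x: 126, y: 126 }
```

With a centred anchor point, the 36-point half-size compensation keeps the sprite under the finger exactly where it was grabbed.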
Once these two points have been calculated, they are added together to create the final sprite position. Build and run the app now; you will have a very natural dragging mechanic. For reference, here is the complete scene:

    @interface HelloWorldScene ()
    @property (nonatomic, assign) BOOL dragging;
    @property (nonatomic, assign) CGPoint dragOffset;
    @end

    - (id)init
    {
        // Apple recommend assigning self with super's return value
        self = [super init];
        if (!self) return(nil);

        // Enable touch handling on scene node
        self.userInteractionEnabled = YES;

        // Create a colored background (Dark Grey)
        CCNodeColor *background = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0.2f green:0.2f blue:0.2f alpha:1.0f]];
        [self addChild:background];

        // Add a sprite
        _sprite = [CCSprite spriteWithImageNamed:@"Icon-72.png"];
        _sprite.position = ccp(self.contentSize.width/2, self.contentSize.height/2);
        _sprite.anchorPoint = ccp(0.5, 0.5);
        [self addChild:_sprite];

        // Create a back button
        CCButton *backButton = [CCButton buttonWithTitle:@"[ Menu ]" fontName:@"Verdana-Bold" fontSize:18.0f];
        backButton.positionType = CCPositionTypeNormalized;
        backButton.position = ccp(0.85f, 0.95f); // Top Right of screen
        [backButton setTarget:self selector:@selector(onBackClicked:)];
        [self addChild:backButton];

        // done
        return self;
    }

    // -----------------------------------------------------------------------
    #pragma mark - Touch Handler
    // -----------------------------------------------------------------------

    -(void) touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
        CGPoint touchLoc = [touch locationInNode:self];
        CGPoint touchOffset = [touch locationInNode:_sprite];
        if (CGRectContainsPoint(_sprite.boundingBox, touchLoc)) {
            self.dragging = YES;
            NSLog(@"Start dragging");
            self.dragOffset = touchOffset;
        }
    }

    - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
        CGPoint touchLoc = [touch locationInNode:self];
        if (self.dragging) {
            CGPoint offsetPosition = ccpSub(touchLoc,
self.dragOffset);
            CGPoint anchorPointOffset = CGPointMake(_sprite.anchorPoint.x * _sprite.boundingBox.size.width,
                                                    _sprite.anchorPoint.y * _sprite.boundingBox.size.height);
            CGPoint positionWithAnchorPoint = ccpAdd(offsetPosition, anchorPointOffset);
            _sprite.position = positionWithAnchorPoint;
        }
    }

    - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
        self.dragging = NO;
    }

Summary

In this article, we saw how to update a sprite's position according to the touch movement.

Resources for Article:

Further resources on this subject:
Why should I make cross-platform games? [article]
Animations in Cocos2d-x [article]
Moving the Space Pod Using Touch [article]
Packt
21 Jan 2015
53 min read

Highcharts Configurations

This article is written by Joe Kuan, the author of Learning Highcharts 4. All Highcharts graphs share the same configuration structure, and it is crucial for us to become familiar with the core components. However, it is not possible to go through all of the configurations in this article. Here, we will explore the functional properties that are most used and demonstrate them with examples. We will learn how Highcharts manages layout, and then explore how to configure axes, specify single and multiple series data, followed by looking at formatting and styling tooltips in both JavaScript and HTML. After that, we will get to know how to polish our charts with various types of animations and apply color gradients. Finally, we will explore the drilldown interactive feature. In this article, we will cover the following topics:

Understanding Highcharts layout
Framing the chart with axes

(For more resources related to this topic, see here.)

Configuration structure

In the Highcharts configuration object, the components at the top level represent the skeleton structure of a chart.
The following is a list of the major components that are covered in this article:

chart: This has configurations for the top-level chart properties such as layouts, dimensions, events, animations, and user interactions
series: This is an array of series objects (consisting of data and specific options) for single and multiple series, where the series data can be specified in a number of ways
xAxis/yAxis/zAxis: This has configurations for all the axis properties such as labels, styles, range, intervals, plot lines, plot bands, and backgrounds
tooltip: This has the layout and format style configurations for the series data tooltips
drilldown: This has configurations for drilldown series and the ID field associated with the main series
title/subtitle: This has the layout and style configurations for the chart title and subtitle
legend: This has the layout and format style configurations for the chart legend
plotOptions: This contains all the plotting options, such as display, animation, and user interactions, for common series and specific series types
exporting: This has configurations that control the layout and the function of print and export features

For reference information concerning all configurations, go to http://api.highcharts.com.

Understanding Highcharts' layout

Before we start to learn how the Highcharts layout works, it is imperative that we understand some basic concepts first. First, we set a border around the plot area. To do that, we can set the plotBorderWidth and plotBorderColor options in the chart section, as follows:

        chart: {
                renderTo: 'container',
                type: 'spline',
                plotBorderWidth: 1,
                plotBorderColor: '#3F4044'
        },

The second border is set around the Highcharts container. Next, we extend the preceding chart section with additional settings:

        chart: {
                renderTo: 'container',
                ....
                borderColor: '#a1a1a1',
                borderWidth: 2,
                borderRadius: 3
        },

This sets the container border color, with a width of 2 pixels and a corner radius of 3 pixels. As we can see, there is a border around the container, and this is the boundary that the Highcharts display cannot exceed: By default, Highcharts displays have three different areas: spacing, labeling, and plot area. The plot area is the area inside the inner rectangle that contains all the plot graphics. The labeling area is the area where labels such as title, subtitle, axis title, legend, and credits go, around the plot area, so that it is between the edge of the plot area and the inner edge of the spacing area. The spacing area is the area between the container border and the outer edge of the labeling area. The following screenshot shows the three different kinds of areas. A gray dotted line is inserted to illustrate the boundary between the spacing and labeling areas. Each chart label can be positioned using one of the following two layouts:

Automatic layout: Highcharts automatically adjusts the plot area size based on the labels' positions in the labeling area, so the plot area does not overlap with the label element at all. Automatic layout is the simplest way to configure, but offers less control. This is the default way of positioning the chart elements.
Fixed layout: There is no concept of a labeling area. The chart label is specified in a fixed location so that it has a floating effect on the plot area. In other words, the plot area side does not automatically adjust itself to the adjacent label position. This gives the user full control of exactly how to display the chart.

The spacing area controls the offset of the Highcharts display on each side. As long as the chart margins are not defined, increasing or decreasing the spacing area has a global effect on the plot area measurements in both automatic and fixed layouts.
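Putting the two border snippets above together, a minimal runnable configuration might look like the following sketch. It assumes the Highcharts library is loaded on the page and that a div with the id container exists; the series data is illustrative.

```javascript
// Minimal sketch combining the plot border and container border options
// discussed above. Assumes Highcharts is loaded and <div id="container">
// exists in the page.
var chart = new Highcharts.Chart({
    chart: {
        renderTo: 'container',
        type: 'spline',
        // Border around the plot area
        plotBorderWidth: 1,
        plotBorderColor: '#3F4044',
        // Border around the whole Highcharts container
        borderColor: '#a1a1a1',
        borderWidth: 2,
        borderRadius: 3
    },
    title: { text: 'Layout demo' },
    series: [{ data: [1, 3, 2, 4] }]  // placeholder data
});
```

Rendering this makes the spacing, labeling, and plot areas easy to see: the gap between the two borders is exactly the spacing plus labeling area described above.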
Chart margins and spacing settings

In this section, we will see how chart margins and spacing settings affect the overall layout. Chart margins can be configured with the properties margin, marginTop, marginLeft, marginRight, and marginBottom, and they are not enabled by default. Setting chart margins has a global effect on the plot area, so that none of the label positions or chart spacing configurations can affect the plot area size. Hence, all the chart elements are in a fixed layout mode with respect to the plot area. The margin option is an array of four margin values, one for each direction, the same as in CSS, starting from north and going clockwise. Also, the margin option has a lower precedence than any of the directional margin options, regardless of their order in the chart section. Spacing configurations are enabled by default with a fixed value on each side. These can be configured in the chart section with the property names spacing, spacingTop, spacingLeft, spacingBottom, and spacingRight. In this example, we are going to increase or decrease the margin or spacing property on each side of the chart and observe the effect. The following are the chart settings:

            chart: {
                renderTo: 'container',
                type: ...
                marginTop: 10,
                marginRight: 0,
                spacingLeft: 30,
                spacingBottom: 0
            },

The following screenshot shows what the chart looks like: The marginTop property fixes the plot area's top border 10 pixels away from the container border. It also changes the top border into fixed layout for any label elements, so the chart title and subtitle float on top of the plot area. The spacingLeft property increases the spacing area on the left-hand side, so it pushes the y axis title further in. As it is in automatic layout (without declaring marginLeft), it also pushes the plot area's west border in.
Setting marginRight to 0 will override all the default spacing on the chart's right-hand side and change it to fixed layout mode. Finally, setting spacingBottom to 0 makes the legend touch the lower bar of the container, so it also stretches the plot area downwards. This is because the bottom edge is still in automatic layout even though spacingBottom is set to 0.

Chart label properties

Chart labels such as xAxis.title, yAxis.title, legend, title, subtitle, and credits share common property names, as follows:

align: This is for the horizontal alignment of the label. Possible keywords are 'left', 'center', and 'right'. As for the axis title, it is 'low', 'middle', and 'high'.
floating: This is to give the label position a floating effect on the plot area. Setting this to true will cause the label position to have no effect on the adjacent plot area's boundary.
margin: This is the margin setting between the label and the side of the plot area adjacent to it. Only certain label types have this setting.
verticalAlign: This is for the vertical alignment of the label. The keywords are 'top', 'middle', and 'bottom'.
x: This is for horizontal positioning in relation to alignment.
y: This is for vertical positioning in relation to alignment.

As for the labels' x and y positioning, they are not used for absolute positioning within the chart. They are designed for fine adjustment with the label alignment. The following diagram shows the coordinate directions, where the center represents the label location: We can experiment with these properties with a simple example of the align and y position settings, by placing both title and subtitle next to each other. The title is shifted to the left with align set to 'left', whereas the subtitle alignment is set to 'right'.
In order to make both titles appear on the same line, we change the subtitle's y position to 15, which is the same as the title's default y value:

 title: {
     text: 'Web browsers ...',
     align: 'left'
 },
 subtitle: {
     text: 'From 2008 to present',
     align: 'right',
     y: 15
 },

The following is a screenshot showing both titles aligned on the same line: In the following subsections, we will experiment with how changes in alignment for each label element affect the layout behavior of the plot area.

Title and subtitle alignments

Title and subtitle have the same layout properties; the only differences are their default values and the fact that title also has the margin setting. Specifying any value for verticalAlign changes the default automatic layout to fixed layout (it internally switches floating to true). However, manually setting the subtitle's floating property to false does not switch back to automatic layout. The following is an example of the title in automatic layout and the subtitle in fixed layout:

    title: {
       text: 'Web browsers statistics'
    },
    subtitle: {
       text: 'From 2008 to present',
       verticalAlign: 'top',
       y: 60
    },

The verticalAlign property for the subtitle is set to 'top', which switches the layout into fixed layout, and the y offset is increased to 60. The y offset pushes the subtitle's position further down. Due to the fact that the plot area is not in an automatic layout relationship to the subtitle anymore, the top border of the plot area goes above the subtitle. However, the plot area is still in automatic layout towards the title, so the title is still above the plot area:

Legend alignment

Legends show different behavior for the verticalAlign and align properties. Apart from setting the alignment to 'center', all other settings in verticalAlign and align remain in automatic positioning. The following is an example of a legend located on the right-hand side of the chart.
The verticalAlign property is switched to the middle of the chart, where the horizontal align is set to 'right':           legend: {                align: 'right',                verticalAlign: 'middle',                layout: 'vertical'          }, The layout property is assigned to 'vertical' so that it causes the items inside the legend box to be displayed in a vertical manner. As we can see, the plot area is automatically resized for the legend box: Note that the border decoration around the legend box is disabled in the newer version. To display a round border around the legend box, we can add the borderWidth and borderRadius options using the following:           legend: {                align: 'right',                verticalAlign: 'middle',                layout: 'vertical',                borderWidth: 1,                borderRadius: 3          }, Here is the legend box with a round corner border: Axis title alignment Axis titles do not use verticalAlign. Instead, they use the align setting, which is either 'low', 'middle', or 'high'. The title's margin value is the distance between the axis title and the axis line. The following is an example of showing the y-axis title rotated horizontally instead of vertically (which it is by default) and displayed on the top of the axis line instead of next to it. We also use the y property to fine-tune the title location:             yAxis: {                title: {                    text: 'Percentage %',                    rotation: 0,                    y: -15,                    margin: -70,                    align: 'high'                },                min: 0            }, The following is a screenshot of the upper-left corner of the chart showing that the title is aligned horizontally at the top of the y axis. Alternatively, we can use the offset option instead of margin to achieve the same result. Credits alignment Credits is a bit different from other label elements. 
It only supports the align, verticalAlign, x, and y properties in the credits.position property (shorthand for credits: { position: … }), and is also not affected by any spacing setting. Suppose we have a graph without a legend and we have to move the credits to the lower-left area of the chart; the following code snippet shows how to do it:

            legend: {
                enabled: false
            },
            credits: {
                position: {
                   align: 'left'
                },
                text: 'Joe Kuan',
                href: 'http://joekuan.wordpress.com'
            },

However, the credits text is off the edge of the chart, as shown in the following screenshot: Even if we move the credits label to the right with x positioning, the label is still a bit too close to the x axis interval label. We can introduce extra spacingBottom to put a gap between both labels, as follows:

            chart: {
                   spacingBottom: 30,
                    ....
            },
            credits: {
                position: {
                   align: 'left',
                   x: 20,
                   y: -7
                },
            },
            ....

The following is a screenshot of the credits with the final adjustments:

Experimenting with an automatic layout

In this section, we will examine the automatic layout feature in more detail. For the sake of simplifying the example, we will start with only the chart title and without any chart spacing settings:

     chart: {
         renderTo: 'container',
         // border and plotBorder settings
         borderWidth: 2,
         .....
     },
     title: {
            text: 'Web browsers statistics',
     },

From the preceding example, the chart title should appear as expected between the container and the plot area's borders: The space between the title and the top border of the container has the default setting spacingTop for the spacing area (a default value of 10 pixels high).
The gap between the title and the top border of the plot area is the default setting for title.margin, which is 15 pixels high. By setting spacingTop in the chart section to 0, the chart title moves up next to the container top border. Hence the size of the plot area is automatically expanded upwards, as follows: Then, we set title.margin to 0; the plot area border moves further up, hence the height of the plot area increases further, as follows: As you may notice, there is still a gap of a few pixels between the top border and the chart title. This is actually due to the default value of the title's y position setting, which is 15 pixels, large enough for the default title font size. The following is the chart configuration for setting all the spaces between the container and the plot area to 0:

chart: {
     renderTo: 'container',
     // border and plotBorder settings
     .....
     spacingTop: 0
},
title: {
     text: null,
     margin: 0,
     y: 0
}

If we set title.y to 0, all the gap between the top edge of the plot area and the top container edge closes up. The following is the final screenshot of the upper-left corner of the chart, to show the effect. The chart title is not visible anymore as it has been shifted above the container: Interestingly, if we work backwards to the first example, the default distance between the top of the plot area and the top of the container is calculated as:

spacingTop + title.margin + title.y = 10 + 15 + 15 = 40

Therefore, changing any of these three variables will automatically adjust the plot area from the top container bar. Each of these offset variables actually has its own purpose in the automatic layout. Spacing is for the gap between the container and the chart content; thus, if we want to display a chart nicely spaced with other elements on a web page, spacing elements should be used. Equally, if we want to use a specific font size for the label elements, we should consider adjusting the y offset.
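The additive relationship above can be checked with a few lines of plain JavaScript. The default values are the ones quoted in the text; the helper function is ours, introduced only to make the arithmetic explicit.

```javascript
// Offsets separating the plot area's top edge from the container's top
// edge in automatic layout (defaults quoted in the text above).
var defaults = {
    spacingTop: 10,   // chart.spacingTop default
    titleMargin: 15,  // title.margin default
    titleY: 15        // title.y default
};

// Hypothetical helper: total distance from container top to plot area top.
function topDistance(opts) {
    return opts.spacingTop + opts.titleMargin + opts.titleY;
}

console.log(topDistance(defaults)); // 40
// Zeroing all three closes the gap completely, as in the example above:
console.log(topDistance({ spacingTop: 0, titleMargin: 0, titleY: 0 })); // 0
```

This also makes clear why zeroing only spacingTop and title.margin still left a few pixels of gap: the remaining 15 pixels came from title.y.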
Hence, the labels are still maintained at a distance and do not interfere with other components in the chart. Experimenting with a fixed layout In the preceding section, we have learned how the plot area dynamically adjusted itself. In this section, we will see how we can manually position the chart labels. First, we will start with the example code from the beginning of the Experimenting with automatic layout section and set the chart title's verticalAlign to 'bottom', as follows: chart: {    renderTo: 'container',    // border and plotBorder settings    .....},title: {    text: 'Web browsers statistics',    verticalAlign: 'bottom'}, The chart title is moved to the bottom of the chart, next to the lower border of the container. Notice that this setting has changed the title into floating mode; more importantly, the legend still remains in the default automatic layout of the plot area: Be aware that we haven't specified spacingBottom, which has a default value of 15 pixels in height when applied to the chart. This means that there should be a gap between the title and the container bottom border, but none is shown. This is because the title.y position has a default value of 15 pixels in relation to spacing. According to the diagram in the Chart label properties section, this positive y value pushes the title towards the bottom border; this compensates for the space created by spacingBottom. Let's make a bigger change to the y offset position this time to show that verticalAlign is floating on top of the plot area:  title: {     text: 'Web browsers statistics',     verticalAlign: 'bottom',     y: -90 }, The negative y value moves the title up, as shown here: Now the title is overlapping the plot area. 
To demonstrate that the legend is still in automatic layout with regard to the plot area, here we change the legend's y position and the margin settings, which is the distance from the axis label:                legend: {                   margin: 70,                   y: -10               }, This has pushed up the bottom side of the plot area. However, the chart title still remains in fixed layout and its position within the chart hasn't been changed at all after applying the new legend setting, as shown in the following screenshot: By now, we should have a better understanding of how to position label elements, and their layout policy relating to the plot area. Framing the chart with axes In this section, we are going to look into the configuration of axes in Highcharts in terms of their functional area. We will start off with a plain line graph and gradually apply more options to the chart to demonstrate the effects. Accessing the axis data type There are two ways to specify data for a chart: categories and series data. For displaying intervals with specific names, we should use the categories field that expects an array of strings. Each entry in the categories array is then associated with the series data array. Alternatively, the axis interval values are embedded inside the series data array. Then, Highcharts extracts the series data for both axes, interprets the data type, and formats and labels the values appropriately. 
The following is a straightforward example showing the use of categories:     chart: {        renderTo: 'container',        height: 250,        spacingRight: 20    },    title: {        text: 'Market Data: Nasdaq 100'    },    subtitle: {        text: 'May 11, 2012'    },    xAxis: {        categories: [ '9:30 am', '10:00 am', '10:30 am',                       '11:00 am', '11:30 am', '12:00 pm',                       '12:30 pm', '1:00 pm', '1:30 pm',                       '2:00 pm', '2:30 pm', '3:00 pm',                       '3:30 pm', '4:00 pm' ],         labels: {             step: 3         }     },     yAxis: {         title: {             text: null         }     },     legend: {         enabled: false     },     credits: {         enabled: false     },     series: [{         name: 'Nasdaq',         color: '#4572A7',         data: [ 2606.01, 2622.08, 2636.03, 2637.78, 2639.15,                 2637.09, 2633.38, 2632.23, 2632.33, 2632.59,                 2630.34, 2626.89, 2624.59, 2615.98 ]     }] The preceding code snippet produces a graph that looks like the following screenshot: The first name in the categories field corresponds to the first value, 9:30 am, 2606.01, in the series data array, and so on. Alternatively, we can specify the time values inside the series data and use the type property of the x axis to format the time. The type property supports 'linear' (default), 'logarithmic', or 'datetime'. The 'datetime' setting automatically interprets the time in the series data into human-readable form. Moreover, we can use the dateTimeLabelFormats property to predefine the custom format for the time unit. The option can also accept multiple time unit formats. This is for when we don't know in advance how long the time span is in the series data, so each unit in the resulting graph can be per hour, per day, and so on. The following example shows how the graph is specified with predefined hourly and minute formats. 
The syntax of the format string is based on the PHP strftime function:     xAxis: {         type: 'datetime',          // Format 24 hour time to AM/PM          dateTimeLabelFormats: {                hour: '%I:%M %P',              minute: '%I %M'          }               },     series: [{         name: 'Nasdaq',         color: '#4572A7',         data: [ [ Date.UTC(2012, 4, 11, 9, 30), 2606.01 ],                  [ Date.UTC(2012, 4, 11, 10), 2622.08 ],                   [ Date.UTC(2012, 4, 11, 10, 30), 2636.03 ],                  .....                ]     }] Note that the x axis is in the 12-hour time format, as shown in the following screenshot: Instead, we can define the format handler for the xAxis.labels.formatter property to achieve a similar effect. Highcharts provides a utility routine, Highcharts.dateFormat, that converts the timestamp in milliseconds to a readable format. In the following code snippet, we define the formatter function using dateFormat and this.value. The keyword this is the axis's interval object, whereas this.value is the UTC time value for the instance of the interval:     xAxis: {         type: 'datetime',         labels: {             formatter: function() {                 return Highcharts.dateFormat('%I:%M %P', this.value);             }         }     }, Since the time values of our data points are in fixed intervals, they can also be arranged in a cut-down version. All we need is to define the starting point of time, pointStart, and the regular interval between them, pointInterval, in milliseconds: series: [{     name: 'Nasdaq',     color: '#4572A7',     pointStart: Date.UTC(2012, 4, 11, 9, 30),     pointInterval: 30 * 60 * 1000,     data: [ 2606.01, 2622.08, 2636.03, 2637.78,             2639.15, 2637.09, 2633.38, 2632.23,             2632.33, 2632.59, 2630.34, 2626.89,             2624.59, 2615.98 ] }] Adjusting intervals and background We have learned how to use axis categories and series data arrays in the last section. 
In this section, we will see how to format interval lines and the background style to produce a graph with more clarity. We will continue from the previous example. First, let's create some interval lines along the y axis. In the chart, the interval is automatically set to 20. However, it would be clearer to double the number of interval lines. To do that, simply assign the tickInterval value to 10. Then, we use minorTickInterval to put another line in between the intervals to indicate a semi-interval. In order to distinguish between interval and semi-interval lines, we set the semi-interval lines, minorGridLineDashStyle, to a dashed and dotted style. There are nearly a dozen line style settings available in Highcharts, from 'Solid' to 'LongDashDotDot'. Readers can refer to the online manual for possible values. The following is the first step to create the new settings:             yAxis: {                 title: {                     text: null                 },                 tickInterval: 10,                 minorTickInterval: 5,                 minorGridLineColor: '#ADADAD',                 minorGridLineDashStyle: 'dashdot'            } The interval lines should look like the following screenshot: To make the graph even more presentable, we add a striping effect with shading using alternateGridColor. Then, we change the interval line color, gridLineColor, to a similar range with the stripes. 
The following code snippet is added into the yAxis configuration:                 gridLineColor: '#8AB8E6',                 alternateGridColor: {                     linearGradient: {                         x1: 0, y1: 1,                         x2: 1, y2: 1                     },                     stops: [ [0, '#FAFCFF' ],                              [0.5, '#F5FAFF'] ,                              [0.8, '#E0F0FF'] ,                              [1, '#D6EBFF'] ]                   } The following is the graph with the new shading background: The next step is to apply a more professional look to the y axis line. We are going to draw a line on the y axis with the lineWidth property, and add some measurement marks along the interval lines with the following code snippet:                  lineWidth: 2,                  lineColor: '#92A8CD',                  tickWidth: 3,                  tickLength: 6,                  tickColor: '#92A8CD',                  minorTickLength: 3,                  minorTickWidth: 1,                  minorTickColor: '#D8D8D8' The tickWidth and tickLength properties add the effect of little marks at the start of each interval line. We apply the same color on both the interval mark and the axis line. Then we add the ticks minorTickLength and minorTickWidth into the semi-interval lines in a smaller size. 
This gives a nice measurement mark effect along the axis, as shown in the following screenshot: Now, we apply a similar polish to the xAxis configuration, as follows:            xAxis: {                type: 'datetime',                labels: {                    formatter: function() {                        return Highcharts.dateFormat('%I:%M %P', this.value);                    },                },                gridLineDashStyle: 'dot',                gridLineWidth: 1,                tickInterval: 60 * 60 * 1000,                lineWidth: 2,                lineColor: '#92A8CD',                tickWidth: 3,                tickLength: 6,                tickColor: '#92A8CD',            }, We set the x axis interval lines to the hourly format and switch the line style to a dotted line. Then, we apply the same color, thickness, and interval ticks as on the y axis. The following is the resulting screenshot: However, there are some defects along the x axis line. To begin with, the meeting point between the x axis and y axis lines does not align properly. Secondly, the interval labels at the x axis are touching the interval ticks. Finally, part of the first data point is covered by the y-axis line. The following is an enlarged screenshot showing the issues: There are two ways to resolve the axis line alignment problem, as follows: Shift the plot area 1 pixel away from the x axis. This can be achieved by setting the offset property of xAxis to 1. Increase the x-axis line width to 3 pixels, which is the same width as the y-axis tick interval. As for the x-axis label, we can simply solve the problem by introducing the y offset value into the labels setting. Finally, to avoid the first data point touching the y-axis line, we can impose minPadding on the x axis. What this does is to add padding space at the minimum value of the axis, the first point. The minPadding value is based on the ratio of the graph width. 
In this case, setting the property to 0.02 is equivalent to shifting along the x axis 5 pixels to the right (250 px * 0.02). The following are the additional settings to improve the chart:

    xAxis: {
        ....
        labels: {
            formatter: ...,
            y: 17
        },
        .....
        minPadding: 0.02,
        offset: 1
    }

The following screenshot shows that the issues have been addressed:

As we can see, Highcharts has a comprehensive set of configurable variables with great flexibility.

Using plot lines and plot bands

In this section, we are going to see how we can use Highcharts to place lines or bands along the axis. We will continue with the example from the previous section. Let's draw a couple of lines to indicate the day's highest and lowest index points on the y axis. The plotLines field accepts an array of object configurations, one for each plot line. There are no default width and color values for plotLines, so we need to specify them explicitly in order to see the lines. The following is the code snippet for the plot lines:

    yAxis: {
        ... ,
        plotLines: [{
            value: 2606.01,
            width: 2,
            color: '#821740',
            label: {
                text: 'Lowest: 2606.01',
                style: {
                    color: '#898989'
                }
            }
        }, {
            value: 2639.15,
            width: 2,
            color: '#4A9338',
            label: {
                text: 'Highest: 2639.15',
                style: {
                    color: '#898989'
                }
            }
        }]
    }

The following screenshot shows what it should look like:

We can improve the look of the chart slightly. First, the text label for the top plot line is not next to the highest point.
Second, the label for the bottom line is partly covered by the series and interval lines, as follows:

To resolve these issues, we can assign the plot lines' zIndex to 1, which brings the text labels above the interval lines. We also set the x position of each label to shift the text next to its point. The following are the new changes:

    plotLines: [{
        ... ,
        label: {
            ... ,
            x: 25
        },
        zIndex: 1
    }, {
        ... ,
        label: {
            ... ,
            x: 130
        },
        zIndex: 1
    }]

The following graph shows the labels have been moved away from the plot lines and over the interval lines:

Now, we are going to replace the plot lines in the preceding example with a plot band area that shows the index change between the market's opening and closing values. The plot band configuration is very similar to a plot line's, except that it uses to and from properties, and the color property accepts either gradient settings or a color code. We create a plot band with a triangle text symbol and values to signify a positive close.
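The band configured next runs from the day's open (2606.01) to its close (2615.98), and the label text it carries can be derived from those two values rather than hard-coded. A small sketch; bandLabel is an illustrative helper, not part of the Highcharts API:

```javascript
// Build a plot band label such as "▲ 9.97 (0.38%)" from open/close values.
function bandLabel(open, close) {
  var diff = close - open;
  var arrow = diff >= 0 ? '\u25B2' : '\u25BC'; // ▲ up, ▼ down
  return arrow + ' ' + Math.abs(diff).toFixed(2) +
         ' (' + Math.abs(diff / open * 100).toFixed(2) + '%)';
}

var label = bandLabel(2606.01, 2615.98); // feeds the plot band's label.text
```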
Instead of using the x and y properties to fine-tune the label position, we use the align option to center the text in the plot area (replace the plotLines setting from the preceding example):

    plotBands: [{
        from: 2606.01,
        to: 2615.98,
        label: {
            text: '▲ 9.97 (0.38%)',
            align: 'center',
            style: {
                color: '#007A3D'
            }
        },
        zIndex: 1,
        color: {
            linearGradient: {
                x1: 0, y1: 1,
                x2: 1, y2: 1
            },
            stops: [ [0, '#EBFAEB'],
                     [0.5, '#C2F0C2'],
                     [0.8, '#ADEBAD'],
                     [1, '#99E699'] ]
        }
    }]

The triangle is an alt-code character; hold down the left Alt key and enter 30 on the number keypad. See http://www.alt-codes.net for more details.

This produces a chart with a green plot band highlighting a positive close in the market, as shown in the following screenshot:

Extending to multiple axes

Previously, we ran through most of the axis configurations. Here, we explore how to use multiple axes, which are just arrays of axis configuration objects. Continuing from the stock market example, suppose we now want to include another market index, Dow Jones, alongside Nasdaq. However, the two indices are different in nature, so their value ranges are vastly different. First, let's examine the outcome of displaying both indices on a common y axis. We change the title, remove the fixed interval setting on the y axis, and include data for another series:

    chart: ...
    ,
    title: {
        text: 'Market Data: Nasdaq & Dow Jones'
    },
    subtitle: ... ,
    xAxis: ... ,
    credits: ... ,
    yAxis: {
        title: {
            text: null
        },
        minorGridLineColor: '#D8D8D8',
        minorGridLineDashStyle: 'dashdot',
        gridLineColor: '#8AB8E6',
        alternateGridColor: {
            linearGradient: {
                x1: 0, y1: 1,
                x2: 1, y2: 1
            },
            stops: [ [0, '#FAFCFF'],
                     [0.5, '#F5FAFF'],
                     [0.8, '#E0F0FF'],
                     [1, '#D6EBFF'] ]
        },
        lineWidth: 2,
        lineColor: '#92A8CD',
        tickWidth: 3,
        tickLength: 6,
        tickColor: '#92A8CD',
        minorTickLength: 3,
        minorTickWidth: 1,
        minorTickColor: '#D8D8D8'
    },
    series: [{
        name: 'Nasdaq',
        color: '#4572A7',
        data: [ [ Date.UTC(2012, 4, 11, 9, 30), 2606.01 ],
                [ Date.UTC(2012, 4, 11, 10), 2622.08 ],
                [ Date.UTC(2012, 4, 11, 10, 30), 2636.03 ],
                ...
              ]
    }, {
        name: 'Dow Jones',
        color: '#AA4643',
        data: [ [ Date.UTC(2012, 4, 11, 9, 30), 12598.32 ],
                [ Date.UTC(2012, 4, 11, 10), 12538.61 ],
                [ Date.UTC(2012, 4, 11, 10, 30), 12549.89 ],
                ...
              ]
    }]

The following is the chart showing both market indices:

As expected, the intraday index movements have been flattened by the vast difference in value between the two indices.
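The fix, covered next, gives each index its own axis: yAxis becomes an array, and each series selects its axis with a numeric yAxis index into that array. A minimal plain-object sketch of the pairing used in this example:

```javascript
// Two axis configurations; the second sits on the opposite side of the chart.
var yAxis = [
  { title: { text: 'Nasdaq' } },
  { title: { text: 'Dow Jones' }, opposite: true }
];

// Each series points at its axis by array index.
var series = [
  { name: 'Nasdaq',    yAxis: 0, data: [] },
  { name: 'Dow Jones', yAxis: 1, data: [] }
];

// Resolve which axis title a series is plotted against.
function axisTitleFor(s) {
  return yAxis[s.yAxis].title.text;
}
```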
Both lines look roughly straight, which falsely implies that the indices have hardly changed. Let us now explore putting the two indices onto separate y axes. We should remove any background decoration on the y axis, because we now have different ranges of data sharing the same background. The following is the new setup for yAxis:

    yAxis: [{
        title: {
            text: 'Nasdaq'
        }
    }, {
        title: {
            text: 'Dow Jones'
        },
        opposite: true
    }],

Now yAxis is an array of axis configurations. The first entry in the array is for Nasdaq and the second is for Dow Jones. This time, we display the axis titles to distinguish between them. The opposite property puts the Dow Jones y axis onto the other side of the graph for clarity; otherwise, both y axes would appear on the left-hand side. The next step is to map entries in the y-axis array to the series data array, as follows:

    series: [{
        name: 'Nasdaq',
        color: '#4572A7',
        yAxis: 0,
        data: [ ... ]
    }, {
        name: 'Dow Jones',
        color: '#AA4643',
        yAxis: 1,
        data: [ ... ]
    }]

We can clearly see the movement of the indices in the new graph, as follows:

Moreover, we can improve the final view by color-matching the series to the axis lines. The Highcharts.getOptions().colors property contains the list of default colors for series, so we use the first two entries for our indices. Another improvement is to set maxPadding for the x axis, because the new y-axis line covers parts of the data points at the high end of the x axis:

    xAxis: {
        ...
        ,
        minPadding: 0.02,
        maxPadding: 0.02
    },
    yAxis: [{
        title: {
            text: 'Nasdaq'
        },
        lineWidth: 2,
        lineColor: '#4572A7',
        tickWidth: 3,
        tickLength: 6,
        tickColor: '#4572A7'
    }, {
        title: {
            text: 'Dow Jones'
        },
        opposite: true,
        lineWidth: 2,
        lineColor: '#AA4643',
        tickWidth: 3,
        tickLength: 6,
        tickColor: '#AA4643'
    }],

The following screenshot shows the improved look of the chart:

We can extend the preceding example to more than a couple of axes simply by adding entries to the yAxis and series arrays and mapping them together. The following screenshot shows a 4-axis line graph:

Summary

In this article, the major configuration components were discussed and experimented with, and examples were shown. By now, we should be comfortable with what we have covered and ready to plot some basic graphs with more elaborate styles.

Resources for Article:

Further resources on this subject:
- Theming with Highcharts [article]
- Integrating with other Frameworks [article]
- Highcharts [article]

Packt
21 Jan 2015
9 min read

ServiceStack applications

In this article by Kyle Hodgson and Darren Reid, authors of the book ServiceStack 4 Cookbook, we'll learn about unit testing ServiceStack applications. (For more resources related to this topic, see here.)

Unit testing ServiceStack applications

In this recipe, we'll focus on simple techniques to test individual units of code within a ServiceStack application. We will use the ServiceStack testing helper BasicAppHost as an application container, as it provides us with some useful helpers to inject a test double for our database. Our goal is small, fast tests that each exercise one unit of code within our application.

Getting ready

We are going to need some services to test, so we are going to use the PlacesToVisit application.

How to do it…

Create a new testing project. It's a common convention to name the testing project <ProjectName>.Tests, so in our case, we'll call it PlacesToVisit.Tests. Create a class within this project to contain the tests we'll write; let's name it PlaceServiceTests, as the tests within it will focus on the PlaceService class. Annotate this class with the [TestFixture] attribute, as follows:

    [TestFixture]
    public class PlaceServiceTests
    {

We'll want one method that runs when this set of tests begins, to set up the environment, and another one that runs afterwards to tear the environment down. These will be annotated with the NUnit attributes TestFixtureSetUp and TestFixtureTearDown, respectively. Let's name them FixtureInit and FixtureTearDown. In the FixtureInit method, we will use BasicAppHost to initialize our appHost test container.
We'll make it a field so that we can easily access it in each test, as follows:

    ServiceStackHost appHost;

    [TestFixtureSetUp]
    public void FixtureInit()
    {
        appHost = new BasicAppHost(typeof(PlaceService).Assembly)
        {
            ConfigureContainer = container =>
            {
                container.Register<IDbConnectionFactory>(c =>
                    new OrmLiteConnectionFactory(
                        ":memory:", SqliteDialect.Provider));
                container.RegisterAutoWiredAs<PlacesToVisitRepository,
                    IPlacesToVisitRepository>();
            }
        }.Init();
    }

The ConfigureContainer property on BasicAppHost allows us to pass in a function that we want AppHost to run inside of the Configure method. In this case, you can see that we're registering OrmLiteConnectionFactory with an in-memory SQLite instance. This allows us to test code that uses a database without that database actually running. This useful technique could be considered a classic unit testing approach; the mockist approach might have been to mock the database instead. The FixtureTearDown method will dispose of appHost, as you might imagine. This is how the code will look:

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        appHost.Dispose();
    }

We haven't created any data in our in-memory database yet. We'll want to ensure the data is the same prior to each test, so our TestInit method is a good place to do that. It will run before each and every test, as we'll annotate it with the [SetUp] attribute, as follows:

    [SetUp]
    public void TestInit()
    {
        using (var db = appHost.Container
            .Resolve<IDbConnectionFactory>().Open())
        {
            db.DropAndCreateTable<Place>();
            db.InsertAll(PlaceSeedData.GetSeedPlaces());
        }
    }

As our tests all focus on PlaceService, we'll make sure to create Place data. Next, we'll begin writing tests. Let's start with one that asserts that we can create new places.
The first step is to create the new method, name it appropriately, and annotate it with the [Test] attribute, as follows:

    [Test]
    public void ShouldAddNewPlaces()
    {

Next, we'll create an instance of PlaceService that we can test against. We'll use the Funq IoC TryResolve method for this:

    var placeService = appHost.TryResolve<PlaceService>();

We'll want to create a new place and then query the database later to see whether the new one was added. So, it's useful to start by getting a count of how many places there are based on just the seed data. Here's how you can get that count:

    var startingCount = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places
        .Count;

Since we're testing the ability to handle a CreatePlaceToVisit request, we'll need a test object that we can send to the service. Let's create one and then go ahead and post it:

    var melbourne = new CreatePlaceToVisit
    {
        Name = "Melbourne",
        Description = "A nice city to holiday"
    };

    placeService.Post(melbourne);

Having done that, we can get the updated count and then assert that there is one more item in the database than there was before:

    var newCount = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places
        .Count;
    Assert.That(newCount == startingCount + 1);

Next, let's fetch the new record that was created and make an assertion that it's the one we want:

    var newPlace = placeService.Get(new PlaceToVisitRequest
    {
        Id = startingCount + 1
    });
    Assert.That(newPlace.Place.Name == melbourne.Name);
    }

With this in place, if we run the test, we'll expect it to pass both assertions. This proves that we can add new places via PlaceService registered with Funq, and that when we do so we can retrieve them later as expected. We can also build a similar test that asserts on our ability to update an existing place. Adding the code is simple, following the pattern we set out previously.
We'll start with the arrange section of the test, creating the variables and objects we'll need:

    [Test]
    public void ShouldUpdateExistingPlaces()
    {
        var placeService = appHost.TryResolve<PlaceService>();
        var startingPlaces = placeService
            .Get(new AllPlacesToVisitRequest())
            .Places;
        var startingCount = startingPlaces.Count;

        var canberra = startingPlaces
            .First(c => c.Name.Equals("Canberra"));

        const string canberrasNewName = "Canberra, ACT";
        canberra.Name = canberrasNewName;

Once they're in place, we'll act. In this case, the Put method on placeService is responsible for update operations:

    placeService.Put(canberra.ConvertTo<UpdatePlaceToVisit>());

Think of the ConvertTo helper method from ServiceStack as an auto-mapper that converts our Place object for us. Now that we've updated the record for Canberra, we'll proceed to the assert section of the test, as follows:

    var updatedPlaces = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places;
    var updatedCanberra = updatedPlaces
        .First(p => p.Id.Equals(canberra.Id));
    var updatedCount = updatedPlaces.Count;

    Assert.That(updatedCanberra.Name == canberrasNewName);
    Assert.That(updatedCount == startingCount);
    }

How it works…

These unit tests use a few different patterns that help us write concise tests, including the development of our own test helpers and the use of helpers from the ServiceStack.Testing namespace. For instance, BasicAppHost allows us to set up an application host instance without actually hosting a web service.
It also lets us provide a custom ConfigureContainer action to mock any of our services' dependencies and seed our testing data, as follows:

    appHost = new BasicAppHost(typeof(PlaceService).Assembly)
    {
        ConfigureContainer = container =>
        {
            container.Register<IDbConnectionFactory>(c =>
                new OrmLiteConnectionFactory(
                    ":memory:", SqliteDialect.Provider));
            container.RegisterAutoWiredAs<PlacesToVisitRepository,
                IPlacesToVisitRepository>();
        }
    }.Init();

To test any ServiceStack service, you can resolve it through the application host via TryResolve<ServiceType>(). This will have the IoC container instantiate an object of the requested type. This gives us the ability to test the Get method independently of other aspects of our web service, such as validation. This is shown in the following code:

    var placeService = appHost.TryResolve<PlaceService>();

In this example, we are using an in-memory SQLite instance to mock our use of OrmLite for data access, which IPlacesToVisitRepository also uses, and we seed our test data in the ConfigureContainer hook of BasicAppHost. The combination of in-memory SQLite and BasicAppHost provides fast unit tests that let us iterate on our application services very quickly while ensuring we are not breaking any functionality associated with this component. In the example provided, we are running three tests in less than 100 milliseconds. If you are using the full version of Visual Studio, extensions such as NCrunch allow you to run your unit tests continually while you make changes to your code. The performance of ServiceStack components, combined with such extensions, results in a smooth developer experience with both productivity and code quality.

There's more…

In the examples in this article, we wrote tests that would pass, ran them, and saw that they passed (no surprise). While this makes explaining things a bit simpler, it's not really a best practice.
You generally want to make sure your tests fail when presented with wrong data at some point. The authors have seen many cases where subtle bugs in test code caused a test to pass that should not have passed. One best practice is to write tests so that they fail first and then make them pass; this guarantees that the test can actually detect the defect you're guarding against. This is commonly referred to as the red/green/refactor pattern.

Summary

In this article, we covered some techniques to unit test ServiceStack applications.

Resources for Article:

Further resources on this subject:
- Building a Web Application with PHP and MariaDB – Introduction to caching [article]
- Web API and Client Integration [article]
- WebSockets in Wildfly [article]
Packt
20 Jan 2015
16 min read

ArcGIS Spatial Analyst

In this article by Daniela Cristiana Docan, author of ArcGIS for Desktop Cookbook, we will learn that the ArcGIS Spatial Analyst extension offers many powerful tools for geoprocessing raster data. Most of the Spatial Analyst tools generate a new raster output. Before starting a raster analysis session, it's best practice to set the main analysis environment parameter settings (for example, the scratch workspace, extent, and cell size of the output raster). In this article, you will store all raster datasets in a file geodatabase as file geodatabase raster datasets. (For more resources related to this topic, see here.)

Analyzing surfaces

In this recipe, you will represent 3D surface data in a two-dimensional environment. To represent 3D surface data in the ArcMap 2D environment, you will use hillshades and contours. You can use the hillshade raster as a background for other raster or vector data in ArcMap. Using the surface analysis tools, you can derive new surface data, such as slope, aspect, or location visibility.

Getting ready

In the surface analysis context:

- The term slope refers to the steepness of raster cells
- Aspect defines the orientation or compass direction of a cell
- Visibility identifies which raster cells are visible from a surface location

In this recipe, you will prepare your data for analysis by creating an elevation surface named Elevation from vector data. The two feature classes involved are the PointElevation point feature class and the ContourLine polyline feature class. All other output raster datasets will derive from the Elevation raster.

How to do it...

Follow these steps to prepare your data for spatial analysis:

Start ArcMap and open the existing map document, SurfaceAnalysis.mxd, from <drive>:PacktPublishingDataSpatialAnalyst. Go to Customize | Extensions and check the Spatial Analyst extension. Open ArcToolbox, right-click on the ArcToolbox toolbox, and select Environments.
Set the geoprocessing environment as follows: Workspace | Current Workspace: DataSpatialAnalystTOPO5000.gdb and Scratch Workspace: DataSpatialAnalystScratchTOPO5000.gdb. Output Coordinates: Same as Input. Raster Analysis | Cell Size: As Specified below: type 0.5 with unit as m. Mask: SpatialAnalystTOPO5000.gdbTrapezoid5k. Raster Storage | Pyramid: check Build pyramids and Pyramid levels: type 3. Click on OK. In ArcToolbox, expand Spatial Analyst Tools | Interpolation, and double-click on the Topo to Raster tool to open the dialog box. Click on Show Help to see the meaning of every parameter. Set the following parameters: Input feature data: PointElevation Field: Elevation and Type: PointElevation ContourLine Field: Elevation and Type: Contour WatercourseA Type: Lake Output surface raster: ...ScratchTOPO5000.gdbElevation Output extent (optional): ContourLine Drainage enforcement (optional): NO_ENFORCE Accept the default values for all other parameters. Click on OK. The Elevation raster is a continuous thematic raster. The raster cells are arranged in 4,967 rows and 4,656 columns. Open Layer Properties | Source of the raster and explore the following properties: Data Type (File Geodatabase Raster Dataset), Cell Size (0.5 meters) or Spatial Reference (EPSG: 3844). In the Layer Properties window, click on the Symbology tab. Select the Stretched display method for the continuous raster cell values as follows: Show: Stretched and Color Ramp: Surface. Click on OK. Explore the cell values using the following two options: Go to Layer Properties | Display and check Show MapTips Add the Spatial Analyst toolbar, and from Customize | Commands, add the Pixel Inspector tool Let's create a hillshade raster using the Elevation layer: Expand Spatial Analyst Tools | Interpolation and double-click on the Hillshade tool to open the dialog box. 
Set the following parameters: Input raster: ScratchTOPO5000.gdbElevation Output raster: ScratchTOPO5000.gdbHillshade Azimuth (optional): 315 and Altitude (optional): 45 Accept the default value for Z factor and leave the Model shadows option unchecked. Click on OK. From time to time, please ensure to save the map document as MySurfaceAnalysis.mxd at ...DataSpatialAnalyst. The Hillshade raster is a discrete thematic raster that has an associated attribute table known as Value Attribute Table (VAT). Right-click on the Hillshade raster layer and select Open Attribute Table. The Value field stores the illumination values of the raster cells based on the position of the light source. The 0 value (black) means that 25406 cells are not illuminated by the sun, and 254 value (white) means that 992 cells are entirely illuminated. Close the table. In the Table Of Contents section, drag the Hillshade layer below the Elevation layer, and use the Effects | Transparency tool to add a transparency effect for the Elevation raster layer, as shown in the following screenshot: In the next step, you will derive a raster of slope and aspect from the Elevation layer. Expand Spatial Analyst Tools | Interpolation and double-click on the Slope tool to open the dialog box. Set the following parameters: Input raster: Elevation Output raster: ScratchTOPO5000.gdbSlopePercent Output measurement (optional): PERCENT_RISE Click on OK. Symbolize the layer using the Classified method, as follows: Show: Classified. In the Classification section, click on Classify and select the Manual classification method. You will add seven classes. To add break values, right-click on the empty space of the Insert Break graph. To delete one, select the break value from the graph, and right-click to select Delete Break. Do not erase the last break value, which represents the maximum value. 
Secondly, in the Break Values section, edit the following six values: 5; 7; 15; 20; 60; 90, and leave unchanged the seventh value (496,6). Select Slope (green to red) for Color Ramp. Click on OK. The green areas represent flatter slopes, while the red areas represent steep slopes, as shown in the following screenshot: Expand Spatial Analyst Tools | Interpolation and double click on the Aspect tool to open the dialog box. Set the following parameters: Input raster: Elevation Output raster: ScratchTOPO5000.gdbAspect Click on OK. Symbolize the Aspect layer. For Classify, click on the Manual classification method. You will add five classes. To add or delete break values, right-click on the empty space of the graph, and select Insert / Delete Break. Secondly, edit the following four values: 0; 90; 180; 270, leaving unchanged the fifth value in the Break Values section. Click on OK. In the Symbology window, edit the labels of the five classes as shown in the following picture. Click on OK. In the Table Of Contents section, select the <VALUE> label, and type Slope Direction. The following screenshot is the result of this action: In the next step, you will create a raster of visibility between two geodetic points in order to plan some topographic measurements using an electronic theodolite. You will use the TriangulationPoint and Elevation layers: In the Table Of Contents section, turn on the TriangulationPoint layer, and open its attribute table to examine the fields. There are two geodetic points with the following supplementary fields: OffsetA and OffsetB. OffsetA is the proposed height of the instrument mounted on its tripod above stations 8 and 72. OffsetB is the proposed height of the reflector (or target) above the same points. Close the table. Expand Spatial Analyst Tools | Interpolation and double-click on the Visibility tool to open the dialog box. Click on Show Help to see the meaning of every parameter. 
Set the following parameters: Input raster: Elevation Input point or polyline observer features: TOPO5000.gdbGeodeticPointsTriangulationPoint Output raster: ScratchTOPO5000.gdbVisibility Analysis type (optional): OBSERVERS Observer parameters | Surface offset (optional): OffsetB Observer offset (optional): OffsetA Outer radius (optional): For this, type 1600 Notice that OffsetA and OffsetB were automatically assigned. The Outer radius parameter limits the search distance, and it is the rounded distance between the two geodetic points. All other cells beyond the 1,600-meter radius will be excluded from the visibility analysis. Click on OK. Open the attribute table of the Visibility layer to inspect the fields and values. The Value field stores the value of cells. Value 0 means that cells are not visible from the two points. Value 1 means that 6,608,948 cells are visible only from point 8 (first observer OBS1). Value 2 means that 1,813,578 cells are visible only from point 72 (second observer OBS2). Value 3 means that 4,351,861 cells are visible from both points. In conclusion, there is visibility between the two points if the height of the instrument and reflector is 1.5 meters. Close the table. Symbolize the Visibility layer, as follows: Show: Unique Values and Value Field: Value. Click on Add All Values and choose Color Scheme: Yellow-Green Bright. Select <Heading> and change the Label value to Height 1.5 meters. Double-click on the symbol for Value as 0, and select No Color. Click on OK. The Visibility layer is symbolized as shown in the following screenshot: Turn off all layers except the Visibility, TriangulationPoint, and Hillshade layers. Save your map as MySurfaceAnalysis.mxd and close ArcMap. You can find the final results at <drive>:PacktPublishingDataSpatialAnalystSurfaceAnalysis. How it works... You have started the exercise by setting the geoprocessing environment. You will override those settings in the next recipes. 
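The OBSERVERS values in the attribute table (0, 1, 2, 3) behave like a bit field: bit 0 marks visibility from the first observer (point 8) and bit 1 marks visibility from the second (point 72). A decoding sketch, written in JavaScript purely for illustration:

```javascript
// Decode a Visibility cell value produced with Analysis type OBSERVERS.
// Bit 0 -> first observer (OBS1), bit 1 -> second observer (OBS2).
function visibleFrom(cellValue) {
  var observers = [];
  if (cellValue & 1) observers.push('OBS1');
  if (cellValue & 2) observers.push('OBS2');
  return observers;
}
```

So a cell value of 3 decodes to both observers, matching the 4,351,861 cells visible from both points in the attribute table above.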
At the application level, you chose to build pyramids. By creating pyramids, your raster will be displayed faster when you zoom out. The pyramid levels contain the copy of the original raster at a low resolution. The original raster will have a cell size of 0.5 meters. The pixel size will double at each level of the pyramid, so the first level will have a cell size of 1 meter; the second level will have a cell size of 2 meters; and the third level will have a cell size of 4 meters. Even if the values of cells refer to heights measured above the local mean sea level (zero-level surface), you should consider the planimetric accuracy of the dataset. Please remember that TOPO5000.gdb refers to a product at the scale 1:5,000. This is the reason why you have chosen 0.5 meters for the raster cell size. At step 4, you used the PointElevation layer as supplementary data when you created the Elevation raster. If one of your ArcToolbox tools fails to execute or you have obtained an empty raster output, you have some options here: Open the Results dialog from the Geoprocessing menu to explore the error report. This will help you to identify the parameter errors. Right-click on the previous execution of the tool and choose Open (step 1). Change the parameters and click on OK to run the tool. Choose Re Run if you want to run the tool with the parameters unchanged (step 2) as shown in the following screenshot: Run the ArcToolbox tool from the ArcCatalog application. Before running the tool, check the geoprocessing environment in ArcCatalog by navigating to Geoprocessing | Environments. There's more... What if you have a model with all previous steps? Open ArcCatalog, and go to ...DataSpatialAnalystModelBuilder. In the ModelBuilder folder, you have a toolbox named MyToolbox, which contains the Surface Analysis model. Right-click on the model and select Properties. Take your time to study the information from the General, Parameters, and Environments tabs. 
The output (derived data) will be saved in Scratch Workspace: ModelBuilder ScratchTopo5000.gdb. Click on OK to close the Surface Analysis Properties window. Running the entire model will take you around 25 minutes. You have two options: Tool dialog option: Right-click on the Surface Analysis model and select Open. Notice the model parameters that you can modify and read the Help information. Click on OK to run the model. Edit mode: Right-click on the Surface Analysis model and select Edit. The colored model elements are in the second state—they are ready to run the Surface Analysis model by using one of those two options: To run the entire model at the same time, select Run Entire Model from the Model menu. To run the tools (yellow rounded rectangle) one by one, select the Topo to Raster tool with the Select tool, and click on the Run tool from the Standard toolbar. Please remember that a shadow behind a tool means that the model element has already been run. You used the Visibility tool to check the visibility between two points with 1.5 meters for the Observer offset and Surface offset parameters. Try yourself to see what happens if the offset value is less than 1.5 meters. To again run the Visibility tool in the Edit mode, right-click on the tool, and select Open. For Surface offset and Observer offset, type 0.5 meters and click on OK to run the tool. Repeat these steps for a 1 meter offset. Interpolating data Spatial interpolation is the process of estimating an unknown value between two known values taking into account Tobler's First Law: "Everything is related to everything else, but near things are more related than distant things." This recipe does not undertake to teach you the advanced concept of interpolation because it is too complex for this book. 
Instead, this recipe will guide you through creating a terrain surface using the following:

A feature class with sample elevation points

Two interpolation methods: Inverse Distance Weighted (IDW) and Spline

For further research, please refer to Geographic Information Analysis, David O'Sullivan and David Unwin, John Wiley & Sons, Inc., 2003, specifically section 8.3, Spatial interpolation, of Chapter 8, Describing and Analyzing Fields, pp. 220-234.

Getting ready

In this recipe, you will create a terrain surface stored as a raster, using the PointElevation sample points. Your sample data has the following characteristics:

The average distance between points is 150 meters

The density of sample points is not the same over the entire area of interest

There are not enough points to define the cliffs and the depressions

There are no extreme differences in elevation values

How to do it...

Follow these steps to create a terrain surface using the IDW tool:

Start ArcMap and open the existing map document Interpolation.mxd from <drive>:\PacktPublishing\Data\SpatialAnalyst. Set the geoprocessing environment as follows: Workspace | Current Workspace: ...\Data\SpatialAnalyst\TOPO5000.gdb and Scratch Workspace: ...\Data\SpatialAnalyst\ScratchTOPO5000.gdb; Output Coordinates: Same as the PointElevation layer; Raster Analysis | Cell Size: As Specified Below: 1; Mask: ...\Data\SpatialAnalyst\TOPO5000.gdb\Trapezoid5k.

In the next two steps, you will use the IDW tool. Running IDW with barrier polyline features will take you around 15 minutes. In ArcToolbox, go to Spatial Analyst Tools | Interpolation and double-click on the IDW tool. Click on Show Help to see the meaning of every parameter.
Set the following parameters: Input point features: PointElevation; Z value field: Elevation; Output raster: ScratchTOPO5000.gdb\IDW_1; Power (optional): 0.5; Search radius (optional): Variable; Search Radius Settings | Number of points: 6, Maximum distance: 500; Input barrier polyline features (optional): TOPO5000.gdb\Hydrography\WatercourseL. Accept the default value of Output cell size (optional). Click on OK.

Repeat step 3 with the following parameters: Input point features: PointElevation; Z value field: Elevation; Output raster: ScratchTOPO5000.gdb\IDW_2; Power (optional): 2. The rest of the parameters are the same as in step 3. Click on OK.

Symbolize the IDW_1 and IDW_2 layers as follows: Show: Classified; Classification: Equal Interval: 10 classes; Color Scheme: Surface. Click on OK. You should obtain the following results:

In the following steps, you will use the Spline tool to generate the terrain surface. In ArcToolbox, go to Spatial Analyst Tools | Interpolation and double-click on the Spline tool. Set the following parameters: Input point features: PointElevation; Z value field: Elevation; Output raster: ScratchTOPO5000.gdb\Spline_Regular; Spline type (optional): REGULARIZED; Weight (optional): 0.1; Number of points (optional): 6. Accept the default value of Output cell size (optional). Click on OK.

Run the Spline tool again with the following parameters: Input point features: PointElevation; Z value field: Elevation; Output raster: ScratchTOPO5000.gdb\Spline_Tension; Spline type (optional): TENSION; Weight (optional): 0.1; Number of points (optional): 6. Accept the default value of Output cell size (optional). Click on OK.

Symbolize the Spline_Regular and Spline_Tension raster layers using the Equal Interval classification method with 10 classes and the Surface color ramp.

In the next steps, you will use the Spline with Barriers tool to generate a terrain surface using an increased number of sample points.
You will transform the ContourLine layer into a point feature class and combine those new points with features from the PointElevation layer:

In ArcToolbox, go to Data Management Tools | Features and double-click on the Feature Vertices To Points tool. Set the following parameters: Input features: ContourLine; Output Feature Class: TOPO5000.gdb\Relief\ContourLine_FeatureVertices; Point type (optional): ALL. Click on OK. Inspect the attribute table of the newly created layer.

In the Catalog window, go to ...\TOPO5000.gdb\Relief and create a copy of the PointElevation feature class. Rename the new feature class ContourAndPoint. Right-click on ContourAndPoint and select Load | Load Data. Set the following parameters on the second and fourth panels: Input data: ContourLine_FeatureVertices; Target Field: Elevation; Matching Source Field: Elevation. Accept the default values for the rest of the parameters and click on Finish.

In ArcToolbox, go to Spatial Analyst Tools | Interpolation and double-click on the Spline with Barriers tool. Set the following parameters: Input point features: ContourAndPoint; Z value field: Elevation; Input barrier features (optional): TOPO5000.gdb\Hydrography\WatercourseA; Output raster: ScratchTOPO5000.gdb\Spline_WaterA; Smoothing Factor (optional): 0. Accept the default value of Output cell size (optional). Click on OK. You should obtain a terrain surface similar to what's shown here:

Explore the results by comparing the similarities and differences between the interpolated raster layers and the ContourLine vector layer. The IDW method works well with a proper density of sample points. Try to create a new surface using the IDW tool and the ContourAndPoint layer as sample points. Save your map as MyInterpolation.mxd and close ArcMap. You can find the final results at <drive>:\PacktPublishing\Data\SpatialAnalyst\Interpolation.

How it works...
The IDW method generated an averaged surface that will not pass through the known point elevation values and will not estimate values below the minimum or above the maximum sample point values. The IDW tool allows you to define polyline barriers that limit the search for sample points during interpolation. Even though the WatercourseL polyline feature class does not have elevation values, the river features can be used to interrupt the continuity of the interpolated surface. To obtain fewer averaged estimates (that is, to reduce IDW's smoothing effect), you have to:

Reduce the sample size to 6 points

Choose a variable search radius

Increase the power to 2

The Power option controls how strongly the influence of a sample point diminishes with distance: the higher the power, the faster the influence of distant points falls off. A disadvantage is that around some sample points there are small areas raised above the surrounding surface, or small hollows below it.

The Spline method generated a surface that passes through all the known point elevation values and can estimate values below the minimum or above the maximum sample point values. Because the density of points is quite low, we reduced the sample size to 6 points and defined a variable search radius of 500 meters in order to reduce the smoothing effect. The Regularized option estimates the hills or depressions that are not captured by the sample point values. The Tension option forces the interpolated values to stay closer to the sample point values.

Starting from step 12, we increased the number of sample points in order to better estimate the surface. At step 14, notice that the Spline with Barriers tool allows you to use a polygon feature class as breaks, or barriers, in the search for sample points during interpolation.

Summary

In this article, we learned about the ArcGIS Spatial Analyst extension and its tools.
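To see what the Power parameter does in practice, here is a minimal, pure-Python IDW estimator. It is a sketch of the general IDW formula only, not of the Spatial Analyst implementation, and the sample coordinates and elevations are made up for illustration:

```python
def idw_estimate(samples, x, y, power=2.0):
    """Inverse Distance Weighted estimate at (x, y) from (sx, sy, value) samples."""
    num = den = 0.0
    for sx, sy, value in samples:
        dist = ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5
        if dist == 0.0:
            return value  # exactly on a sample point: use its value
        weight = 1.0 / dist ** power
        num += weight * value
        den += weight
    return num / den

# Two hypothetical samples: 100 m elevation at x=0, 200 m at x=300; estimate at x=100.
samples = [(0.0, 0.0, 100.0), (300.0, 0.0, 200.0)]
print(idw_estimate(samples, 100.0, 0.0, power=0.5))  # ~141.4: gentle weighting, nearer the mean
print(idw_estimate(samples, 100.0, 0.0, power=2.0))  # 120.0: the nearest sample dominates
```

Raising the power from 0.5 to 2 pulls the estimate from roughly 141 m down to 120 m, much closer to the nearest sample's 100 m, which is exactly the reduced-smoothing behavior described above.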
Creating a Photo-sharing Application

Packt
16 Jan 2015
34 min read
In this article by Rob Foster, the author of CodeIgniter Web Application Blueprints, we will create a photo-sharing application. There are quite a few image-sharing websites around at the moment. They all share roughly the same structure: the user uploads an image and that image can be shared, allowing others to view it. Perhaps limits or constraints are placed on the viewing of an image, perhaps the image only remains viewable for a set period of time, or within set dates, but the general structure is the same. And I'm happy to announce that this project is exactly the same. We'll create an application allowing users to share pictures; these pictures are accessible from a unique URL. To make this app, we will create two controllers: one to process image uploading and one to process the viewing and displaying of the stored images. We'll create a language file to store the text, allowing you to support multiple languages should that be needed. We'll create all the necessary view files and a model to interface with the database. In this article, we will cover:

Design and wireframes

Creating the database

Creating the models

Creating the views

Creating the controllers

Putting it all together

So without further ado, let's get on with it.

Design and wireframes

As always, before we start building, we should take a look at what we plan to build. First, a brief description of our intent: we plan to build an app to allow the user to upload an image. That image will be stored in a folder with a unique name. A URL will also be generated containing a unique code, and the URL and code will be assigned to that image. The image can be accessed via that URL. The idea of using a unique URL to access the image is that we can control access to it, such as allowing the image to be viewed only a set number of times, or for a certain period of time only.
Anyway, to get a better idea of what's happening, let's take a look at the following site map:

So that's the site map. The first thing to notice is how simple the site is. There are only three main areas to this project. Let's go over each item and get a brief idea of what they do:

create: Imagine this as the start point. The user will be shown a simple form allowing them to upload an image. Once the user presses the Upload button, they are directed to do_upload.

do_upload: The uploaded image is validated for size and file type. If it passes, then a unique eight-character string is generated. This string is then used as the name of a folder we will make. This folder sits inside the main upload folder, and the uploaded image is saved in it. The image details (image name, folder name, and so on) are then passed to the database model, where another unique code is generated for the image URL. This unique code, the image name, and the folder name are then saved to the database. The user is then presented with a message informing them that their image has been uploaded and that a URL has been created. The user is also presented with the image they have uploaded.

go: This will take a URL provided by someone typing into a browser's address bar, or an img src tag, or some other method. The go item will look at the unique code in the URL, query the database to see if that code exists, and if so, fetch the folder name and image name and deliver the image back to the method that called it.

Now that we have a fairly good idea of the structure and form of the site, let's take a look at the wireframes of each page.

The create item

The following screenshot shows a wireframe for the create item discussed in the previous section. The user is shown a simple form allowing them to upload an image.

The do_upload item

The following screenshot shows a wireframe from the do_upload item discussed in the previous section.
The user is shown the image they have uploaded and the URL that will direct other users to that image.

The go item

The following screenshot shows a wireframe from the go item described in the previous section. The go controller takes the unique code in a URL, attempts to find it in the database table images, and if found, supplies the image associated with it. Only the image is supplied, not the actual HTML markup.

File overview

This is a relatively small project; all in all, we're only going to create seven files, which are as follows:

/path/to/codeigniter/application/models/image_model.php: This provides read/write access to the images database table. This model also takes the upload information and unique folder name (which we store the uploaded image in) from the create controller and stores them in the database.

/path/to/codeigniter/application/views/create/create.php: This provides us with an interface to display a form allowing the user to upload a file. It also displays any error messages to the user, such as wrong file type, file size too big, and so on.

/path/to/codeigniter/application/views/create/result.php: This displays the image to the user after it has been successfully uploaded, as well as the URL required to view it.

/path/to/codeigniter/application/views/nav/top_nav.php: This provides a navigation bar at the top of the page.

/path/to/codeigniter/application/controllers/create.php: This performs validation checks on the image uploaded by the user, creates a uniquely named folder to store the uploaded image, and passes this information to the model.

/path/to/codeigniter/application/controllers/go.php: This performs validation checks on the URL input by the user, looks for the unique code in the URL, and attempts to find this record in the database. If it is found, the image stored on disk is displayed.

/path/to/codeigniter/application/language/english/en_admin_lang.php: This provides language support for the application.
The file structure of the preceding seven files is as follows:

application/
├── controllers/
│   ├── create.php
│   ├── go.php
├── models/
│   ├── image_model.php
├── views/create/
│   ├── create.php
│   ├── result.php
├── views/nav/
│   ├── top_nav.php
├── language/english/
│   ├── en_admin_lang.php

Creating the database

First, we'll build the database. Copy the following MySQL code into your database:

CREATE DATABASE `imagesdb`;
USE `imagesdb`;

DROP TABLE IF EXISTS `images`;
CREATE TABLE `images` (
  `img_id` int(11) NOT NULL AUTO_INCREMENT,
  `img_url_code` varchar(10) NOT NULL,
  `img_url_created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `img_image_name` varchar(255) NOT NULL,
  `img_dir_name` varchar(8) NOT NULL,
  PRIMARY KEY (`img_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

Right, let's take a look at each item in the images table and see what it means:

img_id: This is the primary key.
img_url_code: This stores the unique code that we use to identify the image in the database.
img_url_created_at: This is the MySQL timestamp for the record.
img_image_name: This is the filename provided by the CodeIgniter upload functionality.
img_dir_name: This is the name of the directory we store the image in.

We'll also need to make amendments to the config/database.php file, namely setting the database access details, username, password, and so on. Open the config/database.php file and find the following lines:

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'your username';
$db['default']['password'] = 'your password';
$db['default']['database'] = 'imagesdb';

Edit the values in the preceding code, ensuring you substitute the values specific to your setup and situation, so enter your username, password, and so on.

Adjusting the config.php and autoload.php files

We don't actually need to adjust the config.php file in this project, as we're not really using sessions or anything like that.
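Before wiring the schema into CodeIgniter, you can sanity-check the insert-and-lookup round trip it needs to support. The sketch below uses SQLite as a stand-in for MySQL purely for illustration; the table and column names follow the schema above, while the function name and sample values are hypothetical:

```python
# Prototype of the images table round trip, with SQLite standing in for MySQL.
import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE images (
        img_id INTEGER PRIMARY KEY AUTOINCREMENT,
        img_url_code VARCHAR(10) NOT NULL,
        img_url_created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        img_image_name VARCHAR(255) NOT NULL,
        img_dir_name VARCHAR(8) NOT NULL
    )""")

def save_image(conn, image_name, img_dir_name):
    """Insert a record under a code not already in the table, then return the code."""
    alphabet = string.ascii_letters + string.digits
    while True:
        code = "".join(random.choice(alphabet) for _ in range(8))
        count = conn.execute(
            "SELECT COUNT(*) FROM images WHERE img_url_code = ?", (code,)
        ).fetchone()[0]
        if count == 0:
            break  # code is unused, safe to insert
    conn.execute(
        "INSERT INTO images (img_url_code, img_image_name, img_dir_name) VALUES (?,?,?)",
        (code, image_name, img_dir_name),
    )
    return code

code = save_image(conn, "photo.jpg", "aB3dE9xQ")
row = conn.execute(
    "SELECT img_image_name FROM images WHERE img_url_code = ?", (code,)
).fetchone()
print(row[0])  # photo.jpg
```

This mirrors the shape of the model we build next: generate a code, check it is unused, insert, and later fetch the record back by code alone.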
So we don't need an encryption key or database information. Just ensure that you are not autoloading the session in the config/autoload.php file, or you will get an error, as we've not set any session variables in the config/config.php file.

Adjusting the routes.php file

We want to redirect the user to the create controller rather than the default CodeIgniter welcome controller. To do this, we will need to amend the default controller settings in the routes.php file. The steps are as follows:

Open the config/routes.php file for editing and find the following lines (near the bottom of the file):

$route['default_controller'] = "welcome";
$route['404_override'] = '';

First, we need to change the default controller. Initially, in a CodeIgniter application, the default controller is set to welcome. However, we don't need that; instead we want the default controller to be create, so find the following line:

$route['default_controller'] = "welcome";

Replace it with the following lines:

$route['default_controller'] = "create";
$route['404_override'] = '';

Then we need to add some rules to govern how we handle incoming URLs and form submissions. Note that routes are matched in the order they are defined, so the specific create routes must come before the catch-all (:any) rule. Leave a few blank lines underneath the preceding two lines of code (default controller and 404 override) and add the following three lines of code:

$route['create'] = "create/index";
$route['create/do_upload'] = "create/do_upload";
$route['(:any)'] = "go/index";

Creating the model

There is only one model in this project, image_model.php. It contains functions specific to saving an uploaded image's details to the database and fetching them back by their unique code. Create the /path/to/codeigniter/application/models/image_model.php file and add the following code to it:

<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Image_model extends CI_Model {
  function __construct() {
    parent::__construct();
  }

  function save_image($data) {
    do {
      $img_url_code = random_string('alnum', 8);

      $this->db->where('img_url_code = ', $img_url_code);
      $this->db->from('images');
      $num = $this->db->count_all_results();
    } while ($num >= 1);

    $query = "INSERT INTO `images` (`img_url_code`, `img_image_name`, `img_dir_name`) VALUES (?,?,?) ";
    $result = $this->db->query($query, array($img_url_code, $data['image_name'], $data['img_dir_name']));

    if ($result) {
      return $img_url_code;
    } else {
      return false;
    }
  }

  function fetch_image($img_url_code) {
    $query = "SELECT * FROM `images` WHERE `img_url_code` = ? ";
    $result = $this->db->query($query, array($img_url_code));

    if ($result) {
      return $result;
    } else {
      return false;
    }
  }
}

There are two main functions in this model, which are as follows:

save_image(): This generates a unique code that is associated with the uploaded image and saves it, with the image name and folder name, to the database.

fetch_image(): This fetches an image's details from the database according to the unique code provided.

Okay, let's take save_image() first. The save_image() function accepts an array from the create controller containing image_name (from the upload process) and img_dir_name (the folder that the image is stored in). A unique code is generated using a do…while loop, as shown here:

$img_url_code = random_string('alnum', 8);

First a string is created, eight characters in length, containing alphanumeric characters. The do…while loop checks to see if this code already exists in the database, generating a new code if it is already present.
If it does not already exist, this code is used:

do {
  $img_url_code = random_string('alnum', 8);

  $this->db->where('img_url_code = ', $img_url_code);
  $this->db->from('images');
  $num = $this->db->count_all_results();
} while ($num >= 1);

This code and the contents of the $data array are then saved to the database using the following code:

$query = "INSERT INTO `images` (`img_url_code`, `img_image_name`, `img_dir_name`) VALUES (?,?,?) ";
$result = $this->db->query($query, array($img_url_code, $data['image_name'], $data['img_dir_name']));

The $img_url_code is returned if the INSERT operation was successful, and false if it failed. The code to achieve this is as follows:

if ($result) {
  return $img_url_code;
} else {
  return false;
}

Creating the views

There are only three views in this project, which are as follows:

/path/to/codeigniter/application/views/create/create.php: This displays a form to the user allowing them to upload an image.

/path/to/codeigniter/application/views/create/result.php: This displays a link that the user can use to forward other people to the image, as well as the image itself.

/path/to/codeigniter/application/views/nav/top_nav.php: This displays the top-level menu. In this project it's very simple, containing a project name and a link to the create controller.

So those are our views; as I said, there are only three of them, as it's a simple project. Now, let's create each view file.
Create the /path/to/codeigniter/application/views/create/create.php file and add the following code to it:

<div class="page-header">
  <h1><?php echo $this->lang->line('system_system_name'); ?></h1>
</div>

<p><?php echo $this->lang->line('encode_instruction_1'); ?></p>

<?php echo validation_errors(); ?>

<?php if (isset($success) && $success == true) : ?>
  <div class="alert alert-success">
    <strong><?php echo $this->lang->line('common_form_elements_success_notifty'); ?></strong>
    <?php echo $this->lang->line('encode_encode_now_success'); ?>
  </div>
<?php endif ; ?>

<?php if (isset($fail) && $fail == true) : ?>
  <div class="alert alert-danger">
    <strong><?php echo $this->lang->line('common_form_elements_error_notifty'); ?></strong>
    <?php echo $this->lang->line('encode_encode_now_error'); ?>
    <?php echo $fail ; ?>
  </div>
<?php endif ; ?>

<?php echo form_open_multipart('create/do_upload');?>
<input type="file" name="userfile" size="20" />
<br />
<input type="submit" value="upload" />
<?php echo form_close() ; ?>
<br />
<?php if (isset($result) && $result == true) : ?>
  <div class="alert alert-info">
    <strong><?php echo $this->lang->line('encode_upload_url'); ?></strong>
    <?php echo anchor($result, $result) ; ?>
  </div>
<?php endif ; ?>

This view file can be thought of as the main view file; it is here that the user can upload their image. Error messages are displayed here too.

Create the /path/to/codeigniter/application/views/create/result.php file and add the following code to it:

<div class="page-header">
  <h1><?php echo $this->lang->line('system_system_name'); ?></h1>
</div>

<?php if (isset($result) && $result == true) : ?>
  <strong><?php echo $this->lang->line('encode_encoded_url'); ?></strong>
  <?php echo anchor($result, $result) ; ?>
  <br />
  <img src="<?php echo base_url() . 'upload/' . $img_dir_name . '/' . $file_name ;?>" />
<?php endif ; ?>

This view will display the encoded image resource URL to the user (so they can copy and share it) and the actual image itself.

Create the /path/to/codeigniter/application/views/nav/top_nav.php file and add the following code to it:

<!-- Fixed navbar -->
<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
  <div class="container">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#"><?php echo $this->lang->line('system_system_name'); ?></a>
    </div>
    <div class="navbar-collapse collapse">
      <ul class="nav navbar-nav">
        <li class="active"><?php echo anchor('create', 'Create') ; ?></li>
      </ul>
    </div><!--/.nav-collapse -->
  </div>
</div>

<div class="container theme-showcase" role="main">

This view is quite basic but still serves an important role. It displays an option to return to the index() function of the create controller.

Creating the controllers

We're going to create two controllers in this project, which are as follows:

/path/to/codeigniter/application/controllers/create.php: This handles the creation of unique folders to store images and performs the upload of a file.

/path/to/codeigniter/application/controllers/go.php: This fetches the unique code from the database and returns any image associated with that code.

These are our two controllers for this project; let's now go ahead and create them.
Create the /path/to/codeigniter/application/controllers/create.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Create extends MY_Controller {
  function __construct() {
    parent::__construct();

    $this->load->helper(array('string'));
    $this->load->library('form_validation');
    $this->load->library('image_lib');
    $this->load->model('Image_model');
    $this->form_validation->set_error_delimiters('<div class="alert alert-danger">', '</div>');
  }

  public function index() {
    $page_data = array('fail' => false,
                       'success' => false);
    $this->load->view('common/header');
    $this->load->view('nav/top_nav');
    $this->load->view('create/create', $page_data);
    $this->load->view('common/footer');
  }

  public function do_upload() {
    $upload_dir = '/filesystem/path/to/upload/folder/';
    do {
      // Make code
      $code = random_string('alnum', 8);

      // Scan upload dir for a subdir with the same
      // name as the code
      $dirs = scandir($upload_dir);

      // Look to see if there is already a
      // directory with the name which we
      // store in $code
      if (in_array($code, $dirs)) { // Yes there is
        $img_dir_name = false; // Set to false to begin again
      } else { // No there isn't
        $img_dir_name = $code; // This is a new name
      }
    } while ($img_dir_name == false);

    if (!mkdir($upload_dir.$img_dir_name)) {
      $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                         'success' => false);
      $this->load->view('common/header');
      $this->load->view('nav/top_nav');
      $this->load->view('create/create', $page_data);
      $this->load->view('common/footer');
    }

    $config['upload_path'] = $upload_dir.$img_dir_name;
    $config['allowed_types'] = 'gif|jpg|jpeg|png';
    $config['max_size'] = '10000';
    $config['max_width'] = '1024';
    $config['max_height'] = '768';

    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload()) {
      $page_data = array('fail' => $this->upload->display_errors(),
                         'success' => false);
      $this->load->view('common/header');
      $this->load->view('nav/top_nav');
      $this->load->view('create/create', $page_data);
      $this->load->view('common/footer');
    } else {
      $image_data = $this->upload->data();
      $page_data['result'] = $this->Image_model->save_image(array('image_name' => $image_data['file_name'], 'img_dir_name' => $img_dir_name));
      $page_data['file_name'] = $image_data['file_name'];
      $page_data['img_dir_name'] = $img_dir_name;

      if ($page_data['result'] == false) {
        // failure - display the form again with a general error
        $page_data = array('fail' => $this->lang->line('encode_upload_general_error'));
        $this->load->view('common/header');
        $this->load->view('nav/top_nav');
        $this->load->view('create/create', $page_data);
        $this->load->view('common/footer');
      } else {
        // success - display image and link
        $this->load->view('common/header');
        $this->load->view('nav/top_nav');
        $this->load->view('create/result', $page_data);
        $this->load->view('common/footer');
      }
    }
  }
}

Let's start with the index() function. The index() function sets the fail and success elements of the $page_data array to false. This will suppress any initial messages from being displayed to the user. The views are loaded, specifically the create/create.php view, which contains the image upload form's HTML markup.

Once the user submits the form in create/create.php, the form will be submitted to the do_upload() function of the create controller. It is this function that will perform the task of uploading the image to the server. First off, do_upload() defines an initial location for the upload folder.
This is stored in the $upload_dir variable. Next, we move into a do…while structure. It looks something like this:

do {
  // something
} while ('…a condition is not met');

So that means: do something while a condition is not met. Now, with that in mind, think about our problem: we have to save the image being uploaded in a folder, and that folder must have a unique name. So what we will do is generate a random string of eight alphanumeric characters and then look to see if a folder already exists with that name. Keeping that in mind, let's look at the code in detail:

do {
  // Make code
  $code = random_string('alnum', 8);

  // Scan upload dir for a subdir with the same
  // name as the code
  $dirs = scandir($upload_dir);

  // Look to see if there is already a
  // directory with the name which we
  // store in $code
  if (in_array($code, $dirs)) { // Yes there is
    $img_dir_name = false; // Set to false to begin again
  } else { // No there isn't
    $img_dir_name = $code; // This is a new name
  }
} while ($img_dir_name == false);

So we make a string of eight characters, containing only alphanumeric characters, using the following line of code:

$code = random_string('alnum', 8);

We then use the PHP function scandir() to look in $upload_dir. This will store all directory names in the $dirs variable, as follows:

$dirs = scandir($upload_dir);

We then use the PHP function in_array() to look for the value of $code in the list of directories from scandir(). If we don't find a match, then the value in $code must not be taken, so we'll go with that. If the value is found, then we set $img_dir_name to false, which is picked up by the final line of the do…while loop:

...
} while ($img_dir_name == false);

Anyway, now that we have our unique folder name, we'll attempt to create it. We use the PHP function mkdir(), passing to it $upload_dir concatenated with $img_dir_name.
If mkdir() returns false, the form is displayed again along with the encode_upload_mkdir_error message set in the language file, as shown here:

if (!mkdir($upload_dir.$img_dir_name)) {
  $page_data = array('fail' => $this->lang->line('encode_upload_mkdir_error'),
                     'success' => false);
  $this->load->view('common/header');
  $this->load->view('nav/top_nav');
  $this->load->view('create/create', $page_data);
  $this->load->view('common/footer');
}

Once the folder has been made, we then set the configuration variables for the upload process, as follows:

$config['upload_path'] = $upload_dir.$img_dir_name;
$config['allowed_types'] = 'gif|jpg|jpeg|png';
$config['max_size'] = '10000';
$config['max_width'] = '1024';
$config['max_height'] = '768';

Here we are specifying that we only want to upload .gif, .jpg, .jpeg, and .png files. We also specify that an image cannot be more than 10,000 KB in size (although you can set this to any value you wish; remember to adjust the upload_max_filesize and post_max_size PHP settings in your php.ini file if you want to allow a really big file). We also set the maximum dimensions that an image can have. As with the file size, you can adjust these as you wish.

We then load the upload library, passing to it the configuration settings, as shown here:

$this->load->library('upload', $config);

Next we will attempt to do the upload. If unsuccessful, the CodeIgniter function $this->upload->do_upload() will return false. We will look for this and reload the upload page if it does return false. We will also pass the specific error as the reason why it failed. This error is stored in the fail item of the $page_data array. This can be done as follows:

if ( ! $this->upload->do_upload()) {
  $page_data = array('fail' => $this->upload->display_errors(),
                     'success' => false);
  $this->load->view('common/header');
  $this->load->view('nav/top_nav');
  $this->load->view('create/create', $page_data);
  $this->load->view('common/footer');
} else {
...

If, however, it did not fail, we grab the information generated by CodeIgniter from the upload. We'll store this in the $image_data array, as follows:

$image_data = $this->upload->data();

Then we try to store a record of the upload in the database. We call the save_image function of Image_model, passing to it file_name from the $image_data array, as well as $img_dir_name, as shown here:

$page_data['result'] = $this->Image_model->save_image(array('image_name' => $image_data['file_name'], 'img_dir_name' => $img_dir_name));

We then test the return value of the save_image() function. If it is successful, then Image_model will return the unique URL code generated in the model. If it is unsuccessful, then Image_model will return the Boolean false. If false is returned, then the form is loaded with a general error. If successful, then the create/result.php view file is loaded. We pass to it the unique URL code (for the link the user needs), and the folder name and image name necessary to display the image correctly.
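The generate-and-check naming loop at the heart of do_upload() translates readily to other languages. Here is a hedged Python sketch of the same pattern; it uses an in-memory set of taken names in place of scandir() purely for illustration, and since Python has no do…while, a while True loop with a break plays that role:

```python
import random
import string

def make_unique_name(taken, length=8, rng=random):
    """Keep generating random alphanumeric names until one is not already taken."""
    alphabet = string.ascii_letters + string.digits  # like random_string('alnum', ...)
    while True:  # Python's stand-in for do...while
        code = "".join(rng.choice(alphabet) for _ in range(length))
        if code not in taken:  # analogous to the in_array($code, $dirs) check
            return code

existing = {"aB3dE9xQ", "zZ9yX8wV"}  # hypothetical folder names already on disk
print(make_unique_name(existing))   # a fresh 8-character name, random each run
```

The loop almost always succeeds on the first pass: with 62 characters and 8 positions there are about 2.18 x 10^14 possible names, so collisions only matter once the upload folder holds a very large number of directories.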
Create the /path/to/codeigniter/application/controllers/go.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

class Go extends MY_Controller {
  function __construct() {
    parent::__construct();
    $this->load->helper('string');
  }

  public function index() {
    if (!$this->uri->segment(1)) {
      redirect (base_url());
    } else {
      $image_code = $this->uri->segment(1);
      $this->load->model('Image_model');
      $query = $this->Image_model->fetch_image($image_code);

      if ($query->num_rows() == 1) {
        foreach ($query->result() as $row) {
          $img_image_name = $row->img_image_name;
          $img_dir_name = $row->img_dir_name;
        }

        $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
        redirect (prep_url($url_address));
      } else {
        redirect('create');
      }
    }
  }
}

The go controller has only one main function, index(). It is called when a user clicks on a URL or a URL is called (perhaps as the src value of an HTML img tag). Here we grab the unique code generated and assigned to an image when it was uploaded in the create controller. This code is in the first segment of the URI. Usually it would occupy the third segment, with the first and second segments normally specifying the controller and controller function respectively; however, we have changed this behavior using CodeIgniter routing. This is explained fully in the Adjusting the routes.php file section of this article. Once we have the unique code, we pass it to the fetch_image() function of Image_model:

$image_code = $this->uri->segment(1);
$this->load->model('Image_model');
$query = $this->Image_model->fetch_image($image_code);

We test what is returned. We ask if the number of rows returned equals exactly 1. If not, we redirect to the create controller. Perhaps you may not want to do this.
Perhaps you would rather do nothing if the number of rows returned does not equal 1. For example, if the image requested is in an HTML img tag and the image is not found, a redirect may send someone away from the site they're viewing to the upload page of this project, which is something you might not want to happen. If you want to remove this functionality, remove the else branch containing redirect('create') from the following code excerpt:

....
        $img_dir_name = $row->img_dir_name;
      }

      $url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
      redirect (prep_url($url_address));
    } else {
      redirect('create');
    }
  }
}
....

Anyway, if the returned value is exactly 1, then we'll loop over the returned database object and find img_image_name and img_dir_name, which we'll need to locate the image in the upload folder on the disk. This can be done as follows:

foreach ($query->result() as $row) {
  $img_image_name = $row->img_image_name;
  $img_dir_name = $row->img_dir_name;
}

We then build the address of the image file and redirect the browser to it, as follows:

$url_address = base_url() . 'upload/' . $img_dir_name . '/' . $img_image_name;
redirect (prep_url($url_address));

Creating the language file

We make use of the language file to serve text to users. In this way, you can enable multiple region/multiple language support.
Create the /path/to/codeigniter/application/language/english/en_admin_lang.php file and add the following code to it:

<?php if (!defined('BASEPATH')) exit('No direct script access allowed');

// General
$lang['system_system_name'] = "Image Share";

// Upload
$lang['encode_instruction_1'] = "Upload your image to share it";
$lang['encode_upload_now'] = "Share Now";
$lang['encode_upload_now_success'] = "Your image was uploaded, you can share it with this URL";
$lang['encode_upload_url'] = "Hey look at this, here's your image:";
$lang['encode_upload_mkdir_error'] = "Cannot make temp folder";
$lang['encode_upload_general_error'] = "The Image cannot be saved at this time";

Putting it all together

Let's look at how the user uploads an image. The following is the sequence of events:

1. CodeIgniter looks in the routes.php config file and finds the following line:
   $route['create'] = "create/index";
2. It directs the request to the create controller's index() function.
3. The index() function loads the create/create.php view file that displays the upload form to the user.
4. The user clicks on the Choose file button, navigates to the image file they wish to upload, and selects it.
5. The user presses the Upload button and the form is submitted to the create controller's index() function.
6. The index() function creates a folder in the main upload directory to store the image in, then does the actual upload.
7. On a successful upload, index() sends the details of the upload (the new folder name and image name) to the save_image() model function.
8. The save_image() function also creates a unique code and saves it in the images table along with the folder name and image name passed to it by the create controller.
9. The unique code generated during the database insert is then returned to the controller and passed to the result view, where it will form part of a success message to the user.

Now, let's see how an image is viewed (or fetched).
The following is the sequence of events:

1. A URL with the syntax www.domain.com/226KgfYH comes into the application, either when someone clicks on a link or via some other call (such as <img src="">).
2. CodeIgniter looks in the routes.php config file and finds the following line:
   $route['(:any)'] = "go/index";
3. As the incoming request does not match the other two routes, the preceding route is the one CodeIgniter applies to this request.
4. The go controller is called and the code 226KgfYH is passed to it as the first segment of the URI.
5. The go controller passes this code to the fetch_image() function of the Image_model.php file.
6. The fetch_image() function attempts to find a matching record in the database. If found, it returns the folder name marking the saved location of the image, and its filename.
7. This is returned and the path to that image is built. CodeIgniter then redirects the user to that image, that is, supplies that image resource to the user that requested it.

Summary

So here we have a basic image sharing application. It is capable of accepting a variety of images and assigning them to records in a database and unique folders in the filesystem. This is interesting as it leaves things open to you to improve on. For example, you can do the following:

- You can add limits on views. As the image record is stored in the database, you could adapt the database. By adding two columns called img_count and img_count_limit, you could allow a user to set a limit for the number of views per image and stop providing that image when that limit is met.
- You can limit views by date. Similar to the preceding point, but you could limit image views to set dates.
- You can have different URLs for different dimensions. You could add functionality to make several dimensions of image based on the initial upload, offering several different URLs for different image dimensions.
- You can report abuse. You could add an option allowing viewers of images to report unsavory images that might be uploaded.
- You can have terms of service. If you are planning on offering this type of application as an actual web service that members of the public could use, then I strongly recommend you add a terms of service document, and perhaps even require that people agree to the terms before they upload an image. In those terms, you'll want to mention that in order for someone to use the service, they first have to agree that they will not upload and share any images that could be considered illegal. You should also mention that you'll cooperate with any court if information is requested of you. You really don't want to get into trouble for owning or running a web service that stores unpleasant images; as much as possible you want to make your limits of liability clear and emphasize that it is the uploader who has provided the images.

Resources for Article:

Further resources on this subject:
- CodeIgniter MVC – The Power of Simplicity! [article]
- Navigating Your Site using CodeIgniter 1.7: Part 1 [article]
- Navigating Your Site using CodeIgniter 1.7: Part 2 [article]
Part1. Learning AWS CLI
Yohei Yoshimuta
15 Jan 2015
As an application developer, you should be familiar with the CLI. Using the CLI (instead of the UI) has the benefit that operations can be documented, which makes them reproducible and shareable. Fortunately, AWS provides both an API and the unified CLI tool named aws-cli. You should use and understand the AWS CLI, especially when you want to control anything the AWS UI doesn't provide yet; for example, Scheduled Scaling for Auto Scaling is available only via the AWS CLI. Before explaining the full process, I will assume that you are using AWS VPC and S3, that all of your network resources (such as security groups) are inside the VPC, and that you know an access key and a secret key of your own AWS account or IAM account. Let's see how we can control EC2 instances and S3.

Install the aws-cli package

The first thing you need to do is install the aws-cli package on your machine.

# Install pip if your machine doesn't have pip yet
$ sudo easy_install pip

# Install awscli with pip
$ sudo pip install awscli

# Configure AWS credential and config
$ aws configure
AWS Access Key ID: foo
AWS Secret Access Key: bar
Default region name [us-west-2]: us-west-2
Default output format [None]: json

Note: You have to configure an AWS Access Key ID and Secret Access Key belonging to an IAM account that has the necessary, but minimal, policies attached. For now, I recommend you create an IAM account with AmazonEC2FullAccess-AMI-201412181939 and AmazonS3FullAccess-AMI-201502041017 attached.
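It's worth knowing that aws configure persists what you enter into two plain-text files under ~/.aws/, which helps when you script deployments or juggle multiple accounts. Roughly, with the dummy values entered above (the file locations are the aws-cli defaults):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = foo
aws_secret_access_key = bar

# ~/.aws/config
[default]
region = us-west-2
output = json
```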
# AmazonEC2FullAccess-AMI-201412181939
{
  "Version": "2012-10-17",
  "Statement": [
    { "Action": "ec2:*", "Effect": "Allow", "Resource": "*" },
    { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "autoscaling:*", "Resource": "*" }
  ]
}

# AmazonS3FullAccess-AMI-201502041017
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:*", "Resource": "*" }
  ]
}

Run an EC2 instance

Okay, you are ready to control AWS resources via the CLI. The most important part of running an EC2 instance is preparing the option parameters. You can confirm the details in run-instances — AWS CLI documentation. This command generates a JSON file which has skeleton option parameters:

$ aws ec2 run-instances --generate-cli-skeleton > /tmp/run-instances_base.json

# We overwrite this skeleton file to be shorter and easier to understand
$ vi /tmp/run-instances_base.json
$ cat /tmp/run-instances_base.json
{
  "ImageId": "ami-936d9d93",
  "KeyName": "YOUR Key pair name",
  "InstanceType": "t2.micro",
  "Placement": {
    "AvailabilityZone": "us-west-2"
  },
  "NetworkInterfaces": [
    {
      "DeviceIndex": 0,
      "SubnetId": "subnet-***",
      "Groups": [ "sg-***" ],
      "DeleteOnTermination": true,
      "AssociatePublicIpAddress": true
    }
  ]
}

# Run an instance
$ aws ec2 run-instances --cli-input-json file:///tmp/run-instances_base.json

List running EC2 instances

Now confirm your running EC2 instances. The details of the command are here: describe-instances — AWS CLI documentation. I recommend the jq tool because the output is formatted as JSON and you might otherwise be overwhelmed by its volume. You can install jq via brew or build the tool from source.
# Install jq if your machine doesn't have it yet and you want to use it on Mac OS X
$ brew install jq

# List EC2 instances
$ aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | [.LaunchTime, .State.Name, .InstanceId, .InstanceType, .PrivateIpAddress, (.Tags[] | select(.Key=="Name").Value)] | join("\t")'
2015-09-22T10:16:41.000Z running i-f19f6e54 t2.micro 10.0.1.61

Terminate an EC2 instance

Well, it's time to terminate an EC2 instance to save money. The details of the command are here: terminate-instances — AWS CLI documentation.

# Dry-run the command
$ aws ec2 terminate-instances --instance-ids i-f19f6e54 --dry-run

# Terminate an EC2 instance
$ aws ec2 terminate-instances --instance-ids i-f19f6e54

List S3 directory contents

You will want to find and grep AWS ELB access logs, especially if you are an operations engineer troubleshooting a problem. To start, find the specific file. The details of the command are here: ls — AWS CLI documentation.

# List ELB access logs created at 2015/09/18
$ aws s3 ls s3://example-elb-log/example-app-elb/AWSLogs/717669809617/elasticloadbalancing/us-west-2/2015/09/18/

Download an S3 object

Then you can download the file in question and grep for a specific keyword. The details of the command are here: cp — AWS CLI documentation.

# Find access logs whose SSL cipher is ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2
$ aws s3 cp s3://example-elb-log/example-app-elb/AWSLogs/717669809617/elasticloadbalancing/us-west-2/2015/09/18/717669809617_elasticloadbalancing_us-west-2_example-app-elb_20150918T0230Z_54.92.79.213_5wo8k1of.log - | head -n 1000 | grep 'ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2'

Conclusion

The AWS CLI is a very useful tool. It supports a wide range of important services; for example, I recently upgraded an ELB SSL certificate from a SHA-1 signed certificate to a SHA-2 one before the iOS 9 release because of iOS 9 ATS. During that work, I was able to get an asynchronous peer review of the planned aws-cli commands, which is another of the AWS CLI's benefits.
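The jq filter above is easier to experiment with against a saved copy of describe-instances output than against live AWS. Here is a sketch; the sample JSON and the web-1 tag value are made up, but the filter shape matches the one used above:

```shell
# Write a minimal, made-up sample of `aws ec2 describe-instances` output.
cat > /tmp/instances.json <<'EOF'
{
  "Reservations": [
    {
      "Instances": [
        {
          "LaunchTime": "2015-09-22T10:16:41.000Z",
          "State": { "Name": "running" },
          "InstanceId": "i-f19f6e54",
          "InstanceType": "t2.micro",
          "PrivateIpAddress": "10.0.1.61",
          "Tags": [ { "Key": "Name", "Value": "web-1" } ]
        }
      ]
    }
  ]
}
EOF

# Same tab-joined projection as the live command, minus the aws call.
jq -r '.Reservations[].Instances[] | [.InstanceId, .State.Name, (.Tags[] | select(.Key=="Name").Value)] | join("\t")' /tmp/instances.json
```

This prints one tab-separated line per instance, so it's easy to feed into cut, sort, or a spreadsheet.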
About the author Yohei Yoshimuta is a software engineer with a proven record of delivering high quality software in both game and advertising industries. He has extensive experience building products from scratch in both small and large teams. His primary focuses are Perl, Go, and AWS technologies. You can reach him at @yoheimuta on GitHub and Twitter.
Getting started with Firebase
Asbjørn Enge
13 Jan 2015
Firebase is a real-time database for your web or mobile app. Data in your Firebase is stored as JSON and synchronized in real time to every connected client. This makes it really easy to share state between clients. Firebase is a great fit for cross-platform applications and especially collaborative applications. However, you can use it for almost anything. Firebase is a NoSQL database. This is a great fit for many web and mobile applications, but as with any NoSQL store, we need to be mindful when structuring related data. Firebase is also SSL by default, which is great! In this post, we will take a look at building a basic web application using Firebase as our backend.

Create a project

Our app will be an SPA (Single Page Application) and we will leverage modern JavaScript tooling. We will be using npm as a package manager, babel for ES2015+ transpilation, and browserify for bundling together our scripts. Let's get started. Use the following commands to create a folder and set up the project:

$ mkdir fbapp && cd fbapp
$ npm init
$ npm install --save-dev budo
$ npm install --save-dev babelify

We have now made a folder for our app fbapp. Inside there we have created a new project using npm init (defaults are fine) and we have installed some packages. I didn't mention budo before, but it is a great browserify development server.

Set up Firebase

Before we can start talking with our Firebase, we need to head over to https://www.firebase.com/ and sign up for an account. No credit card required. Firebases have quite a bit of free storage and data transfer before they start charging, so it's great for getting started. Once you are logged in, create a new Firebase. Notice the APP URL https://<name>.firebaseio.com/, which depends on what you named your Firebase. We are going to use that URL in a second. Clicking your newly created Firebase will take you to the administration UI for it.

Connect to Firebase

Now that we have created a Firebase, it's time to start talking to it.
First, we need to install the firebase library using npm:

$ npm install --save firebase
$ touch index.js

I also created a file named index.js. This will be the entrypoint for our app. Open index.js in your favorite editor. It's time to hack:

import Firebase from 'firebase/lib/firebase-web'

let ref = new Firebase('https://<name>.firebaseio.com/')
ref.set({ counter: 0 })
console.log('wuhu')

The firebase npm package includes both node.js and browser libraries for Firebase. Since we are going to use our app in the browser, we need to import the browser library. Now we can run that application in a browser using budo (browserify under the hood):

budo index.js --live -- -t babelify

Navigate to http://localhost:9966/ and verify that you can see wuhu printed in the console (developer tools). Notice the --live parameter we pass to budo. It automatically enables livereload for the bundle. If you're not familiar with it, get familiar with it! Next, try and navigate to the Firebase admin UI and verify that it now has a property counter set to 0.

Reading from Firebase

Now, let's try to read this data back. Since Firebase is a real-time database, it pushes data to us. So, we need to set up listeners. Also, remember to remove the ref.set line from your code, as it would reset this property every time someone loads our page and we do not want that.

import Firebase from 'firebase/lib/firebase-web'

let ref = new Firebase('https://<name>.firebaseio.com/')
ref.child('counter').on('value', (snap) => {
  console.log(snap.val())
})

Navigate to http://localhost:9966 and verify that 0 is printed in the console.
Making it useful

Now, let's see if we can make this counter "useful":

import Firebase from 'firebase/lib/firebase-web'

let ref = new Firebase('https://getting-started.firebaseio.com/')

let counter = 0
ref.child('counter').on('value', (snap) => {
  counter = snap.val()
  render(counter)
})

let render = (counter) => {
  if (!button) createNodes()
  else buttonText.nodeValue = "Click Me (" + counter + ")"
}

let increaseCounter = () => {
  ref.child('counter').set(counter + 1)
}

let button, buttonText

let createNodes = () => {
  button = document.createElement('button')
  buttonText = document.createTextNode("Click Me")
  button.appendChild(buttonText)
  button.addEventListener('click', increaseCounter)
  document.body.appendChild(button)
}

render()

Notice the render and increaseCounter functions. Whenever the counter property gets updated on Firebase, the render function is called. Whenever we click on the button, increaseCounter is called, setting the value on Firebase, which in turn triggers render. This is a typical way of working with Firebase. You use it both as a database and as an event source. Navigate your browser to http://localhost:9966 again. Click the button a few times and watch the counter increase. Then, open another browser window and navigate to the same URL. Have the browser windows side-by-side so you can see them both at the same time, then click one of the buttons. It's a little bit like magic, isn't it? Once you get familiar with the way Firebase works, it is a really great store for your data. It encourages event-driven design and functional code. It frees you from manually coordinating state between different parts of your application. Firebase holds your state, so all you have to do is listen.
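If you want to poke at this listen/set/render loop without a network connection, you can mimic the flow with a tiny mock in plain Node. To be clear, MockRef is entirely my own stand-in; it only imitates the on('value')/set() shape used above, and the real Firebase client does much more:

```javascript
// A stand-in for the Firebase ref used above: set() notifies every
// registered 'value' listener, and on('value', cb) fires the callback
// once immediately with the current value.
class MockRef {
  constructor() {
    this.value = 0
    this.listeners = []
  }
  on(event, cb) {
    if (event !== 'value') return
    this.listeners.push(cb)
    cb({ val: () => this.value })
  }
  set(v) {
    this.value = v
    this.listeners.forEach(cb => cb({ val: () => this.value }))
  }
}

let ref = new MockRef()
let counter = 0

// Same pattern as the app: the listener is the only place counter changes.
ref.on('value', (snap) => { counter = snap.val() })

ref.set(counter + 1) // stands in for one button click
console.log(counter) // 1
```

The point to notice is that increaseCounter never touches the local counter directly; it only writes to the ref, and the 'value' listener is the single place where local state is updated.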
He cares about modular design, simplicity and readability. He can be found on Twitter @asbjornenge.
Getting Started with Electronic Projects
Packt
13 Jan 2015
Welcome to my second book produced by the good folks at Packt Publishing LLC. This book is somewhat different from my other book in that, instead of one large project, it is a collection of several small, medium, and large projects. While the book is called Getting Started with Electronics Projects, I convinced the folks at Packt to let me write a book with several projects for the electronics hacker and experimenter groups. The first few projects do not even involve a BeagleBone, something which had my reviewers shaking their heads at first. So what follows is a brief taste of what you can look forward to in this book. Before we go any further I should explain who this book is for. If you are a software person who has never heated up a soldering iron before, you might want to practice a bit before attempting the more difficult assembly (electronics assembly, not assembly language programming) projects. If you are a hardware guy who just wants it to work out of the box, then I suggest you download the image and burn yourself a microSD card. If you feel adventurous, you can always play with the code sections. If you succeed in giving the kernel a heart attack (also known as a kernel panic), no worries. Just burn the image again. The book is divided into eight chapters and seven different projects. The first four don't involve a BeagleBone at all.

(For more resources related to this topic, see here.)

Chapter 1 – Introduction – Our First Project

This chapter is for the hardware guys and the adventurous programmers. In this chapter, you will build your own infrared flashlight. If you can use a soldering iron and a solder sucker, you can build this project.

IR flashlight

Chapter 2 – Infrared Beacon

In this chapter, we continue with the theme of infrared devices by building a somewhat more challenging project from a construction perspective. Files for the PCB are available for download from the Packt site, if you bought the book of course.
What this beacon does is flash two infrared LEDs on and off at a rate that can be selected by the builder. The beacon is only visible when viewed through night-vision goggles or on a black-and-white video camera.

IR beacon

While it may not be obvious from the preceding image, the case is actually made from ABS water pipe I purchased from a local hardware store. I like ABS pipe because it is so easy to work with.

Chapter 3 – Motion Alarm

Once again we will be using ABS pipe to construct a cool project. This time we will be building a motion sensor. Most alarm sensors use some sort of Passive Infrared (PIR) sensor or a millimetre wave radar to detect motion. This project uses a simple (cheap) mercury switch to detect motion. How you reset the alarm is a carefully guarded secret, so you will have to buy the book to learn the secret!

Motion sensor

Notice the ring at the right end of the tube? That is so you can hang it up like a Christmas ornament! As with the last chapter, the PCB files are available for download from the Packt site.

Chapter 4 – Sound Card-based Oscilloscope

This chapter uses a USB sound card connected to a PC, because the software I found appears to only run on a PC. If you can find a Mac version of the software, go for it. This project will work for Mac or Linux users too. By the way, I tested all of the software in this chapter on a Pentium 4 class machine running Windows XP, so here is an opportunity to recycle/repurpose that old PC you were going to junk!

Soundblaster oscilloscope

The title of the chapter is somewhat misleading, because the project also includes plans for building a sound card-based audio signal generator. There are a number of commercial and freeware versions of software that take advantage of this hardware.

Soundblaster software on PC

There are a number of commercial software packages that have a freeware version available for download. The preceding screenshot shows one of the better ones I found running under Windows XP.
Chapter 5 – Calibrated RF Source

In this chapter we will be building a clean, calibrated RF signal source. In addition to being of use to ham radio enthusiasts, it will also be used in the chapters that follow.

Clean 50MHz signal

This is the first project that actually makes use of the BeagleBone Black. The BeagleBone is used to control a digitally controlled step attenuator. This allows us to output a calibrated signal level from our 50MHz source. In addition to its use in upcoming chapters, ham radio enthusiasts will once again no doubt find a clean RF source with a calibrated output, selectable in 0.5dB steps, to be useful.

GUI running on BeagleBone Black

Chapter 6 – RF Power Meter – Hardware

In this chapter we will be building an RF power meter capable of measuring RF power from 40MHz to over 6GHz. The circuit is based on the Linear Technology LTC5582 RMS power detector. The beauty of this device is that it outputs a DC voltage proportional to the RMS power it detects. There is no need for conversion as there is with other detectors. RMS power is the AC power measured by your digital voltmeter when you have it set to AC.

RF detector mounted on protoboard

The connector near the notch in the protoboard allows the BeagleBone to both read the RF power and control the step attenuator mentioned earlier.

Chapter 7 – RF Power Meter – Software

In this chapter we will be building a development system based on Ubuntu, using a docking station available from https://specialcomp.com/beaglebone/index.htm

This could be considered the "deluxe" version. It is also possible to complete the next two chapters using the debug port on the BeagleBone and a communications program like PuTTY.

BeagleBone development system

This configuration also contains the hardware to build a combination wired and wireless alarm system. More on that is in the following chapter.

Chapter 8 – Creating a ZigBee Network of Sensors

This is the longest and by far the most complex chapter in the book.
In this chapter we will learn how to configure XBee modules from Digi International Inc. using the XCTU Windows application. We will then build a standalone wireless alarm system. This alarm system will be based on hardware developed and presented in my previous book: http://www.packtpub.com/building-a-home-security-system-with-beaglebone/book

If you purchased my previous book and built any of the alarm system hardware, you can also use it in this chapter to convert your wired alarm system to wireless! The following image is of the XBee module mounted on top of the alarm boards. Each wireless remote module has two alarm zone inputs and four isolated alarm outputs.

Completed wireless alarm remote module

Summary

This book will hopefully have something of interest to a large variety of electronics enthusiasts, from hams to hackers. I would say that, as long as you have at least intermediate programming and construction skills, you should have no problem completing the projects in this book. All the projects use through-hole parts to make assembly easier.

Resources for Article:

Further resources on this subject:
- Building robots that can walk [article]
- Beagle Boards [article]
- Protecting GPG Keys in BeagleBone [article]
Build your own Particle Core Powered Laser Tank
Pawel Szymczykowski
12 Jan 2015
Laser Tanks

Recently, we had the pleasure of interviewing a group of summer intern candidates at Zappos, and we were in need of a fun and quick coding challenge. I set them each to the task of developing a robot program for RoboCode to battle it out on the virtual arena. RoboCode is an open source programming game in which participants write a program to control an autonomous robot tank. Each tank is equipped with a cannon, a radar scanner that can detect other tanks, and wheels to move around on the arena. You write to an interface in Java or .NET, and control an event loop for default behaviors like moving around and looking for enemies, and various triggers for events like scanning another tank, getting shot, or hitting a wall. This all happens in a software simulation. The challenge turned out to be a lot of fun for the candidates as well as the spectators, but I couldn't help but think about how much more fun it would be with real robot tanks. My hobby projects in educational robotics and irrational enthusiasm made me think that this was not only possible, but fairly easily done. Here is a sample RoboCode program:

public class SampleRobot extends Robot {
    public void run() {
        while (true) {
            ahead(100);
            turnRight(30);
            fire(1);
            turnRight(30);
            fire(1);
            turnRight(30);
            fire(1);
        }
    }

    public void onHitByBullet(HitByBulletEvent e) {
        turnLeft(90);
        ahead(100);
    }
}

In that code, the tank goes forward for 100 units, turns 30 degrees and fires, then turns 30 degrees and fires, and turns 30 degrees one last time (for a total of 90 degrees) and fires one last time before repeating its behavior. When it is hit by a bullet, it turns 90 degrees to the left and moves ahead in order to break its pattern and hopefully avoid more bullets. In this build, we're going to attempt to replicate this functionality in hardware. Since a single tank isn't going to be that much fun, you might want to pull a friend or family member in to build a pair together.
Let's take a look at the parts we'll need for this project:

Qty | Item                             | Source
----|----------------------------------|-------------------------
1   | Particle Core                    | https://www.Particle.io/
2   | Continuous Rotation Servo Motors | Pololu
1   | Photo Resistor                   | Adafruit
1   | Laser Diode                      | eBay
2   | 10k Ohm Resistor                 | Adafruit
1   | 2N222A Transistor                | Adafruit
1   | Ping Pong Ball                   | Anywhere

Brains

For the brains of our laser tank, we'll use the Particle Core, a wifi-capable Arduino-compatible microcontroller. It's currently one of my favorite boards for prototyping because it comes in a breadboard-friendly form factor and can be flashed wirelessly without need for a USB cable. If you've never used your Particle Core before, you'll have to register it and tell it about your WiFi internet connection. To do that, you can use their iPhone or Android app. If you are comfortable with the command line, you can also connect it via USB and use the 'particle-cli' npm package to do the same thing more expediently. You should follow the 'getting started' tutorial here: http://docs.Particle.io/start/ to get set up quickly. Once you've registered your Particle Core, you'll be writing and uploading your code in their web-based IDE.

Movement

First we'll need a movable tank base. A two-wheeled design that uses differential steering with a ball caster or skid is popular and very easy to implement. I'll be using a laser cut sumobot kit for our base. You can get the files to laser cut or 3D print from http://sumobotkit.com, but if you don't have access to a 3D printer or laser cutter, you can also just use any old box and a pair of wheels that you find. A body made out of Lego® bricks would work fantastically. Standard servo motors can only rotate between a fixed range of degrees, so make sure you have continuous rotation servo motors that can rotate, well, continuously. We use continuous rotation servo motors because they are easy to control without any special motor control boards.
We'll wire up the red (+) and black (-) wires to the 4xAA battery pack, and then run each signal wire to a PWM pin on the Particle Core. PWM stands for Pulse Width Modulation and is a method of controlling an electronic component by sending it instructions encoded as variable-width pulses of electricity. Not all pins are PWM capable, but on the Particle Core, A0 and A1 are both PWM pins.

Motor Wiring

With all of our connections made, moving our tank is a simple matter. We just need to remember that since our motors are mirrored, we'll need to move one of them clockwise and the other counter-clockwise to achieve forward motion. Reverse the directions of the motors to go in reverse, and turn them both in the same direction to turn right or left. In normal servos, you can specify a position in degrees to move the servo to. Continuous rotation servos have the hardware that tells them when to stop removed, so they behave a little differently: 90 degrees is stopped, 0 degrees is full reverse, and 180 degrees is full speed ahead. This also works for any number in between; for example, 45 degrees is half speed reverse.

Servo left;
Servo right;

void setup() {
    left.attach(A0);
    right.attach(A1);
}

void ahead(int duration) {
    left.write(180);
    right.write(0);
    delay( duration * 10 );
    left.write(90);
    right.write(90);
}

void back(int duration) {
    left.write(0);
    right.write(180);
    delay( duration * 10 );
    left.write(90);
    right.write(90);
}

In RoboCode, you can specify a distance in some arbitrary unit of distance. However, we can't accurately specify a distance to move with continuous rotation servos without installing an optical encoder that measures how fast the wheel is turning. Instead of complicating our build with an encoder, we'll just cheat a little and make our calls time-based instead.
The distance your servo will move in a given slice of time will vary with your specific model of servo motor, voltage, and wheel size, but with a little trial and error we can tune it in accurately enough. My numbers are for Spring RC SM-S4303R servos and 50mm wheels.

// How long it takes to turn 90 degrees
const int ninetyDegrees = 650;

void turnRight(int degrees) {
  left.write(180);
  right.write(180);
  // multiply before dividing: integer division of degrees / 90
  // would truncate to zero for any angle under 90
  delay( ( degrees * ninetyDegrees ) / 90 );
  left.write(90);
  right.write(90);
  runOnHitCode();
}

void turnLeft(int degrees) {
  left.write(0);
  right.write(0);
  delay( ( degrees * ninetyDegrees ) / 90 );
  left.write(90);
  right.write(90);
}

Shooting

Commercial laser tag systems use infrared beams for safety and practicality, but I personally think that using real lasers would be a lot more fun. Since our laser tank will be fairly low to the ground, under 5mw, and because I trust you not to shine the laser directly into an eyeball, I think we're OK. Red laser tubes are extremely inexpensive on eBay. I bought a 10 pack of 5 volt, 5 milliwatt lasers for about $5 shipped, but I've seen them recently for even less than that. The laser tube is pretty simple to connect: run the blue wire to ground and the red wire to a 5 volt power source, and it will fire. On a standard Arduino like the Uno R3, you could wire it up to any pin and trigger it by setting the pin to HIGH. This is also mostly true of the Particle Core, but it will be underpowered because the logic level of the Particle Core is 3.3v instead of the 5v of the Uno. Thus, we are using a transistor to boost the signal strength to 6v from the battery pack.

Laser Wiring

Now to shoot, we just set the pin to high for a little bit, say 500 milliseconds.

int laser = A3;

void setup() {
  pinMode(laser, OUTPUT);
}

void fire(int power) {
  digitalWrite(laser, HIGH);
  delay(500);
  digitalWrite(laser, LOW);
}

Getting Shot

Awesome! We now have a roving robot that can roll around and shoot at things! But how do we detect if it's been shot?
The answer is a photoresistor. A photoresistor changes resistance in relation to surrounding light levels. Unfortunately, the sensitive area of the photoresistor is just under one millimeter wide. That is a tiny target! That is why we'll need a diffuser of some sort, and a fairly large one. If we drill a small hole in a ping-pong ball and insert the photoresistor, it will bring the ambient light level around the photoresistor down. Then, when a laser beam hits the ball, the thin shell will illuminate and the light level inside of the ball will shoot up! That is how we'll detect a shot. We connect the photoresistor to a 3.3v line and to ground in series with a 10k ohm resistor, acting as what we call a pull-down resistor. Then we connect an analog pin (A2) between the pull-down resistor and the photoresistor. If we only connected the photoresistor directly to the analog pin, the impedance would be too high and no current would flow through the photoresistor. Adding the resistor provides a path to ground that pulls the voltage down and ensures there is always current flowing. The analog pin just 'observes' the voltage of the current flowing by, like a water wheel. Without the pull-down resistor, it's more like a wooden dam.

Detector Wiring

Then, we just need to watch the reading of the analog pin. If it drops below our threshold (1000 in the final code), that means the inside of the ball is pretty bright, and we are likely shot.

int photoCell = A2;

void setup() {
  pinMode(photoCell, INPUT);
  attachInterrupt(photoCell, checkHitStatus, CHANGE);
}

void checkHitStatus() {
  lightLevel = analogRead(photoCell);
  if ( lightLevel < lightThreshold ) {
    gotHit = true;
  }
}

Here we have set up what's called an interrupt. Because we are using delays for our motor movement and laser shooting, they 'block' the program, keeping it from doing anything else useful while we wait for the delay to end. If a laser were to hit our ping pong ball during a delay, we wouldn't be able to detect it.
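Before wiring, the divider described above can be sanity-checked on paper. A small Python sketch of the midpoint-voltage formula (the photoresistor values here are hypothetical, and 0-4095 is the Particle Core's 12-bit ADC range; whether readings rise or fall with light depends on which leg the photoresistor occupies - the article's code expects them to fall):

```python
# Midpoint voltage of a two-resistor divider across vcc.
def divider_voltage(vcc, r_top, r_bottom):
    return vcc * r_bottom / (r_top + r_bottom)

def adc_reading(voltage, vcc=3.3, steps=4095):
    # 12-bit ADC, as on the Particle Core
    return round(voltage / vcc * steps)

# Hypothetical photoresistor values: ~5k ohms lit, ~50k ohms dark,
# with the photoresistor as the top leg and the 10k pulldown below.
lit = divider_voltage(3.3, 5000, 10000)    # 2.2 V
dark = divider_voltage(3.3, 50000, 10000)  # 0.55 V
print(adc_reading(lit) > adc_reading(dark))  # True with this orientation
```

Swap the legs of the divider and the relationship inverts, which is why the firmware's comment says the reading "goes lower with more light".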
An interrupt monitors a pin for a change and calls a function. Since this function is executed very often, we want to keep it small and fast. In our checkHitStatus function, we just get the exact reading from the pin and set a global variable signifying that we've been hit if it's past the threshold we specified.

Keeping Score

Finally, we need a way to tell how many times we've been hit and keep track of 'hit points'. We can do this really simply by connecting three LEDs between pins D5, D6, D7 and ground. The LEDs will stay off by default, and then light up each time the tank is hit. When all three are lit up, we will halt the program. Game over, you lose. You can reset to the game's beginning state by hitting the reset button on the Particle Core.

Score Wiring

int led1 = D5;
int led2 = D6;
int led3 = D7;

void setup() {
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
  pinMode(led3, OUTPUT);
}

void runOnHitCode() {
  if ( gotHit ) {
    hitCount++;
    if ( hitCount == 1 ) digitalWrite(led1, HIGH);
    if ( hitCount == 2 ) digitalWrite(led2, HIGH);
    if ( hitCount == 3 ) digitalWrite(led3, HIGH);
    onHitByBullet();
    gotHit = false;
  }
}

Here we check the gotHit flag and do nothing if it's not set. If it is set, we increase the number of times we were hit, light the appropriate LED, run the onHitByBullet() function, and reset gotHit so we can get hit again. We want to react to getting hit quickly, so we'll run runOnHitCode() after every action by inserting it into the ahead, fire, turnLeft and turnRight functions.

Putting Everything Together

When you are done, your board should look something like this. Note that there are some components hidden below the ping pong ball:

Top View

I used a 3D printed holder for the ping pong ball and laser, and you can download the STL file here. The holder isn't necessary, as long as you mount the laser in the approximate middle of the ping pong ball and at the same height as any other tanks that you will play against.
You might also want to mask out the back of your laser diode with black electrical tape so that any back reflection doesn't accidentally trigger your own hit counter. Let's take a look at the combined code:

Servo left;
Servo right;

int photoCell = A2;
int laser = A3;
int led1 = D5;
int led2 = D6;
int led3 = D7;

volatile int gotHit = false;
volatile int hitCount = 0;
volatile int lightLevel = 0;

// How long it takes to turn 90 degrees
const int ninetyDegrees = 650;
// Goes lower with more light - when to trigger a 'hit'
const int lightThreshold = 1000;

void setup() {
  pinMode(photoCell, INPUT);
  attachInterrupt(photoCell, checkHitStatus, CHANGE);
  pinMode(laser, OUTPUT);
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
  pinMode(led3, OUTPUT);
  left.attach(A0);
  right.attach(A1);
  // Set a variable on the Particle API for debugging
  Particle.variable("lightLevel", &lightLevel, INT);
}

void checkHitStatus() {
  lightLevel = analogRead(photoCell);
  if ( lightLevel < lightThreshold ) {
    gotHit = true;
  }
}

void loop() {
  if ( hitCount < 3 ) {
    ahead(100);
    turnRight(30);
    fire(1);
    turnRight(30);
    fire(1);
    turnRight(30);
    fire(1);
    delay(500);
  }
}

void runOnHitCode() {
  if ( gotHit ) {
    hitCount++;
    if ( hitCount == 1 ) digitalWrite(led1, HIGH);
    if ( hitCount == 2 ) digitalWrite(led2, HIGH);
    if ( hitCount == 3 ) digitalWrite(led3, HIGH);
    onHitByBullet();
    gotHit = false;
  }
}

void ahead(int duration) {
  left.write(180);
  right.write(0);
  delay( duration * 10 );
  left.write(90);
  right.write(90);
  runOnHitCode();
}

void turnRight(int degrees) {
  left.write(180);
  right.write(180);
  // Multiply before dividing so integer math doesn't truncate small angles
  delay( ( degrees * ninetyDegrees ) / 90 );
  left.write(90);
  right.write(90);
  runOnHitCode();
}

void turnLeft(int degrees) {
  left.write(0);
  right.write(0);
  delay( ( degrees * ninetyDegrees ) / 90 );
  left.write(90);
  right.write(90);
  runOnHitCode();
}

void fire(int power) {
  digitalWrite(laser, HIGH);
  delay(500);
  digitalWrite(laser, LOW);
  runOnHitCode();
}

void onHitByBullet() {
  turnLeft(90);
  ahead(100);
}

Our code has a few more extra
things in it than the original RoboCode example, but the user-serviceable part of the API is very similar!

Battling

There's not much left to do but to get two tanks built and have them battle it out on a flat, smooth surface. The tanks will move around on the playing field and shoot according to the actions you and your opponent programmed into the main run loop. When one of them hits the other, an LED will light up, and when all three are lit the opponent's tank will stop dead. You can vary the starting positions, as the tanks will interact in different ways depending on where they started. You can and should also compete by modifying the runtime code to find the most effective way to take out your opponent's tank. You don't have to stick to a specific pattern. You can also add some entropy to make your tank move less predictably by using the rand() function like so:

// Turn randomly between 0 and 99 degrees
turnLeft( rand() % 100 );

Summary

If you'd like to have a little more fun, you can set down a few bowls of dry ice and water on the perimeter of your playing field to create a layer of fog that will make the red laser beams visible! Of course, these tanks are still shooting a little blindly. In real RoboCode, there is a scanner that is able to detect enemy tanks so as not to waste bullets. The gun can also move independently of the tank's body on a rotatable turret, and the tank can avoid obstacles. It would take another article of this size to get into the specifics of all that, but in the meantime I challenge you to think about how it might be done.

About the author

Pawel Szymczykowski is a software engineer with Zappos.com and an enthusiastic maker at his local Las Vegas hackerspace, SYN Shop. He has been programming ever since his parents bought him a Commodore 64 for Christmas. He is responsible for coming up with a simple open source design for a wooden laser cut sumo bot kit, now available at http://sumobotkit.com.
As a result of the popularity of the kit, he was invited to run the robotics workshop at RobotsConf, a JSConf offshoot as well as the next JSConf (which he did attend) and Makerland Conf in his home country of Poland. He developed a healthy passion for teaching robotics through these conferences as well as local NodeBots events and programs with Code for America. He can be found on Twitter @makenai.
Why should I make cross-platform games?

Packt
12 Jan 2015
10 min read
In this article by Emanuele Feronato, author of the book Learning Cocos2d-JS Game Development, we will see why we need to make cross-platform games and how to do it using Cocos2d-JS. This is a very important question. I asked it to myself a lot of times when HTML5 mobile gaming started to become popular. I was just thinking it was a waste of time to simply care about the different screen resolutions and aspect ratios, so my first HTML5 game was made to perfectly fit my iPad 2 tablet. When I finally showed it to sponsors, most of them said something like "Hey, I like the game, but unfortunately it does not look that good on my iPhone". "Don't worry", I said, "you'll get the game optimized for iPad and iPhone". Unfortunately, it did not look that good on the Galaxy Note. Neither did it on the Samsung S4. You can imagine the rest of this story. I found myself almost rewriting the game with a series of if.. then.. else loops, trying to make it look good on any device. This is why you should make a cross-platform game: To code once and rule them all. Focus on game development and let a framework do the dirty work for you. What Cocos2d-JS is and how it works Cocos2d-JS is a free open source 2D game framework. It can help you to develop cross-platform browser games and native applications. This framework allows you to write games in JavaScript. So, if you have already developed JavaScript applications, you don't have to learn a new language from scratch. Throughout this book, you will learn how to create almost any kind of cross-platform game using a familiar and intuitive language. Requirements to run Cocos2d-JS Before you start, let's see what software you need to install on your computer in order to start developing with Cocos2d-JS: Firstly, you need a text editor. The official IDE for Cocos2d-JS coding is Cocos Code IDE, which you can download for free at http://www.cocos2d-x.org/products/codeide. 
It features auto completion, code hinting, and some more interesting characteristics to speed up your coding. If you are used to your favorite code editor, that's fine. There are plenty of them, but I personally use PSPad (you can find this at http://www.pspad.com/) on my Windows machine and TextWrangler (you can find this at http://www.barebones.com/products/textwrangler/) on the Mac. They are both free and easy to use, so you can download and have them installed in a matter of minutes. To test your Cocos2d-JS projects, you will need to install a web server on your computer to override security limits when running your project locally. I am using WAMP (http://www.wampserver.com/) on my Windows machine, and MAMP (http://www.mamp.info/) on the Mac. Again, both are free to use as you won't need the PRO version, which is also available for Mac computers. Explaining all the theory behind this is beyond the scope of this book, but you can find all the required information as well as the installation documentation on the official sites. If you prefer, you can test your projects directly online by uploading them on an FTP space you own and call them directly from the web. In this case, you don't need to have a web server installed on your computer, but I highly recommend using WAMP or MAMP instead. I personally use Google Chrome as the default browser to test my projects. As these projects are meant to be cross-platform games, it should run in the same way on every browser, so feel free to use the browser you prefer. The latest information about Cocos2d-JS can be found on the official page http://www.cocos2d-x.org/wiki/Cocos2d-JS, while the latest version can be downloaded at http://www.cocos2d-x.org/download. Cocos2d-JS is updated quite frequently, but at the time of writing, the latest stable release is v3.1. 
Although new releases always bring some changes, all examples included in this book should work fine with any release marked as 3.x, as there aren't huge changes in the roadmap. You will notice the download file is a ZIP file that is greater than 250 MB. Don't worry. Most of the content of the package is made up of docs, graphic assets, and examples, while the only required folder, at the moment, is the one called cocos2d-html5.

The structure of your Cocos2d-JS project

Every HTML5 game is basically a web page with some magic in it; this is what you are going to create with Cocos2d-JS: a web page with some magic in it. To perform this magic, a certain file structure needs to be created, so let's take a look at a screenshot of a folder with a Cocos2d-JS project in it: This is what you are going to build; to tell you the truth, this is a picture of the actual project folder I built for the example to be explained in this article, which is placed in the WAMP localhost folder on my computer. It couldn't be any more real. So, let's take a look at the files to be created:

cocos2d-html5: This is the folder you will find in the zip archive.
index.html: This is the web page that will contain the game.
main.js: This is a file required by Cocos2d-JS with the Cocos2d-JS function calls to make the game start. You will create this within the next few minutes.
project.json: This is a JavaScript Object Notation (JSON) file with some basic configurations.

This is what you need to make your game run. Well, almost, because the actual game will be placed in the src folder. Let's see a few other things first.

Hello Cross-World

The time has come, the boring theory has ended, and we can now start coding our first project. Let's begin!
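To see the layout concretely, here is a small scaffolding sketch in Python, using a throwaway temp directory (the cocos2d-html5 folder itself comes from the downloaded ZIP, so it isn't created here; the file contents are filled in over the following sections):

```python
import os
import tempfile

# Create the Cocos2d-JS project skeleton described above.
base = tempfile.mkdtemp(prefix="cocos-demo-")
os.makedirs(os.path.join(base, "src"))
for name in ("index.html", "main.js", "project.json",
             os.path.join("src", "gamescript.js")):
    # empty placeholder files, to be filled in below
    open(os.path.join(base, name), "w").close()

print(sorted(os.listdir(base)))  # ['index.html', 'main.js', 'project.json', 'src']
```

In a real project you would run this (or just create the files by hand) inside your web server's document root, next to the unzipped cocos2d-html5 folder.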
Firstly, create a page called index.html in the root of the game folder and write this HTML code:

<!DOCTYPE html>
<html>
<head>
    <title>
      My Awesome game
    </title>
    <script src="cocos2d-html5/CCBoot.js" type="text/javascript"></script>
    <script src="main.js" type="text/javascript"></script>
</head>
<body style="padding:0;margin:0;background-color:#000000;">
    <canvas id="gameCanvas"></canvas>
</body>
</html>

There's nothing interesting in it as it is just plain HTML. Let's take a closer look at these lines to see what is going on:

<script src="cocos2d-html5/CCBoot.js" type="text/javascript"></script>

Here, I am including the Cocos2d-JS boot file to make the framework start.

<script src="main.js" type="text/javascript"></script>

This is where we call the script with the actual game we are going to build. Next, we have the following code:

<canvas id="gameCanvas"></canvas>

This is the canvas we will use to display the game. Notice here that the canvas does not have a width and height, as they will be defined by the game itself. Next is the creation of main.js: the only file we will call from our main index.html page. This is more of a configuration file rather than the game itself, so you won't code anything that is game-related at the moment. However, the file you are going to build will be the blueprint you will be using in all your Cocos2d-JS games. The content of main.js is as follows:

cc.game.onStart = function(){
    cc.view.setDesignResolutionSize(320, 480, cc.ResolutionPolicy.SHOW_ALL);
    cc.director.runScene(new gameScene());
};
cc.game.run();

Don't worry about the code at the moment; it looks a lot more complicated than it really is. At the moment, the only line we have to worry about is the one that defines the resolution policy. One of the most challenging tasks in cross-platform development is to provide a good gaming experience no matter what browser or what device the game is running on. However, the problem here is that each device has its own resolution, screen size, and ratio.
Cocos2d-JS allows us to handle different resolutions in a similar way to how web designers do when building responsive designs. At the moment, we just want to adapt the game canvas to fit the browser window while targeting the most popular resolution, which is 320x480 (portrait mode). That's what this line does:

cc.view.setDesignResolutionSize(320, 480, cc.ResolutionPolicy.SHOW_ALL);

Using these settings, you can be pretty sure that your game will run on every device, although you will be working in a low resolution. Also, have a look at this line:

cc.director.runScene(new gameScene());

Basically, a Cocos2d-JS game is made of scenes where the game itself runs. There can be more than one scene in the same game. Imagine a scene with the title screen, a scene with the game over screen, and a scene with the game itself. At the moment, you only have one scene called gameScene. Remember this name because you are going to use it later. Following this, the next required blueprint file you are going to build is project.json, which has some interesting settings. Let's take a look at the file first:

{
    "debugMode" : 0,
    "showFPS" : false,
    "frameRate" : 60,
    "id" : "gameCanvas",
    "renderMode" : 0,
    "engineDir" : "cocos2d-html5/",
    "modules" : ["cocos2d"],
    "jsList" : [
        "src/gamescript.js"
    ]
}

What do these lines mean? Let's see them one by one:

debugMode: This is the object key that determines the level of debug warnings. It has a range from 0 to 6. Leave it at 0 at the moment since the project is very simple and we won't make any errors.
showFPS: This object can be true or false; it shows or hides the FPS meter on the screen.
frameRate: This object sets the frame rate of your game. Set it to 60 to have a smooth game.
id: This is the DOM element that is required to run the game. Do you remember you gave your canvas the gameCanvas id? Here you are.
engineDir: This is the folder where Cocos2d-JS is installed.
modules: This object defines the modules to load.
At the moment, we only need the basic Cocos2d library.
jsList: This is an array with the files used in the game. This means we are going to create our game in src/gamescript.js.

Finally, we arrive at the game script itself. This is the one that will contain the actual game, gamescript.js, which at the moment is just a plain declaration of the game scene:

var gameScene = cc.Scene.extend({
    onEnter:function () {
        this._super();
        console.log("my awesome game starts here");
    }
});

Now save everything and load the index.html page from your localhost (refer to your WAMP or MAMP docs) in your browser. If you now open the developer console, you should see:

my awesome game starts here

Congratulations! This means you have successfully managed to create a Cocos2d-JS template file to build your future games.

Summary

In this article we learned the importance of cross-platform games and how to make them using Cocos2d-JS.
Creating a simple GameManager using Unity3D

Ellison Leao
09 Jan 2015
5 min read
Using so-called "Game Managers" in games is just about as common as eating when making games. Probably every game made has its natural flow: Start -> Play -> Pause -> Die -> Game Over, etc. To handle these different game states, we need a proper manager that provides a mechanism to know when to change from state "A" to state "B" during gameplay. In this post we will show you how to create a simple game manager for Unity3D games. We will assume that you have some previous knowledge of Unity, but if you haven't had the chance to try it yet, please go to the Official Learn Unity page and get started. We are going to create the scripts using the C# language.

1 - The Singleton Pattern

For the implementation, we will use the Singleton pattern. Why? Some reasons:

One instance for all the game implementation, with no possible duplications.
The instance is never destroyed on scene changes.
It stores the current game state to be accessible anytime.

We will not explain the design of the Singleton pattern because it's not the purpose of this post. If you wish to know more about it, you can go here.
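Those three reasons boil down to one shape: a lazily created, globally shared instance that holds the game state and notifies listeners whenever it changes. Here is that shape in miniature, as a Python sketch just to illustrate the pattern (the names are mine, not Unity's):

```python
class SimpleGameManager:
    _instance = None  # the single, lazily created instance

    def __init__(self):
        self.game_state = "INTRO"
        self._listeners = []  # callbacks fired on every state change

    @classmethod
    def instance(cls):
        # Lazy creation: the first caller builds it, everyone shares it
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def on_state_change(self, callback):
        self._listeners.append(callback)

    def set_game_state(self, state):
        self.game_state = state
        for cb in self._listeners:
            cb()

a = SimpleGameManager.instance()
b = SimpleGameManager.instance()
print(a is b)  # True - every caller shares the same state

seen = []
a.on_state_change(lambda: seen.append(a.game_state))
a.set_game_state("MAIN_MENU")
print(seen)  # ['MAIN_MENU']
```

The C# version below is the same idea with Unity's event/delegate syntax in place of the callback list.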
2 - The GameManager code

Create a new project on Unity and add a first csharp script called SimpleGameManager.cs with the following code:

using UnityEngine;
using System.Collections;

// Game States
// INTRO and MAIN_MENU are used below; GAME is used by the menu scene later
public enum GameState { INTRO, MAIN_MENU, GAME }

public delegate void OnStateChangeHandler();

public class SimpleGameManager {

    protected SimpleGameManager() {}
    private static SimpleGameManager instance = null;
    public event OnStateChangeHandler OnStateChange;
    public GameState gameState { get; private set; }

    public static SimpleGameManager Instance {
        get {
            if (SimpleGameManager.instance == null) {
                SimpleGameManager.instance = new SimpleGameManager();
            }
            return SimpleGameManager.instance;
        }
    }

    public void SetGameState(GameState state) {
        this.gameState = state;
        if (OnStateChange != null) {
            OnStateChange();
        }
    }

    public void OnApplicationQuit() {
        SimpleGameManager.instance = null;
    }
}

Explaining the code in parts, we have: First, we define an enum to easily check the Game State, so for this example we will have:

public enum GameState { INTRO, MAIN_MENU, GAME }

Then we have an event delegate that we will use as a callback when the game state changes. This is ideal for changing scenes.

public delegate void OnStateChangeHandler();

Moving forward, we have the gameState attribute, which is a getter for the current Game State.

public GameState gameState { get; private set; }

Then we have our class. Taking a look at the singleton implementation, we can see that we use the Instance static property to get the current Game Manager instance, or create a new one if it doesn't exist. Because SimpleGameManager is a plain C# class rather than a MonoBehaviour, the static instance is not tied to any scene, so Unity never destroys it between scene loads (there is no need for DontDestroyOnLoad here, which only applies to UnityEngine.Object instances). The method used to change the Game State is SetGameState, to which we only need to pass a GameState enum value as the parameter; note that it fires the OnStateChange event only if something has subscribed to it, avoiding a NullReferenceException.
public void SetGameState(GameState state) {
    this.gameState = state;
    if (OnStateChange != null) {
        OnStateChange();
    }
}

It automatically sets the new gameState for the instance and calls the OnStateChange callback.

3 - Creating Sample Scenes

For testing our new Game Manager, we will create 2 Unity scenes: Intro and Menu. The Intro scene will just show some debug messages, simulating an intro game scene, and after 3 seconds it will change to the Menu scene where we have the game menu code. Create a new scene called Intro and create a csharp script called Intro.cs. Put the following code into the script:

using UnityEngine;
using System.Collections;

public class Intro : MonoBehaviour {
    SimpleGameManager GM;

    void Awake () {
        GM = SimpleGameManager.Instance;
        GM.OnStateChange += HandleOnStateChange;
        Debug.Log("Current game state when Awakes: " + GM.gameState);
        // Simulate an intro screen, then move on to the menu
        Invoke("LoadLevel", 3f);
    }

    void Start () {
        Debug.Log("Current game state when Starts: " + GM.gameState);
    }

    public void HandleOnStateChange () {
        Debug.Log("Handling state change to: " + GM.gameState);
    }

    public void LoadLevel(){
        GM.SetGameState(GameState.MAIN_MENU);
        Application.LoadLevel("Menu");
    }
}

You can see here that we just need to grab the Game Manager instance inside the Awake method. The same initialization will happen in the other scripts to get the current Game Manager state. After getting the Game Manager instance, we subscribe to the OnStateChange event and schedule the Menu scene to load after 3 seconds. The state change itself happens in LoadLevel, which calls the SetGameState method just before loading the scene; be careful not to call SetGameState from inside the handler itself, as that would fire the event again and recurse forever. If you run this scene now, however, you will get an error because we don't have the Menu scene yet. So let's create it! Create a new scene called Menu and add a csharp script called Menu.cs into this Scene.
Add the following code to Menu.cs:

using UnityEngine;
using System.Collections;

public class Menu : MonoBehaviour {
    SimpleGameManager GM;

    void Awake () {
        GM = SimpleGameManager.Instance;
        GM.OnStateChange += HandleOnStateChange;
    }

    public void HandleOnStateChange () {
        Debug.Log("OnStateChange!");
    }

    public void OnGUI(){
        //menu layout
        GUI.BeginGroup (new Rect (Screen.width / 2 - 50, Screen.height / 2 - 50, 100, 800));
        GUI.Box (new Rect (0, 0, 100, 200), "Menu");
        if (GUI.Button (new Rect (10, 40, 80, 30), "Start")){
            StartGame();
        }
        if (GUI.Button (new Rect (10, 160, 80, 30), "Quit")){
            Quit();
        }
        GUI.EndGroup();
    }

    public void StartGame(){
        //start game scene
        GM.SetGameState(GameState.GAME);
        Debug.Log(GM.gameState);
    }

    public void Quit(){
        Debug.Log("Quit!");
        Application.Quit();
    }
}

We added simple Unity GUI elements for this scene just as an example. Run the Intro scene and check the debug logs. You should see the messages as the Game State changes from the old state to the new state, with the instance kept between scenes. And there you have it! You can add more GameStates for multiple screens like Credits, High Score, Levels, etc. The code for these examples is on GitHub; feel free to fork and use it in your games! https://github.com/bttfgames/SimpleGameManager

About this Author

Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.
Building an information radiator, Part 2

Andrew Fisher
31 Dec 2014
9 min read
Code: https://gist.github.com/ajfisher/844975b824ec96c27c7c

// An information radiator light showing the forecast temperature in Melbourne.

I love lights; specifically I love LEDs - which have been described to me as "catnip for geeks". LEDs are low powered but bright, which means they can be embedded into all sorts of interesting places and, when coupled with a network, can be used for all sorts of ambient display purposes. In the first part of this series I explained how to use an Arduino and an RGB light disc attached to the network in order to create a networked light. In this part I'll show you how to scrape some data from a weather web site and use that to make your light change colors periodically to show the day or night time forecast temperature.

Scraping the data

Weather data is easy to get hold of using APIs; however, I'm going to do this the old fashioned way, as many interesting data sources may not have an API available for you to hit. This technique is going to use good old fashioned html scraping. To scrape the data I'll use a simple python script. If you've never used python before then go start here to get installed and get some familiarity with the language. I'll use the standard library's urllib2 module to request a page from Accuweather's site. The URL in this case is for the weather in Melbourne, Australia (where I live). You'll want to change this to your own home town. Once the request comes back, you get the html content of the page. It's possible to parse through all of this using regular expressions; however, a python module called beautifulsoup can interpret the response as a document of nodes rather than just text. Install beautifulsoup using:

pip install beautifulsoup

To get to the appropriate data you need to walk the document structure using selectors.
If you read through the html you can see that the temperature data is available in two LIs (li#feed-sml-1 and li#feed-sml-2), both of which use IDs, so that makes it very easy to pull the information out. The resulting text can be cleaned up with some string manipulation. The code below shows how to do this in beautifulsoup in order to get the forecasted temperature:

# load this up into beautiful soup
soup = BeautifulSoup(response.read())

# these IDs were discovered by reading the html and getting to the
# point where the next forecast occurs
forecast = soup.find('li', {'id':'feed-sml-1'})

# grab the temps and remove the cruft
temp_val = int(forecast.find('strong', {'class':'temp'}).text.replace("&deg;", ""))
print("Forecast: %d" % temp_val)

The other main part of the code looks at the temperature and makes some decisions about what colours mean what. In this case I'm using the following ranges:

<10C is bright blue - it's really chilly
10-20C is green - cool to mild
20-30C is yellow - lovely weather
30-40C is orange - hot
>40C is red - really really hot!

These ranges are based on the climate I have here in Melbourne, where it never gets below 0C but in summer regularly goes over 40; you can adjust these to what makes sense for where you live. This final snippet of code uses the telnet library to connect to the Arduino and send the payload:

tn = Telnet()
tn.open(arduino_ip)
tn.write(str(colour))
tn.write("\n")
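Translated literally into code, those ranges look like this (the pure-hue RGB values are my reading of "blue/green/yellow/orange/red"; the full listing later in the article mixes slightly different values). The final lines also show the payload exactly as the telnet snippet sends it - a Python dict literal plus a newline, which the Part 1 firmware is assumed to parse:

```python
# Map a forecast temperature (deg C) to an RGB colour per the ranges above.
def colour_for(temp_c):
    if temp_c < 10:
        return {"r": 0, "g": 0, "b": 255}    # bright blue - really chilly
    elif temp_c < 20:
        return {"r": 0, "g": 255, "b": 0}    # green - cool to mild
    elif temp_c < 30:
        return {"r": 255, "g": 255, "b": 0}  # yellow - lovely weather
    elif temp_c < 40:
        return {"r": 255, "g": 128, "b": 0}  # orange - hot
    else:
        return {"r": 255, "g": 0, "b": 0}    # red - really really hot

colour = colour_for(35)
payload = str(colour) + "\n"  # what tn.write() sends to the Arduino
print(payload.strip())  # {'r': 255, 'g': 128, 'b': 0}
```

Keeping the mapping in one small function like this makes it easy to tweak the thresholds for your own climate without touching the scraping or networking code.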
tn = Telnet()
tn.open(arduino_ip)
tn.write(str(colour))
tn.write("\n")

The full code listing is below:

#!/usr/bin/python
# This script will periodically go and check the weather for a given
# location and return the max and min forecasted temperature for the next
# 24 hours.
# Once this is retrieved it sends a message to a networked arduino in the form
# of an RGB colour map in order to control a light showing expected temps.
#
# Author:   Andrew Fisher <ajfisher>
# Version:  0.1

from datetime import datetime
from telnetlib import Telnet
import urllib2
from BeautifulSoup import BeautifulSoup

# This is specific to Melbourne, change it to your location
weather_url = "http://www.accuweather.com/en/au/melbourne/26216/weather-forecast/26216"

# details of your arduino on your network. Change to yours
arduino_ip = "10.0.1.91"

response = urllib2.urlopen(weather_url)

# load this up into beautiful soup
soup = BeautifulSoup(response.read())

# these IDs were discovered by reading the html and getting to the
# point where the forecast exists.
Use the "next" item forecast = soup.find('li', {'id':'feed-sml-1'}) # grab the temps and remove the cruft temp_val = int(forecast.find('strong', {'class':'temp'}).text.replace("&deg;", ""))  print("Forecast temp is %d" % temp_val) # convert to colour rangeif temp_val <= 10:    red = 0    blue = 255    green = 20elif temp_val > 10 and temp_val <=20:    red = 128    blue = 0    green = 255elif temp_val >20 and temp_val <= 30:    red = 128    blue = 0    green = 128elif temp_val > 30 and temp_val <= 40:    red = 255    blue = 0    green = 128else:    red = 255    blue = 0    green = 0 colour = { "r": red, "b": blue,"g": green } # Send message to arduinotn = Telnet()tn.open(arduino_ip)tn.write(str(colour))tn.write("n") You can test this now by running python weather.py If everything is working you will see the arduino light up with the appropriate colour based on your forecast. To automate this, make a scheduled task on windows or a cron job on linux / mac to run each hour to run this script and it will update the light display. Mount your light somewhere you can see it and you’ll be able to determine whether it’s shorts weather or you’ll need an extra blanket on the bed tonight. // A lovely day in Melbourne forecast. Perfect t-shirt weather. ## Going further Now you know how to make an information radiator from a networked device you can make your own. Here are some ideas to take it further Upgrade the protocol to include a duration so the light will turn off or dim after a period of time (good for frequent messages). Use python to talk to an API instead of scraping - eg twitter's public stream looking for keywords. Include multiple lights in a display in order to show probability of rain as well as forecasted temperature Mill, mould or 3d print a light fitting to go around your device and make it a piece of ambient art. About the author Andrew Fisher is a creator (and destroyer) of things that combine mobile web, ubicomp, and lots of data. 
He is a programmer, interaction researcher, and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter at @ajfisher.
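If you end up tuning the temperature bands for your own climate, the `if`/`elif` chain in the listing can get fiddly to edit. One way to keep the bands in one place is a threshold table. Here is a minimal sketch of that approach; the function and table names are my own, not part of the original script, but the colour values are the same ones the listing uses:

```python
# A table-driven version of the script's temperature-to-colour mapping.
# Each entry is (upper limit in C, (red, green, blue)).
BANDS = [
    (10, (0, 20, 255)),    # <=10C: bright blue - really chilly
    (20, (128, 255, 0)),   # 10-20C: green - cool to mild
    (30, (128, 128, 0)),   # 20-30C: yellow - lovely weather
    (40, (255, 128, 0)),   # 30-40C: orange - hot
]
TOO_HOT = (255, 0, 0)      # >40C: red - really, really hot!

def temp_to_colour(temp_val):
    """Return an (r, g, b) tuple for a forecast temperature in Celsius."""
    for limit, colour in BANDS:
        if temp_val <= limit:
            return colour
    return TOO_HOT

print(temp_to_colour(15))  # -> (128, 255, 0)
```

With this in place, adjusting a band for your local weather means editing a single line of the table rather than rewriting the conditionals.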