Protecting GPG Keys in BeagleBone

Packt
24 Sep 2014
23 min read
In this article by Josh Datko, author of BeagleBone for Secret Agents, you will learn how to use the BeagleBone Black to safeguard e-mail encryption keys. (For more resources related to this topic, see here.) After our investigation into BBB hardware security, we'll now use that technology to protect your personal encryption keys for the popular GPG software. GPG is a free implementation of the OpenPGP standard. This standard was developed based on the work of Philip Zimmerman and his Pretty Good Privacy (PGP) software. PGP has a complex socio-political backstory, which we'll briefly cover before getting into the project. For the project, we'll treat the BBB as a separate cryptographic co-processor and use the CryptoCape, with a keypad code entry device, to protect our GPG keys when they are not in use. Specifically, we will do the following: Tell you a little about the history and importance of the PGP software Perform basic threat modeling to analyze your project Create a strong PGP key using the free GPG software Teach you to use the TPM to protect encryption keys History of PGP The software used in this article would have once been considered a munition by the U.S. Government. Exporting it without a license from the government, would have violated the International Traffic in Arms Regulations (ITAR). As late as the early 1990s, cryptography was heavily controlled and restricted. While the early 90s are filled with numerous accounts by crypto-activists, all of which are well documented in Steven Levy's Crypto, there is one man in particular who was the driving force behind the software in this project: Philip Zimmerman. Philip Zimmerman had a small pet project around the year 1990, which he called Pretty Good Privacy. Motivated by a strong childhood passion for codes and ciphers, combined with a sense of political activism against a government capable of strong electronic surveillance, he set out to create a strong encryption program for the people (Levy 2001). One incident in particular helped to motivate Zimmerman to finish PGP and publish his work. This was the language that the then U.S. Senator Joseph Biden added to Senate Bill #266, which would mandate that: "Providers of electronic communication services and manufacturers of electronic communications service equipment shall ensure that communication systems permit the government to obtain the plaintext contents of voice, data, and other communications when appropriately authorized by law." In 1991, in a rush to release PGP 1.0 before it was illegal, Zimmerman released his software as a freeware to the Internet. Subsequently, after PGP spread, the U.S. Government opened a criminal investigation on Zimmerman for the violation of the U.S. export laws. Zimmerman, in what is best described as a legal hack, published the entire source code of PGP, including instructions on how to scan it back into digital form, as a book. As Zimmerman describes: "It would be politically difficult for the Government to prohibit the export of a book that anyone may find in a public library or a bookstore."                                                                                                                           (Zimmerman, 1995) A book published in the public domain would no longer fall under ITAR export controls. The genie was out of the bottle; the government dropped its case against Zimmerman in 1996. Reflecting on the Crypto Wars Zimmerman's battle is considered a resilient victory. 
Many other outspoken supporters of strong cryptography, known as cypherpunks, also won battles popularizing and spreading encryption technology. But if the Crypto Wars were won in the early nineties, why hasn't cryptography become ubiquitous? Well, to a degree, it has. When you make purchases online, it should be protected by strong cryptography. Almost nobody would insist that their bank or online store not use cryptography and most probably feel more secure that they do. But what about personal privacy protecting software? For these tools, habits must change as the normal e-mail, chat, and web browsing tools are insecure by default. This change causes tension and resistance towards adoption. Also, security tools are notoriously hard to use. In the seminal paper on security usability, researchers conclude that the then PGP version 5.0, complete with a Graphical User Interface (GUI), was not able to prevent users, who were inexperienced with cryptography but all of whom had at least some college education, from making catastrophic security errors (Whitten 1999). Glenn Greenwald delayed his initial contact with Edward Snowden for roughly two months because he thought GPG was too complicated to use (Greenwald, 2014). Snowden absolutely refused to share anything with Greenwald until he installed GPG. GPG and PGP enable an individual to protect their own communications. Implicitly, you must also trust the receiving party not to forward your plaintext communication. GPG expects you to protect your private key and does not rely on a third party. While this adds some complexity and maintenance processes, trusting a third party with your private key can be disastrous. In August of 2013, Ladar Levison decided to shut down his own company, Lavabit, an e-mail provider, rather than turn over his users' data to the authorities. Levison courageously pulled the plug on his company rather then turn over the data. The Lavabit service generated and stored your private key. While this key was encrypted to the user's password, it still enabled the server to have access to the raw key. Even though the Lavabit service alleviated users from managing their private key themselves, it enabled the awkward position for Levison. To use GPG properly, you should never turn over your private key. For a complete analysis of Lavabit, see Moxie Marlinspike's blog post at http://www.thoughtcrime.org/blog/lavabit-critique/. Given the breadth and depth of state surveillance capabilities, there is a re-kindled interest in protecting one's privacy. Researchers are now designing secure protocols, with these threats in mind (Borisov, 2014). Philip Zimmerman ended the chapter on Why Do You Need PGP? in the Official PGP User's Guide with the following statement, which is as true today as it was when first inked: "PGP empowers people to take their privacy into their own hands. There's a growing social need for it." Developing a threat model We introduced the concept of a threat model. A threat model is an analysis of the security of the system that identifies assets, threats, vulnerabilities, and risks. Like any model, the depth of the analysis can vary. In the upcoming section, we'll present a cursory analysis so that you can start thinking about this process. This analysis will also help us understand the capabilities and limitations of our project. Outlining the key protection system The first step of our analysis is to clearly provide a description of the system we are trying to protect. 
In this project, we'll build a logical GPG co-processor using the BBB and the CryptoCape. We'll store the GPG keys on the BBB and then connect to the BBB over Secure Shell (SSH) to use the keys and to run GPG. The CryptoCape will be used to encrypt your GPG key when not in use, known as at rest. We'll add a keypad to collect a numeric code, which will be provided to the TPM. This will allow the TPM to unwrap your GPG key. The idea for this project was inspired by Peter Gutmann's work on open source cryptographic co-processors (Gutmann, 2000). The BBB, when acting as a co-processor to a host, is extremely flexible, and considering the power usage, relatively high in performance. By running sensitive code that will have access to cleartext encryption keys on a separate hardware, we gain an extra layer of protection (or at the minimum, a layer of indirection). Identifying the assets we need to protect Before we can protect anything, we must know what to protect. The most important assets are the GPG private keys. With these keys, an attacker can decrypt past encrypted messages, recover future messages, and use the keys to impersonate you. By protecting your private key, we are also protecting your reputation, which is another asset. Our decrypted messages are also an asset. An attacker may not care about your key if he/she can easily access your decrypted messages. The BBB itself is an asset that needs protecting. If the BBB is rendered inoperable, then an attacker has successfully prevented you from accessing your private keys, which is known as a Denial-Of-Service (DOS). Threat identification To identify the threats against our system, we need to classify the capabilities of our adversaries. This is a highly personal analysis, but we can generalize our adversaries into three archetypes: a well funded state actor, a skilled cracker, and a jealous ex-lover. The state actor has nearly limitless resources both from a financial and personnel point of view. The cracker is a skilled operator, but lacks the funding and resources of the state actor. The jealous ex-lover is not a sophisticated computer attacker, but is very motivated to do you harm. Unfortunately, if you are the target of directed surveillance from a state actor, you probably have much bigger problems than your GPG keys. This actor can put your entire life under monitoring and why go through the trouble of stealing your GPG keys when the hidden video camera in the wall records everything on your screen. Also, it's reasonable to assume that everyone you are communicating with is also under surveillance and it only takes one mistake from one person to reveal your plans for world domination. The adage by Benjamin Franklin is apropos here: Three may keep a secret if two of them are dead. However, properly using GPG will protect you from global passive surveillance. When used correctly, neither your Internet Service Provider, nor your e-mail provider, or any passive attacker would learn the contents of your messages. The passive adversary is not going to engage your system, but they could monitor a significant amount of Internet traffic in an attempt to collect it all. Therefore, the confidentiality of your message should remain protected. We'll assume the cracker trying to harm you is remote and does not have physical access to your BBB. We'll also assume the worst case that the cracker has compromised your host machine. In this scenario there is, unfortunately, a lot that the cracker can perform. 
He can install a key logger and capture everything, including the password that is typed on your computer. He will not be able to get the code that we'll enter on the BBB; however, he would be able to log in to the BBB when the key is available. The jealous ex-lover doesn't understand computers very well, but he doesn't need to, because he knows how to use a golf club. He knows that this BBB connected to your computer is somehow important to you because you've talked his ear off about this really cool project that you read in a book. He physically can destroy the BBB and with it, your private key (and probably the relationship as well!). Identifying the risks How likely are the previous risks? The risk of active government surveillance in most countries is fortunately low. However, the consequences of this attack are very damaging. The risk of being caught up in passive surveillance by a state actor, as we have learned from Edward Snowden, is very likely. However, by using GPG, we add protection against this threat. An active cracker seeking you harm is probably unlikely. Contracting keystroke-capturing malware, however, is probably not an unreasonable event. A 2013 study by Microsoft concluded that 8 out of every 1,000 computers were infected with malware. You may be tempted to play these odds but let's rephrase this statement: in a group of 125 computers, one is infected with malware. A school or university easily has more computers than this. Lastly, only you can assess the risk of a jealous ex-lover. For the full Microsoft report, refer to http://blogs.technet.com/b/security/archive/2014/03/31/united-states-malware-infection-rate-more-than-doubles-in-the-first-half-of-2013.aspx. Mitigating the identified risks If you find yourself the target of a state, this project alone is not going to help much. We can protect ourselves somewhat from the cracker with two strategies. The first is instead of connecting the BBB to your laptop or computer, you can use the BBB as a standalone machine and transfer files via a microSD card. This is known as an air-gap. With a dedicated monitor and keyboard, it is much less likely for software vulnerabilities to break the gap and infect the BBB. However, this comes as a high level of personal inconvenience, depending on how often you encrypt files. If you consider the risk of running the BBB attached to your computer too high, create an air-gapped BBB for maximum protection. If you deem the risk low, because you've hardened your computer and have other protection mechanism, then keep the BBB attached to the computer. An air-gapped computer can still be compromised. In 2010, a highly specialized worm known as Stuxnet was able to spread to networked isolated machines through USB flash drives. The second strategy is to somehow enter the GPG passphrase directly into the BBB without using the host's keyboard. After we complete the project, we'll suggest a mechanism to do this, but it is slightly more complicated. This would eliminate the threat of the key logger since the pin is directly entered. The mitigation against the ex-lover is to treat your BBB as you would your own wallet, and don't leave it out of your sight. It's slightly larger than you would want, but it's certainly small enough to fit in a small backpack or briefcase. Summarizing our threat model Our threat model, while cursory, illustrates the thought process one should go through before using or developing security technologies. 
The term threat model is specific to the security industry, but it's really just proper planning. The purpose of this analysis is to find logic bugs and prevent you from spending thousands of dollars on high-tech locks for your front door while you keep your back door unlocked. Now that we understand what we are trying to protect and why it is important to use GPG, let's build the project.

Generating GPG keys

First, we need to install GPG on the BBB. It is most likely already installed, but you can check and install it with the following command:

    sudo apt-get install gnupg gnupg-curl

Next, we need to add a secret key. If you already have a secret key, you can import your secret key ring, secring.gpg, into your ~/.gnupg folder. If you want to create a new key on the BBB, proceed to the upcoming section. This project assumes some familiarity with GPG. If GPG is new to you, the Free Software Foundation maintains the Email Self-Defense guide, which is a very approachable introduction to the software and can be found at https://emailselfdefense.fsf.org/en/index.html.

Generating entropy

If you decided to create a new key on the BBB, there are a few technicalities we must consider. First of all, GPG will need a lot of random data to generate the keys. The amount of random data available in the kernel is proportional to the amount of entropy that is available. You can check the available entropy with the following command:

    cat /proc/sys/kernel/random/entropy_avail

If this command returns a relatively low number, under 200, then GPG will not have enough entropy to generate a key. On a PC, one can increase the amount of entropy by interacting with the computer, such as typing on the keyboard or moving the mouse. However, such sources of entropy are difficult to come by on embedded systems, and in our current setup, we don't have the luxury of moving a mouse. Fortunately, there are a few tools to help us. If your BBB is running kernel version 3.13 or later, we can use the hardware random number generator on the AM3358 to help us out. You'll need to install the rng-tools package. Once installed, you can edit /etc/default/rng-tools and add the following line to register the hardware random number generator with rng-tools:

    HRNGDEVICE=/dev/hwrng

After this, you should start the rng-tools daemon with:

    /etc/init.d/rng-tools start

If you don't have /dev/hwrng—and currently, the chips on the CryptoCape do not yet have character device support and aren't available at /dev/hwrng—then you can install haveged. This daemon implements the Hardware Volatile Entropy Gathering and Expansion (HAVEGE) algorithm, the details of which are available at http://www.irisa.fr/caps/projects/hipsor/. This daemon will ensure that the BBB maintains a pool of entropy, which will be sufficient for generating a GPG key on the BBB.

Creating a good gpg.conf file

Before you generate your key, we need to establish some more secure defaults for GPG. As we discussed earlier, it is still not as easy as it should be to use e-mail encryption. Riseup.net, an e-mail provider with a strong social cause, maintains an OpenPGP best practices guide at https://help.riseup.net/en/security/message-security/openpgp/best-practices. This guide details how to harden your GPG configuration and provides the motivation behind each option. It is well worth a read to understand the intricacies of GPG key management.
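Pulling the entropy-related commands above together, the setup on a Debian-based BBB image might look roughly like the following sketch; the package names and paths are the ones described above, so adjust them for your distribution:

    # Check how much entropy the kernel currently has available
    cat /proc/sys/kernel/random/entropy_avail

    # Option 1: kernel 3.13 or later, use the AM3358 hardware RNG
    sudo apt-get install rng-tools
    echo 'HRNGDEVICE=/dev/hwrng' | sudo tee -a /etc/default/rng-tools
    sudo /etc/init.d/rng-tools start

    # Option 2: no /dev/hwrng available, fall back to haveged
    sudo apt-get install haveged

    # Re-check; the pool should now stay comfortably above 200
    cat /proc/sys/kernel/random/entropy_avail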
Jacob Applebaum maintains an implementation of these best practices, which you should download from https://github.com/ioerror/duraconf/raw/master/configs/gnupg/gpg.conf and save as your ~/.gnupg/gpg.conf file. The configuration is well commented and you can refer to the best practices guide available at Riseup.net for more information. There are three entries, however, that you should modify. The first is default-key, which is the fingerprint of your primary GPG key. Later in this article, we'll show you how to retrieve that fingerprint. We can't perform this action now because we don't have a key yet. The second is keyserver-options ca-cert-file, which is the certificate authority for the keyserver pool. Keyservers host your public keys and a keyserver pool is a redundant collection of keyservers. The instructions on Riseup.net give the details on how to download and install that certificate. Lastly, you can use Tor to fetch updates on your keys. The act of requesting a public key from a keyserver signals that you have a potential interest in communicating with the owner of that key. This metadata might be more interesting to a passive adversary than the contents of your message, since it reveals your social network. Tor is adept at protecting against traffic analysis. You probably don't want to store your GPG keys on the same BBB as your Tor bridge, so a second BBB would help here. On your GPG BBB, you only need to run Tor as a client, which is its default configuration. Then you can update keyserver-options http-proxy to point to your Tor SOCKS proxy running on localhost.

The Electronic Frontier Foundation (EFF) provides some hypothetical examples of the telling nature of metadata, for example: "They (the government) know you called the suicide prevention hotline from the Golden Gate Bridge. But the topic of the call remains a secret." Refer to the EFF blog post at https://www.eff.org/deeplinks/2013/06/why-metadata-matters for more details.

Generating the key

Now you can generate your GPG key. Follow the on-screen instructions and don't include a comment. Depending on your entropy source, this could take a while. This example took 10 minutes using haveged as the entropy collector. There are various opinions on what to set as the expiration date. If this is your first GPG key, try one year at first. You can always make a new key or extend the same one. If you set the key to never expire and you lose the key, by forgetting the passphrase, people will still think it's valid unless you revoke it. Also, be sure to set the user ID to a name that matches some sort of identification, which will make it easier for people to verify that the holder of the private key is the same person as a certified piece of paper. The command to create a new key is gpg --gen-key:

    Please select what kind of key you want:
       (1) RSA and RSA (default)
       (2) DSA and Elgamal
       (3) DSA (sign only)
       (4) RSA (sign only)
    Your selection? 1
    RSA keys may be between 1024 and 4096 bits long.
    What keysize do you want? (2048) 4096
    Requested keysize is 4096 bits
    Please specify how long the key should be valid.
             0 = key does not expire
          <n>  = key expires in n days
          <n>w = key expires in n weeks
          <n>m = key expires in n months
          <n>y = key expires in n years
    Key is valid for? (0) 1y
    Key expires at Sat 06 Jun 2015 10:07:07 PM UTC
    Is this correct? (y/N) y

    You need a user ID to identify your key; the software constructs the user ID
    from the Real Name, Comment and Email Address in this form:
        "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

    Real name: Tyrone Slothrop
    Email address: tyrone.slothrop@yoyodyne.com
    Comment:
    You selected this USER-ID:
        "Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>"

    Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
    You need a Passphrase to protect your secret key.

    We need to generate a lot of random bytes. It is a good idea to perform
    some other action (type on the keyboard, move the mouse, utilize the
    disks) during the prime generation; this gives the random number
    generator a better chance to gain enough entropy.
    ......+++++
    ..+++++

    gpg: key 0xABD9088171345468 marked as ultimately trusted
    public and secret key created and signed.

    gpg: checking the trustdb
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: next trustdb check due at 2015-06-06
    pub   4096R/0xABD9088171345468 2014-06-06 [expires: 2015-06-06]
          Key fingerprint = CBF9 1404 7214 55C5 C477  B688 ABD9 0881 7134 5468
    uid                 [ultimate] Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>
    sub   4096R/0x9DB8B6ACC7949DD1 2014-06-06 [expires: 2015-06-06]

    gpg --gen-key  320.62s user 0.32s system 51% cpu 10:23.26 total

From this example, we know that our secret key is 0xABD9088171345468. If you end up creating multiple keys, but use just one of them more regularly, you can edit your gpg.conf file and add the following line:

    default-key 0xABD9088171345468

Postgeneration maintenance

In order for people to send you encrypted messages, they need to know your public key. Posting your key to a public keyserver can help distribute it. You can post your key as follows, replacing the fingerprint with your primary key ID:

    gpg --send-keys 0xABD9088171345468

GPG does not rely on third parties and expects you to perform key management. To ease this burden, the OpenPGP standards define the Web-of-Trust as a mechanism to verify other users' keys. Details on how to participate in the Web-of-Trust can be found in the GPG Privacy Handbook at https://www.gnupg.org/gph/en/manual/x334.html. You are also going to want to create a revocation certificate. A revocation certificate is needed when you want to revoke your key. You would do this when the key has been compromised, say if it was stolen, or, more likely, if the BBB fails and you can no longer access your key. Generate the certificate and follow the ensuing prompts, replacing the ID with your key ID:

    gpg --output revocation-certificate.asc --gen-revoke 0xABD9088171345468

    sec  4096R/0xABD9088171345468 2014-06-06 Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>

    Create a revocation certificate for this key? (y/N) y
    Please select the reason for the revocation:
      0 = No reason specified
      1 = Key has been compromised
      2 = Key is superseded
      3 = Key is no longer used
      Q = Cancel
    (Probably you want to select 1 here)
    Your decision? 0
    Enter an optional description; end it with an empty line:
    >
    Reason for revocation: No reason specified
    (No description given)
    Is this okay? (y/N) y

    You need a passphrase to unlock the secret key for
    user: "Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>"
    4096-bit RSA key, ID 0xABD9088171345468, created 2014-06-06

    ASCII armored output forced.
    Revocation certificate created.
Please move it to a medium which you can hide away; if Mallory gets access to this certificate he can use it to make your key unusable. It is smart to print this certificate and store it away, just in case your media become unreadable. But have some caution: The print system of your machine might store the data and make it available to others! Do take the advice and move this file off the BeagleBone. Printing it out and storing it somewhere safe is a good option, or burn it to a CD. The lifespan of a CD or DVD may not be as long as you think. The United States National Archives Frequently Asked Questions (FAQ) page on optical storage media states that: "CD/DVD experiential life expectancy is 2 to 5 years even though published life expectancies are often cited as 10 years, 25 years, or longer." Refer to their website http://www.archives.gov/records-mgmt/initiatives/temp-opmedia-faq.html for more details. Lastly, create an encrypted backup of your encryption key and consider storing that in a safe location on durable media. Using GPG With your GPG private key created or imported, you can now use GPG on the BBB as you would on any other computer. You may have already installed Emacs on your host computer. If you follow the GNU/Linux instructions, you can also install Emacs on the BBB. If you do, you'll enjoy automatic GPG encryption and decryption for files that end in the .gpg extension. For example, suppose you want to send a message to your good friend, Pirate Prentice, whose GPG key you already have. Compose your message in Emacs, and then save it with a .gpg extension. Emacs will prompt you to select the public keys for encryption and will automatically encrypt the buffer. If a GPG-encrypted message is encrypted to a public key, with which you have the corresponding private key, Emacs will automatically decrypt the message if it ends with .gpg. When using Emacs from the terminal, the prompt for encryption should look like the following screenshot: Summary This article covered and taught you about how GPG can protect e-mail confidentiality Resources for Article: Further resources on this subject: Making the Unit Very Mobile - Controlling Legged Movement [Article] Pulse width modulator [Article] Home Security by BeagleBone [Article]
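Returning to the backup advice above, a minimal sketch for producing an encrypted, ASCII-armored backup of the secret key might look like the following; the key ID is the example from this article and the filenames are placeholders:

    # Export the secret key in ASCII-armored form
    gpg --armor --export-secret-keys 0xABD9088171345468 > secret-key-backup.asc

    # Wrap it in a passphrase-protected file (produces secret-key-backup.asc.gpg)
    gpg --symmetric --cipher-algo AES256 secret-key-backup.asc

    # Remove the plaintext copy and move the .gpg file to offline, durable media
    shred -u secret-key-backup.asc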
How to Build a Recommender by Running Mahout on Spark

Pat Ferrel
24 Sep 2014
7 min read
Mahout on Spark: Recommenders

There are big changes happening in Apache Mahout. For several years it was the go-to machine learning library for Hadoop. It contained most of the best-in-class algorithms for scalable machine learning, which means clustering, classification, and recommendations. But it was written for Hadoop and MapReduce. Today a number of new parallel execution engines show great promise in speeding calculations by 10-100x (Spark, H2O, Flink). That means that instead of buying 10 computers for a cluster, one will do. That should get your manager's attention. After releasing Mahout 0.9, the team decided to begin an aggressive retool using Spark, building in the flexibility to support other engines, and both H2O and Flink have shown active interest. This post is about moving the heart of Mahout's item-based collaborative filtering recommender to Spark.

Where we are

Mahout is currently on the 1.0 snapshot version, meaning we are working on what will be released as 1.0. For the past year or so, some of the team has been working on a Scala-based DSL (Domain Specific Language), which looks like Scala with R-like algebraic expressions. Since Scala supports not only operator overloading but also functional programming, it is a natural choice for building distributed code with rich linear algebra expressions. Currently we have an interactive shell that runs Scala with all of the R-like expression support. Think of it as R but supporting truly huge data in a completely distributed way. Many algorithms—the ones that can be expressed as simple linear algebra equations—are implemented with relative ease (SSVD, PCA). Scala also has lazy evaluation, which allows Mahout to slide a modern optimizer underneath the DSL. When an end product of a calculation is needed, the optimizer figures out the best path to follow and spins off the most efficient Spark jobs to accomplish the whole.

Recommenders

One of the first things we want to implement is the popular item-based recommender. But here, too, we'll introduce many innovations. It still starts from some linear algebra. Let's take the case of recommending purchases on an e-commerce site. The problem can be defined as follows:

    r_p = recommendations for purchases for a given user; a vector of item IDs and recommendation strengths
    h_p = history of purchases for a given user
    A   = the matrix of all purchases by all users; rows are users, columns are items,
          and for now we just flag a purchase, so the matrix is all ones and zeros

    r_p = [A'A]h_p

A'A is the matrix A transposed and then multiplied by A. This is the core cooccurrence or indicator matrix used in this style of recommender. Using the Mahout Scala DSL we could write the recommender as:

    val recs = (A.t %*% A) * userHist

This would produce a reasonable recommendation, but from experience we know that A'A is better calculated using a method called the Log Likelihood Ratio, which is a probabilistic measure of the importance of a cooccurrence (http://en.wikipedia.org/wiki/Likelihood-ratio_test). In general, when you see something like A'A, it can be replaced with a similarity comparison of each row with every other row. This will produce a matrix whose rows are items and whose columns are the same items. The magnitude of the value in the matrix determines the strength of similarity of the row item to the column item. In recommenders, the more similar the items, the more they were purchased by similar people.
The previous line of code is replaced by the following:

    val recs = CooccurrenceAnalysis.cooccurence(A) * userHist

However, this would take too much time to execute for each user as they visit the e-commerce site, so we'll handle that part outside of Mahout. First, let's talk about data preparation.

Item Similarity Driver

Creating the indicator matrix (A'A) is the core of this type of recommender. We have a quick, flexible way to create it from text log files, producing output in a form that is easy to digest. The job of data prep is greatly streamlined in the Mahout 1.0 snapshot. In the past, a user would have to do all the data prep themselves. This required translating their own user and item IDs into Mahout IDs, putting the data into text tuple files, and feeding them to the recommender. Out the other end you'd get a Hadoop binary file called a sequence file, and you'd have to translate the Mahout IDs back into something your application could understand. This is no longer required. To make this process much simpler, we created the spark-itemsimilarity command line tool. After installing Mahout, Hadoop, and Spark, and assuming you have logged user purchases in some directories in HDFS, we can probably read them in, calculate the indicator matrix, and write it out with no other prep required. The spark-itemsimilarity command line tool takes in text-delimited files, extracts the user ID and item ID, runs the cooccurrence analysis, and outputs a text file with your application's user and item IDs restored. Here is the sample input file, a simple comma-delimited format whose fields hold a timestamp, the user ID, the action used as a filter (purchase), and the item ID:

    Thu Jul 10 10:52:10.996,u1,purchase,iphone
    Fri Jul 11 13:52:51.018,u1,purchase,ipad
    Fri Jul 11 21:42:26.232,u2,purchase,nexus
    Sat Jul 12 09:43:12.434,u2,purchase,galaxy
    Sat Jul 12 19:58:09.975,u3,purchase,surface
    Sat Jul 12 23:08:03.457,u4,purchase,iphone
    Sun Jul 13 14:43:38.363,u4,purchase,galaxy

spark-itemsimilarity will create a Spark distributed dataset (RDD) to back the Mahout DRM (distributed row matrix) that holds this data:

    User/item   iPhone  iPad  Nexus  Galaxy  Surface
    u1             1      1     0      0       0
    u2             0      0     1      1       0
    u3             0      0     0      0       1
    u4             1      0     0      1       0

The output of the job is the LLR-computed "indicator matrix" and will contain this data:

    Item/item   iPhone       iPad         Nexus        Galaxy       Surface
    iPhone      0            1.726092435  0            0            0
    iPad        1.726092435  0            0            0            0
    Nexus       0            0            0            1.726092435  0
    Galaxy      0            0            1.726092435  0            0
    Surface     0            0            0            0            0

Reading this, we see that self-similarities have been removed, so the diagonal is all zeros. The iPhone is similar to the iPad and the Galaxy is similar to the Nexus. The output of the spark-itemsimilarity job can be formatted in various ways, but by default it looks like this:

    galaxy<tab>nexus:1.7260924347106847
    ipad<tab>iphone:1.7260924347106847
    nexus<tab>galaxy:1.7260924347106847
    iphone<tab>ipad:1.7260924347106847
    surface

On the e-commerce site, for the page displaying the Nexus, we can show that the Galaxy was purchased by the same people. Notice that application-specific IDs are preserved here and the text file is very easy to parse in text-delimited format. The numeric values are the strength of similarity, and for the cases where there are many similar products, you can sort on that value if you want to show only the highest-weighted recommendations. Still, this is only part way to an individual recommender. We have done the [A'A] part, but now we need to do the [A'A]h_p. Using the current user's purchase history will personalize the recommendations.
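To make that remaining [A'A]h_p step concrete, here is a plain Scala sketch (not the Mahout API) of how a user's history can be scored against the indicator matrix: each candidate item's score is the sum of the LLR strengths linking it to items in the history. The data structures and names are illustrative assumptions only.

    // A'A as a nested map: for each item, its similar items and their LLR strengths.
    // history is h_p, the set of items the current user has purchased.
    def recommend(indicators: Map[String, Map[String, Double]],
                  history: Set[String],
                  topN: Int = 10): Seq[(String, Double)] = {
      val scores = scala.collection.mutable.Map[String, Double]().withDefaultValue(0.0)
      for (h <- history;
           (item, llr) <- indicators.getOrElse(h, Map.empty[String, Double])
           if !history.contains(item))   // don't re-recommend what was already bought
        scores(item) += llr              // h_p[A'A]: sum strengths over the history
      scores.toSeq.sortBy(-_._2).take(topN)
    }

    // With the tiny dataset above (strengths rounded):
    val indicators = Map(
      "iphone" -> Map("ipad" -> 1.726),   "ipad"   -> Map("iphone" -> 1.726),
      "nexus"  -> Map("galaxy" -> 1.726), "galaxy" -> Map("nexus" -> 1.726))
    recommend(indicators, Set("iphone")) // Seq(("ipad", 1.726))

In production this lookup-and-sum is exactly what a search engine does very quickly, which is where the next post picks up.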
The next post in this series will talk about using a search engine to take this last step. About the author Pat is a serial entrepreneur, consultant, and Apache Mahout committer working on the next generation of Spark-based recommenders. He lives in Seattle and can be contacted through his site https://finderbots.com or @occam on Twitter. Want more Spark? We've got you covered - click here.
How to Build a Recommender Server with Mahout and Solr

Pat Ferrel
24 Sep 2014
8 min read
In the last post, Mahout on Spark: Recommenders, we talked about creating a co-occurrence indicator matrix for a recommender using Mahout. The goals for a recommender are many, but first it must be fast and must make "good" personalized recommendations. We'll start with the basics and improve on the "good" part as we go. As we saw last time, co-occurrence or item-based recommenders are described by:

    rp = hp[AtA]

Calculating [AtA]

We needed some more interesting data first, so I captured video preferences by mining the web. The target demo would be a Guide to Online Video. Input was collected for many users by simply logging their preferences to CSV text files:

    906507914,dislike,mars_needs_moms
    906507914,dislike,mars_needs_moms
    906507914,like,moneyball
    906507914,like,the_hunger_games
    906535685,like,wolf_creek
    906535685,like,inside
    906535685,like,le_passe
    576805712,like,persepolis
    576805712,dislike,barbarella
    576805712,like,gentlemans_agreement
    576805712,like,europa_report
    576805712,like,samsara
    596511523,dislike,a_hijacking
    596511523,like,the_kings_speech
    …

The technique for actually using multiple actions hasn't been described yet, so for now we'll use the dislikes in the application to filter out recommendations and use only the likes to calculate recommendations. That means we need to use only the "like" preferences. The Mahout 1.0 version of spark-itemsimilarity can read these files directly and filter out all but the lines with "like" in them:

    # --filter1 like : keep only the "like" lines
    # -fc 1 -ic 2    : columns holding the filter and the item ID
    mahout spark-itemsimilarity -i root-input-dir -o output-dir \
      --filter1 like -fc 1 -ic 2 --omitStrength

This will give us output like this:

    the_hunger_games_catching_fire<tab>holiday_inn 2_guns superman_man_of_steel five_card_stud district_13 blue_jasmine this_is_the_end riddick ...
    law_abiding_citizen<tab>stone ong_bak_2 the_grey american centurion edge_of_darkness orphan hausu buried ...
    munich<tab>the_devil_wears_prada brothers marie_antoinette singer brothers_grimm apocalypto ghost_dog_the_way_of_the_samurai ...
    private_parts<tab>puccini_for_beginners finding_forrester anger_management small_soldiers ice_age_2 karate_kid magicians ...
    world-war-z<tab>the_wolverine the_hunger_games_catching_fire ghost_rider_spirit_of_vengeance holiday_inn the_hangover_part_iii ...

This is a tab-delimited file with a video item ID followed by a space-delimited list of similar videos. The similar-video list may contain some surprises, because here "similarity" means "liked by similar people". It doesn't mean the videos were similar in content or genre, so don't worry if they look odd. We'll use another technique to make "on subject" recommendations later. Anyone familiar with the Mahout first-generation recommender will notice right away that we are using IDs that have meaning to the application, whereas before Mahout required its own integer IDs.

A fast, scalable similarity engine

In Mahout's first-generation recommenders, all recs were calculated for all users. This meant that new users would have to wait for a long-running batch job before they saw recommendations. In the Guide demo app, we want to make good recommendations to new users and use new preferences in real time. We have already calculated [AtA], indicating item similarities, so we need a real-time method for the final part of the equation rp = hp[AtA]. Capturing hp is the first task, and in the demo we log all actions to a database in real time. This may have scaling issues but is fine for a demo.
Now we will make use of the “multiply as similarity” idea we introduced in the first post. Multiplying hp[AtA] can be done with a fast similarity engine—otherwise known as a search engine. At their core, search engines are primarily similarity engines that index textual data (AKA a matrix of token vectors) and take text as the query (a token vector). Another way to look at this is that search engines find by example—they are optimized to find a collection of items by similarity to the query. We will use the search engine to find the most similar indicator vectors in [AtA] to our query hp, thereby producing rp. Using this method, rp will be the list of items returned from the search—row IDs in [AtA]. Spark-itemsimilarity is designed to create output that can be directly indexed by search engines. In the Guide demo we chose to create a catalog of items in a database and to use Solr to index columns in the database. Both Solr and Elasticsearch have highly scalable fast engines that can perform searches on database columns or a variety of text formats so you don’t have to use a database to store the indicators. We loaded the indicators along with some metadata about the items into the database like this: itemID foreign-key genres indicators 123 world-war-z sci-fi action the_wolverine … 456 captain_phillips action drama pi when_com… … So, the foreign-key is our video item ID from the indicator output and the indicator is the space-delimited list of similar video item IDs. We must now set the search engine to index the indicators. This integration is usually pretty simple and depends on what database you use or if you are storing the entire catalog in files (leaving the database out of the loop). Once you’ve triggered the indexing of your indicators, we are ready for the query. The query will be a preference history vector consisting of the same tokens/video item IDs you see in the indicators. For a known user these should be logged and available, perhaps in the database, but for a new user we’ll have to find a way to encourage preference feedback. New users The demo site asks a new user to create an account and run through a trainer that collects important preferences from the user. We can probably leave the details of how to ask for “important” preferences for later. Suffice to say, we clustered items and took popular ones from each cluster so that the users were more likely to have seen them. From this we see that the user liked: argo django iron_man_3 pi looper … Whether you are responding to a new user or just accounting for the most recent preferences of returning users, recentness is very important. Using the previous IDs as a query on the indicator field of the database returns recommendations, even though the new user’s data was not used to train the recommender. Here’s what we get: The first line shows the result of the search engine query for the new user. The trainer on the demo site has several pages of examples to rate and the more you rate the better the recommendations become, as one would expect but these look pretty good given only 9 ratings. I can make a value judgment because they were rated by me. In a small sampling of 20 people using the site and after having them complete the entire 20 pages of training examples, we asked them to tell us how many of the recommendations on the first line were things they liked or would like to see. We got 53-90% right. Only a few people participated and your data will vary greatly but this was at least some validation. 
The second line of recommendations and several more below it are calculated using a genre and this begins to show the power of the search engine method. In the trainer I picked movies where the number 1 genre was “drama”. If you have the search engine index both indicators as well as genres you can combine indicator and genre preferences in the query. To produce line 1 the query was: Query: indicator field: “argo django iron_man_3 pi looper …” To produce line 2 the query was: Query: indicator field: “argo django iron_man_3 pi looper …” genre field: “drama”; boost: 5 The boost is used to skew results towards a field. In practice this will give you mostly matching genres but is not the same as a filter, which can also be used if you want a guarantee that the results will be from “drama”. Conclusion Combining a search engine with Mahout created a recommender that is extremely fast and scalable but also seamlessly blends results using collaborative filtering data and metadata. Using metadata boosting in the query allows us to skew results in a direction that makes sense.  Using multiple fields in a single query gives us even more power than this example shows. It allows us to mix in different actions. Remember the “dislike” action that we discarded? One simple and reasonable way to use that is to filter results by things the user disliked, and the demo site does just that. But we can go even further; we can use dislikes in a cross-action recommender. Certain of the user’s dislikes might even predict what they will like, but that requires us to go back to the original equation so we’ll leave it for another post.  About the author Pat is a serial entrepreneur, consultant, and Apache Mahout committer working on the next generation of Spark-based recommenders. He lives in Seattle and can be contacted through his site at https://finderbots.com or @occam on Twitter.
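As a rough illustration of the boosted queries described above, a request to Solr might look something like the following sketch; the core name, field names, and the use of Solr's select handler here are assumptions that depend on how the catalog was actually indexed:

    # Line 1: query the indicator field with the user's recent "like" history
    curl "http://localhost:8983/solr/videos/select" \
      --data-urlencode "q=indicators:(argo django iron_man_3 pi looper)" \
      --data-urlencode "rows=10"

    # Line 2: same history, but skew results toward dramas with a field boost
    curl "http://localhost:8983/solr/videos/select" \
      --data-urlencode "q=indicators:(argo django iron_man_3 pi looper) genres:(drama)^5" \
      --data-urlencode "rows=10"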
Exploring the Usages of Delphi

Packt
24 Sep 2014
12 min read
This article, written by Daniele Teti, the author of Delphi Cookbook, explains the process of writing enumerable types. It also discusses the steps to customize FireMonkey controls. (For more resources related to this topic, see here.)

Writing enumerable types

When the for...in loop was introduced in Delphi 2005, the concept of enumerable types was also introduced into the Delphi language. As you know, there are some built-in enumerable types. However, you can create your own enumerable types using a very simple pattern. To make your container enumerable, implement a single method called GetEnumerator, which must return a reference to an object, interface, or record that implements the following three methods and one property (in the sample, the element to enumerate is TFoo):

    function GetCurrent: TFoo;
    function MoveNext: Boolean;
    property Current: TFoo read GetCurrent;

There are a lot of samples related to standard enumerable types, so in this recipe you'll look at some not-so-common utilizations.

Getting ready

In this recipe, you'll see a file enumerable function as it exists in other, mostly dynamic, languages. The goal is to enumerate all the rows in a text file without actually opening, reading, and closing the file, as shown in the following code:

    var
      row: String;
    begin
      for row in EachRows('....myfile.txt') do
        WriteLn(row);
    end;

Nice, isn't it? Let's start…

How to do it...

We have to create an enumerable function result. The function simply returns the actual enumerable type. This type is not freed automatically by the compiler, so you have to use a value type or an interfaced type. For the sake of simplicity, let's code to return a record type:

    function EachRows(const AFileName: String): TFileEnumerable;
    begin
      Result := TFileEnumerable.Create(AFileName);
    end;

The TFileEnumerable type is defined as follows:

    type
      TFileEnumerable = record
      private
        FFileName: string;
      public
        constructor Create(AFileName: String);
        function GetEnumerator: TEnumerator<String>;
      end;

    . . .

    constructor TFileEnumerable.Create(AFileName: String);
    begin
      FFileName := AFileName;
    end;

    function TFileEnumerable.GetEnumerator: TEnumerator<String>;
    begin
      Result := TFileEnumerator.Create(FFileName);
    end;

No logic here; this record is required only because you need a type that has a GetEnumerator method defined. This method is called automatically by the compiler when the type is used on the right side of the for..in loop. An interesting thing happens in the TFileEnumerator type, the actual enumerator, declared in the implementation section of the unit. Remember, this object is automatically freed by the compiler because it is the return value of the GetEnumerator call:

    type
      TFileEnumerator = class(TEnumerator<String>)
      private
        FCurrent: String;
        FFile: TStreamReader;
      protected
        constructor Create(AFileName: String);
        destructor Destroy; override;
        function DoGetCurrent: String; override;
        function DoMoveNext: Boolean; override;
      end;

    { TFileEnumerator }

    constructor TFileEnumerator.Create(AFileName: String);
    begin
      inherited Create;
      FFile := TFile.OpenText(AFileName);
    end;

    destructor TFileEnumerator.Destroy;
    begin
      FFile.Free;
      inherited;
    end;

    function TFileEnumerator.DoGetCurrent: String;
    begin
      Result := FCurrent;
    end;

    function TFileEnumerator.DoMoveNext: Boolean;
    begin
      Result := not FFile.EndOfStream;
      if Result then
        FCurrent := FFile.ReadLine;
    end;

The enumerator inherits from TEnumerator<String> because each row of the file is represented as a string. This class also provides the mechanism to implement the required methods.
The DoGetCurrent method (called internally by the TEnumerator<T>.GetCurrent method) returns the current line. The DoMoveNext method (called internally by the TEnumerator<T>.MoveNext method) returns true or false depending on whether there are more lines to read in the file. Remember that this method is called before the first call to the GetCurrent method. After the first call to the DoMoveNext method, FCurrent is properly set to the first row of the file. The compiler generates a piece of code similar to the following pseudocode:

    it := typetoenumerate.GetEnumerator;
    while it.MoveNext do
    begin
      S := it.Current;
      // do something useful with string S
    end;
    it.Free;

There's more…

Enumerable types are really powerful and help you to write less, and less error-prone, code. There are some shortcuts to iterate over in-place data without even creating an actual container. If you have a bunch of integers, or if you want to create a non-homogeneous for loop over some kind of data type, you can use the new TArray<T> type as shown here:

    for i in TArray<Integer>.Create(2, 4, 8, 16) do
      WriteLn(i); // writes 2 4 8 16

TArray<T> is a generic type, so the same works also for strings:

    for s in TArray<String>.Create('Hello', 'Delphi', 'World') do
      WriteLn(s);

It can also be used for Plain Old Delphi Objects (PODOs) or controls:

    for btn in TArray<TButton>.Create(btn1, btn31, btn2) do
      btn.Enabled := False;

See also

http://docwiki.embarcadero.com/RADStudio/XE6/en/Declarations_and_Statements#Iteration_Over_Containers_Using_For_statements: This Embarcadero documentation provides a detailed introduction to enumerable types.

Giving a new appearance to the standard FireMonkey controls using styles

Since Version XE2, RAD Studio includes FireMonkey. FireMonkey is an amazing library. It is a really ambitious target for Embarcadero, but it's important for its long-term strategy. VCL is and will remain a Windows-only library, while FireMonkey has been designed to be completely OS and device independent. You can develop one application and compile it anywhere (if anywhere is contained in Windows, OS X, Android, and iOS; let's say that is a good part of anywhere).

Getting ready

A styled component doesn't know how it will be rendered on the screen; the style does. By changing the style, you can change the appearance of the component without changing its code. The relation between the component code and the style is similar to the relation between HTML and CSS: one is the content and the other is the presentation. In terms of FireMonkey, the component code contains the actual functionality the component has, but the appearance is completely handled by the associated style. All the TStyledControl child classes support styles. Let's say you have to create an application to find a holiday house for a travel agency. Your customer wants a nice-looking application their customers can use to search for their dream house. Your graphic design department (if present) decided to create a semitransparent look and feel, as shown in the following screenshot, and you have to create such an interface. How do you do that? This is the UI we want.

How to do it…

In this case, you require some step-by-step instructions, so here they are: Create a new FireMonkey desktop application (navigate to File | New | FireMonkey Desktop Application). Drop a TImage component on the form. Set its Align property to alClient, and use the MultiResBitmap property and its property editor to load a nice-looking picture. Set the WrapMode property to iwFit and resize the form to let the image cover the entire form.
Now, drop a TEdit component and a TListBox component over the TImage component. Name the TEdit component as EditSearch and the TListBox component as ListBoxHouses. Set the Scale property of the TEdit and TListBox components to the following values: Scale.X: 2 Scale.Y: 2 Your form should now look like this: The form with the standard components The actions to be performed by the users are very simple. They should write some search criteria in the Edit field and click on Return. Then, the listbox shows all the houses available for that criteria (with a "contains" search). In a real app, you require a database or a web service to query, but this is a sample so you'll use fake search criteria on fake data. Add the RandomUtilsU.pas file from the Commons folder of the project and add it to the uses clause of the main form. Create an OnKeyUp event handler for the TEdit component and write the following code inside it: procedure TForm1.EditSearchKeyUp(Sender: TObject;      var Key: Word; var KeyChar: Char; Shift: TShiftState); var I: Integer; House: string; SearchText: string; begin if Key <> vkReturn then    Exit;   // this is a fake search... ListBoxHouses.Clear; SearchText := EditSearch.Text.ToUpper; //now, gets 50 random houses and match the criteria for I := 1 to 50 do begin    House := GetRndHouse;    if House.ToUpper.Contains(SearchText) then      ListBoxHouses.Items.Add(House); end; if ListBoxHouses.Count > 0 then    ListBoxHouses.ItemIndex := 0 else    ListBoxHouses.Items.Add('<Sorry, no houses found>'); ListBoxHouses.SetFocus; end; Run the application and try it to familiarize yourself with the behavior. Now, you have a working application, but you still need to make it transparent. Let's start with the FireMonkey Style Designer (FSD). Just to be clear, at the time of writing, the FireMonkey Style Designer is far to be perfect. It works, but it is not a pleasure to work with it. However, it does its job. Right-click on the TEdit component. From the contextual menu, choose Edit Custom Style (general information about styles and the style editor can be found at http://docwiki.embarcadero.com/RADStudio/XE6/en/FireMonkey_Style_Designer and http://docwiki.embarcadero.com/RADStudio/XE6/en/Editing_a_FireMonkey_Style). Delphi opens a new tab that contains the FSD. However, to work with it, you need the Structure pane to be visible as well (navigate to View | Structure or Shift + Alt + F11). In the Structure pane, there are all the styles used by the TEdit control. You should see a Structure pane similar to the following screenshot: The Structure pane showing the default style for the TEdit control In the Structure pane, open the editsearchstyle1 node, select the background subnode, and go to the Object Inspector. In the Object Inspector window, remove the content of the SourceLookup property. The background part of the style is TActiveStyleObject. A TActiveStyleObject style is a style that is able to show a part of an image as default and another part of the same image when the component that uses it is active, checked, focused, mouse hovered, pressed, or selected. The image to use is in the SourceLookup property. Our TEdit component must be completely transparent in every state, so we removed the value of the SourceLookup property. Now the TEdit component is completely invisible. Click on Apply and Close and run the application. As you can confirm, the edit works but it is completely transparent. Close the application. 
When you opened the FSD for the first time, a TStyleBook component has been automatically dropped on the form and contains all your custom styles. Double-click on it and the style designer opens again. The edit, as you saw, is transparent, but it is not usable at all. You need to see at least where to click and write. Let's add a small bottom line to the edit style, just like a small underline. To perform the next step, you require the Tool Palette window and the Structure pane visible. Here is my preferred setup for this situation: The Structure pane and the Tool Palette window are visible at the same time using the docking mechanism; you can also use the floating windows if you wish Now, search for a TLine component in the Tool Palette window. Drag-and-drop the TLine component onto the editsearchstyle1 node in the Structure pane. Yes, you have to drop a component from the Tool Palette window directly onto the Structure pane. Now, select the TLine component in the Structure Pane (do not use the FSD to select the components, you have to use the Structure pane nodes). In the Object Inspector, set the following properties: Align: alContents HitTest: False LineType: ltTop RotationAngle: 180 Opacity: 0.6 Click on Apply and Close. Run the application. Now, the text is underlined by a small black line that makes it easy to identify that the application is transparent. Stop the application. Now, you've to work on the listbox; it is still 100 percent opaque. Right-click on the ListBoxHouses option and click on Edit Custom Style. In the Structure pane, there are some new styles related to the TListBox class. Select the listboxhousesstyle1 option, open it, and select its child style, background. In the Object Inspector, change the Opacity property of the background style to 0.6. Click on Apply and Close. That's it! Run the application, write Calif in the Edit field and press Return. You should see a nice-looking application with a semitransparent user interface showing your dream houses in California (just like it was shown in the screenshot in the Getting ready section of this recipe). Are you amazed by the power of FireMonkey styles? How it works... The trick used in this recipe is simple. If you require a transparent UI, just identify which part of the style of each component is responsible to draw the background of the component. Then, put the Opacity setting to a level less than 1 (0.6 or 0.7 could be enough for most cases). Why not simply change the Opacity property of the component? Because if you change the Opacity property of the component, the whole component will be drawn with that opacity. However, you need only the background to be transparent; the inner text must be completely opaque. This is the reason why you changed the style and not the component property. In the case of the TEdit component, you completely removed the painting when you removed the SourceLookup property from TActiveStyleObject that draws the background. As a thumb rule, if you have to change the appearance of a control, check its properties. If the required customization is not possible using only the properties, then change the style. There's more… If you are new to FireMonkey styles, probably most concepts in this recipe must have been difficult to grasp. 
If so, check the official documentation on the Embarcadero DocWiki at the following URL: http://docwiki.embarcadero.com/RADStudio/XE6/en/Customizing_FireMonkey_Applications_with_Styles Summary In this article, we discussed ways to write enumerable types in Delphi. We also discussed how we can use styles to make our FireMonkey controls look better. Resources for Article: Further resources on this subject: Adding Graphics to the Map [Article] Application Performance [Article] Coding for the Real-time Web [Article]


Creating an Extension in Yii 2

Packt
24 Sep 2014
22 min read
In this article by Mark Safronov, co-author of the book Web Application Development with Yii 2 and PHP, we'll learn to create our own extension using a simple way of installation. There is a process we have to follow, and some preparation will be needed to wire up your classes to the Yii application. The whole article will be devoted to this process. (For more resources related to this topic, see here.) Extension idea So, how are we going to extend the Yii 2 framework as an example for this article? Let's become vile this time and make a malicious extension, which will provide a sort of phishing backdoor for us. Never do exactly the thing we'll describe in this article! It won't give you instant access to the attacked website anyway, but a skilled black hat hacker can easily get enough information to achieve total control over your application. The idea is this: our extension will provide a special route (a controller with a single action inside), which will dump the complete application configuration to the web page. Let's say it'll be reachable from the route /app-info/configuration. We cannot, however, reliably get the contents of the configuration file itself. At the point where we can attach ourselves to the application instance, the original configuration array is inaccessible, and even if it were accessible, we couldn't be sure where it came from anyway. So, we'll inspect the runtime status of the application and return the most important pieces of information we can fetch at the stage of the controller action resolution. That's the exact payload we want to introduce:

public function actionConfiguration()
{
    $app = Yii::$app;
    $config = [
        'components' => $app->components,
        'basePath' => $app->basePath,
        'params' => $app->params,
        'aliases' => Yii::$aliases
    ];
    return \yii\helpers\Json::encode($config);
}

The preceding code is the core of the extension and is assumed in the following sections. In fact, if you know the value of the basePath setting of the application, a list of its aliases, the settings for its components (among which the DB connection may reside), and all the custom parameters that developers set manually, you can map the target application quite reliably. Given that you know all the credentials this way, you have an enormous amount of highly valuable information about the application. All you need to do now is make the user install this extension. Creating the extension contents Our plan is as follows: We will develop our extension in a folder, which is different from our example CRM application. This extension will be named yii2-malicious, to be consistent with the naming of other Yii 2 extensions. Given the kind of payload we saw earlier, our extension will consist of a single controller and some special wiring code (which we haven't learned about yet) to automatically attach this controller to the application. Finally, to consider this subproject a true Yii 2 extension and not just some random library, we want it to be installable in the same way as other Yii 2 extensions. Preparing the boilerplate code for the extension Let's make a separate directory, initialize the Git repository there, and add the AppInfoController to it. 
In the bash command line, it can be achieved by the following commands: $ mkdir yii2-malicious && cd $_$ git init$ > AppInfoController.php Inside the AppInfoController.php file, we'll write the usual boilerplate code for the Yii 2 controller as follows: namespace malicious;use yiiwebController;class AppInfoController extends Controller{// Action here} Put the action defined in the preceding code snippet inside this controller and we're done with it. Note the namespace: it is not the same as the folder this controller is in, and this is not according to our usual auto-loading rules. We will explore later in this article that this is not an issue because of how Yii 2 treats the auto-loading of classes from extensions. Now this controller needs to be wired to the application somehow. We already know that the application has a special property called controllerMap, in which we can manually attach controller classes. However, how do we do this automatically, better yet, right at the application startup time? Yii 2 has a special feature called bootstrapping to support exactly this: to attach some activity at the beginning of the application lifetime, though not at the very beginning but before handling the request for sure. This feature is tightly related to the extensions concept in Yii 2, so it's a perfect time to explain it. FEATURE – bootstrapping To explain the bootstrapping concept in short, you can declare some components of the application in the yiibaseApplication::$bootstrap property. They'll be properly instantiated at the start of the application. If any of these components implement the BootstrapInterface interface, its bootstrap() method will be called, so you'll get the application initialization enhancement for free. Let's elaborate on this. The yiibaseApplication::$bootstrap property holds the array of generic values that you tell the framework to initialize beforehand. It's basically an improvement over the preload concept from Yii 1.x. You can specify four kinds of values to initialize as follows: The ID of an application component The ID of some module A class name A configuration array If it's the ID of a component, this component is fully initialized. If it's the ID of a module, this module is fully initialized. It matters greatly because Yii 2 has lazy loading employed on the components and modules system, and they are usually initialized only when explicitly referenced. Being bootstrapped means to them that their initialization, regardless of whether it's slow or resource-consuming, always happens, and happens always at the start of the application. If you have a component and a module with identical IDs, then the component will be initialized and the module will not be initialized! If the value being mentioned in the bootstrap property is a class name or configuration array, then the instance of the class in question is created using the yiiBaseYii::createObject() facility. The instance created will be thrown away immediately if it doesn't implement the yiibaseBootstrapInterface interface. If it does, its bootstrap() method will be called. Then, the object will be thrown away. So, what's the effect of this bootstrapping feature? We already used this feature while installing the debug extension. We had to bootstrap the debug module using its ID, for it to be able to attach the event handler so that we would get the debug toolbar at the bottom of each page of our web application. 
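To make the bootstrapping feature more tangible, here is a minimal sketch of how such a declaration usually looks in the application configuration array (the component and module IDs are illustrative, not something our extension requires):

return [
    'id' => 'example-app',
    // IDs listed here are resolved and initialized before the request is handled
    'bootstrap' => ['log', 'debug'],
    'modules' => [
        'debug' => ['class' => 'yii\debug\Module'],
    ],
    'components' => [
        'log' => ['class' => 'yii\log\Dispatcher'],
    ],
    // ... the rest of the configuration
];

Listing 'debug' in bootstrap is exactly what the debug extension relies on: the module is forced to initialize early enough to register its event handlers.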
This feature is indispensable if you need to be sure that some activity will always take place at the start of the application lifetime. The BootstrapInterface interface is basically the incarnation of a command pattern. By implementing this interface, we gain the ability to attach any activity, not necessarily bound to the component or module, to the application initialization. FEATURE – extension registering The bootstrapping feature is repeated in the handling of the yiibaseApplication::$extensions property. This property is the only place where the concept of extension can be seen in the Yii framework. Extensions in this property are described as a list of arrays, and each of them should have the following fields: name: This field will be with the name of the extension. version: This field will be with the extension's version (nothing will really check it, so it's only for reference). bootstrap: This field will be with the data for this extension's Bootstrap. This field is filled with the same elements as that of Yii::$app->bootstrap described previously and has the same semantics. alias: This field will be with the mapping from Yii 2 path aliases to real directory paths. When the application registers the extension, it does two things in the following order: It registers the aliases from the extension, using the Yii::setAlias() method. It initializes the thing mentioned in the bootstrap of the extension in exactly the same way we described in the previous section. Note that the extensions' bootstraps are processed before the application's bootstraps. Registering aliases is crucial to the whole concept of extension in Yii 2. It's because of the Yii 2 PSR-4 compatible autoloader. Here is the quote from the documentation block for the yiiBaseYii::autoload() method: If the class is namespaced (e.g. yiibaseComponent), it will attempt to include the file associated with the corresponding path alias (e.g. @yii/base/Component.php). This autoloader allows loading classes that follow the PSR-4 standard and have its top-level namespace or sub-namespaces defined as path aliases. The PSR-4 standard is available online at http://www.php-fig.org/psr/psr-4/. Given that behavior, the alias setting of the extension is basically a way to tell the autoloader the name of the top-level namespace of the classes in your extension code base. Let's say you have the following value of the alias setting of your extension: "alias" => ["@companyname/extensionname" => "/some/absolute/path"] If you have the /some/absolute/path/subdirectory/ClassName.php file, and, according to PSR-4 rules, it contains the class whose fully qualified name is companynameextensionnamesubdirectoryClassName, Yii 2 will be able to autoload this class without problems. Making the bootstrap for our extension – hideous attachment of a controller We have a controller already prepared in our extension. Now we want this controller to be automatically attached to the application under attack when the extension is processed. This is achievable using the bootstrapping feature we just learned. 
Let's create the maliciousBootstrap class for this cause inside the code base of our extension, with the following boilerplate code: <?phpnamespace malicious;use yiibaseBootstrapInterface;class Bootstrap implements BootstrapInterface{/** @param yiiwebApplication $app */public function bootstrap($app){// Controller addition will be here.}} With this preparation, the bootstrap() method will be called at the start of the application, provided we wire everything up correctly. But first, we should consider how we manipulate the application to make use of our controller. This is easy, really, because there's the yiiwebApplication::$controllerMap property (don't forget that it's inherited from yiibaseModule, though). We'll just do the following inside the bootstrap() method: $app->controllerMap['app-info'] = 'maliciousAppInfoController'; We will rely on the composer and Yii 2 autoloaders to actually find maliciousAppInfoController. Just imagine that you can do anything inside the bootstrap. For example, you can open the CURL connection with some botnet and send the accumulated application information there. Never believe random extensions on the Web. This actually concludes what we need to do to complete our extension. All that's left now is to make our extension installable in the same way as other Yii 2 extensions we were using up until now. If you need to attach this malicious extension to your application manually, and you have a folder that holds the code base of the extension at the path /some/filesystem/path, then all you need to do is to write the following code inside the application configuration:  'extensions' => array_merge((require __DIR__ . '/../vendor/yiisoft/extensions.php'),['maliciousapp-info' => ['name' => 'Application Information Dumper','version' => '1.0.0','bootstrap' => 'maliciousBootstrap','alias' => ['@malicious' =>'/some/filesystem/path']// that's the path to extension]]) Please note the exact way of specifying the extensions setting. We're merging the contents of the extensions.php file supplied by the Yii 2 distribution from composer and our own manual definition of the extension. This extensions.php file is what allows Yiisoft to distribute the extensions in such a way that you are able to install them by a simple, single invocation of a require composer command. Let's learn now what we need to do to repeat this feature. Making the extension installable as... erm, extension First, to make it clear, we are talking here only about the situation when Yii 2 is installed by composer, and we want our extension to be installable through the composer as well. This gives us the baseline under all of our assumptions. Let's see the extensions that we need to install: Gii the code generator The Twitter Bootstrap extension The Debug extension The SwiftMailer extension We can install all of these extensions using composer. We introduce the extensions.php file reference when we install the Gii extension. Have a look at the following code: 'extensions' => (require __DIR__ . '/../vendor/yiisoft/extensions.php') If we open the vendor/yiisoft/extensions.php file (given that all extensions from the preceding list were installed) and look at its contents, we'll see the following code (note that in your installation, it can be different): <?php $vendorDir = dirname(__DIR__); return array ( 'yiisoft/yii2-bootstrap' => array ( 'name' => 'yiisoft/yii2-bootstrap', 'version' => '9999999-dev', 'alias' => array ( '@yii/bootstrap' => $vendorDir . 
'/yiisoft/yii2-bootstrap', ), ), 'yiisoft/yii2-swiftmailer' => array ( 'name' => 'yiisoft/yii2-swiftmailer', 'version' => '9999999-dev', 'alias' => array ( '@yii/swiftmailer' => $vendorDir . ' /yiisoft/yii2-swiftmailer', ), ), 'yiisoft/yii2-debug' => array ( 'name' => 'yiisoft/yii2-debug', 'version' => '9999999-dev', 'alias' => array ( '@yii/debug' => $vendorDir . '/yiisoft/yii2-debug', ), ), 'yiisoft/yii2-gii' => array ( 'name' => 'yiisoft/yii2-gii', 'version' => '9999999-dev', 'alias' => array ( '@yii/gii' => $vendorDir . '/yiisoft/yii2-gii', ), ), ); One extension was highlighted to stand out from the others. So, what does all this mean to us? First, it means that Yii 2 somehow generates the required configuration snippet automatically when you install the extension's composer package Second, it means that each extension provided by the Yii 2 framework distribution will ultimately be registered in the extensions setting of the application Third, all the classes in the extensions are made available in the main application code base by the carefully crafted alias settings inside the extension configuration Fourth, ultimately, easy installation of Yii 2 extensions is made possible by some integration between the Yii framework and the composer distribution system The magic is hidden inside the composer.json manifest of the extensions built into Yii 2. The details about the structure of this manifest are written in the documentation of composer, which is available at https://getcomposer.org/doc/04-schema.md. We'll need only one field, though, and that is type. Yii 2 employs a special type of composer package, named yii2-extension. If you check the manifests of yii2-debug, yii2-swiftmail and other extensions, you'll see that they all have the following line inside: "type": "yii2-extension", Normally composer will not understand that this type of package is to be installed. But the main yii2 package, containing the framework itself, depends on the special auxiliary yii2-composer package: "require": {… other requirements ..."yiisoft/yii2-composer": "*", This package provides Composer Custom Installer (read about it at https://getcomposer.org/doc/articles/custom-installers.md), which enables this package type. The whole point in the yii2-extension package type is to automatically update the extensions.php file with the information from the extension's manifest file. Basically, all we need to do now is to craft the correct composer.json manifest file inside the extension's code base. Let's write it step by step. Preparing the correct composer.json manifest We first need a block with an identity. Have a look at the following lines of code: "name": "malicious/app-info","version": "1.0.0","description": "Example extension which reveals importantinformation about the application","keywords": ["yii2", "application-info", "example-extension"],"license": "CC-0", Technically, we must provide only name. Even version can be omitted if our package meets two prerequisites: It is distributed from some version control system repository, such as the Git repository It has tags in this repository, correctly identifying the versions in the commit history And we do not want to bother with it right now. Next, we need to depend on the Yii 2 framework just in case. 
Normally, users will install the extension after the framework is already in place, but in the case of the extension already being listed in the require section of composer.json, among other things, we cannot be sure about the exact ordering of the require statements, so it's better (and easier) to just declare dependency explicitly as follows: "require": {"yiisoft/yii2": "*"}, Then, we must provide the type as follows: "type": "yii2-extension", After this, for the Yii 2 extension installer, we have to provide two additional blocks; autoload will be used to correctly fill the alias section of the extension configuration. Have a look at the following code: "autoload": {"psr-4": {"malicious\": ""}}, What we basically mean is that our classes are laid out according to PSR-4 rules in such a way that the classes in the malicious namespace are placed right inside the root folder. The second block is extra, in which we tell the installer that we want to declare a bootstrap section for the extension configuration: "extra": {"bootstrap": "malicious\Bootstrap"}, Our manifest file is complete now. Commit everything to the version control system: $ git commit -a -m "Added the Composer manifest file to repo" Now, we'll add the tag at last, corresponding to the version we declared as follows: $ git tag 1.0.0 We already mentioned earlier the purpose for which we're doing this. All that's left is to tell the composer from where to fetch the extension contents. Configuring the repositories We need to configure some kind of repository for the extension now so that it is installable. The easiest way is to use the Packagist service, available at https://packagist.org/, which has seamless integration with composer. It has the following pro and con: Pro: You don't need to declare anything additional in the composer.json file of the application you want to attach the extension to Con: You must have a public VCS repository (either Git, SVN, or Mercurial) where your extension is published In our case, where we are just in fact learning about how to install things using composer, we certainly do not want to make our extension public. Do not use Packagist for the extension example we are building in this article. Let's recall our goal. Our goal is to be able to install our extension by calling the following command at the root of the code base of some Yii 2 application: $ php composer.phar require "malicious/app-info:*" After that, we should see something like the following screenshot after requesting the /app-info/configuration route: This corresponds to the following structure (the screenshot is from the http://jsonviewer.stack.hu/ web service): Put the extension to some public repository, for example, GitHub, and register a package at Packagist. This command will then work without any preparation in the composer.json manifest file of the target application. But in our case, we will not make this extension public, and so we have two options left for us. The first option, which is perfectly suited to our learning cause, is to use the archived package directly. 
For this, you have to add the repositories section to composer.json in the code base of the application you want to add the extension to: "repositories": [// definitions of repositories for the packages required by thisapplication] To specify the repository for the package that should be installed from the ZIP archive, you have to grab the entire contents of the composer.json manifest file of this package (in our case, our malicious/app-info extension) and put them as an element of the repositories section, verbatim. This is the most complex way to set up the composer package requirement, but this way, you can depend on absolutely any folder with files (packaged into an archive). Of course, the contents of composer.json of the extension do not specify the actual location of the extension's files. You have to add this to repositories manually. In the end, you should have the following additional section inside the composer.json manifest file of the target application: "repositories": [{"type": "package","package": {// … skipping whatever were copied verbatim from the composer.jsonof extension..."dist": {"url": "/home/vagrant/malicious.zip", // example filelocation"type": "zip"}}}] This way, we specify the location of the package in the filesystem of the same machine and tell the composer that this package is a ZIP archive. Now, you should just zip the contents of the yii2-malicious folder we have created for the extension, put them somewhere at the target machine, and provide the correct URL. Please note that it's necessary to archive only the contents of the extension and not the folder itself. After this, you run composer on the machine that really has this URL accessible (you can use http:// type of URLs, of course, too), and then you get the following response from composer: To check that Yii 2 really installed the extension, you can open the file vendor/yiisoft/extensions.php and check whether it contains the following block now: 'malicious/app-info' =>array ('name' => 'malicious/app-info','version' => '1.0.0.0','alias' =>array ('@malicious' => $vendorDir . '/malicious/app-info',),'bootstrap' => 'malicious\Bootstrap',), (The indentation was preserved as is from the actual file.) If this block is indeed there, then all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should. The pros and cons of the file-based installation are as follows: Pros Cons You can specify any file as long as it is reachable by some URL. The ZIP archive management capabilities exist on virtually any kind of platform today. There is too much work in the composer.json manifest file of the target application. The requirement to copy the entire manifest to the repositories section is overwhelming and leads to code duplication. You don't need to set up any version control system repository. It's of dubious benefit though. The manifest from the extension package will not be processed at all. This means that you cannot just strip the entry in repositories, leaving only the dist and name sections there, because the Yii 2 installer will not be able to get to the autoloader and extra sections. The last method is to use the local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed here, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself. 
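For convenience, here is what the extension's finished composer.json looks like when the fragments shown earlier are put together (this is only a recap of the values we already chose; note that backslashes must be escaped inside JSON strings):

{
    "name": "malicious/app-info",
    "version": "1.0.0",
    "description": "Example extension which reveals important information about the application",
    "keywords": ["yii2", "application-info", "example-extension"],
    "license": "CC-0",
    "type": "yii2-extension",
    "require": {
        "yiisoft/yii2": "*"
    },
    "autoload": {
        "psr-4": {
            "malicious\\": ""
        }
    },
    "extra": {
        "bootstrap": "malicious\\Bootstrap"
    }
}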
Now, we need to modify the target application's manifest to add the repositories section in the same way we did previously, but this time we will introduce a lot less code there: "repositories": [{"type": "git","url": "/home/vagrant/yii2-malicious/" // put your own URLhere}] All that's needed from you is to specify the correct URL to the Git repository of the extension we were preparing at the beginning of this article. After you specify this repository in the target application's composer manifest, you can just issue the desired command: $ php composer.phar require "malicious/app-info:1.0.0" Everything will be installed as usual. Confirm the successful installation again by having a look at the contents of vendor/yiisoft/extensions.php and by accessing the /app-info/configuration route in the application. The pros and con of the repository-based installation are as follows: Pro: Relatively little code to write in the application's manifest. Pro: You don't need to really publish your extension (or the package in general). In some settings, it's really useful, for closed-source software, for example. Con: You still have to meddle with the manifest of the application itself, which can be out of your control and in this case, you'll have to guide your users about how to install your extension, which is not good for PR. In short, the following pieces inside the composer.json manifest turn the arbitrary composer package into the Yii 2 extension: First, we tell composer to use the special Yii 2 installer for packages as follows: "type": "yii2-extension" Then, we tell the Yii 2 extension installer where the bootstrap for the extension (if any) is as follows: "extra": {"bootstrap": "<Fully qualified name>"} Next, we tell the Yii 2 extension installer how to prepare aliases for your extension so that classes can be autoloaded as follows: "autoloader": {"psr-4": { "namespace": "<folder path>"}} Finally, we add the explicit requirement of the Yii 2 framework itself in the following code, so we'll be sure that the Yii 2 extension installer will be installed at all: "require": {"yiisoft/yii2": "*"} Everything else is the details of the installation of any other composer package, which you can read in the official composer documentation. Summary In this article, we looked at how Yii 2 implements its extensions so that they're easily installable by a single composer invocation and can be automatically attached to the application afterwards. We learned that this required some level of integration between these two systems, Yii 2 and composer, and in turn this requires some additional preparation from you as a developer of the extension. We used a really silly, even a bit dangerous, example for extension. It was for three reasons: The extension was fun to make (we hope) We showed that using bootstrap mechanics, we can basically automatically wire up the pieces of the extension to the target application without any need for elaborate manual installation instructions We showed the potential danger in installing random extensions from the Web, as an extension can run absolutely arbitrary code right at the application initialization and more than that, at each request made to the application We have discussed three methods of distribution of composer packages, which also apply to the Yii 2 extensions. The general rule of thumb is this: if you want your extension to be publicly available, just use the Packagist service. 
In any other case, use the local repositories, as you can use both local filesystem paths and web URLs. We looked at the option to attach the extension completely manually, not using the composer installation at all. Resources for Article: Further resources on this subject: Yii: Adding Users and User Management to Your Site [Article] Meet Yii [Article] Yii 1.1: Using Zii Components [Article]


JavaScript Promises – Why Should I Care?

Packt
23 Sep 2014
8 min read
This article by Rami Sarieddine, the author of the book JavaScript Promises Essentials, introduces JavaScript promises and reasons why should you care about promises when comparing it to the common way of doing things asynchronously. (For more resources related to this topic, see here.) Why should I care about promises? What do promises have to do with all of this? Well, let's start by defining promises. "A promise represents the eventual result of an asynchronous operation." - Promises/A+ specification, http://promisesaplus.com/ So a promise object represents a value that may not be available yet, but will be resolved at some point in the future. Promises have states and at any point in time, can be in one of the following: Pending: The promise's value is not yet determined and its state may transition to either fulfilled or rejected. Fulfilled: The promise was fulfilled with success and now has a value that must not change. Additionally, it must not transition to any other state from the fulfilled state. Rejected: The promise is returned from a failed operation and must have a reason for failure. This reason must not change and the promise must not transition to any other state from this state. A promise may only move from the pending state to the fulfilled state or from the pending state to the rejected state. However, once a promise is either fulfilled or rejected, it must not transition to any other state and its value cannot change because it is immutable. The immutable characteristic of promises is super important. It helps evade undesired side-effects from listeners, which can cause unexpected changes in behavior, and in turn allows promises to be passed to other functions without affecting the caller function. From an API perspective, a promise is defined as an object that has a function as the value for the property then. The promise object has a primary then method that returns a new promise object. Its syntax will look like the following: then(onFulfilled, onRejected); The following two arguments are basically callback functions that will be called for completion of a promise: onFulfilled: This argument is called when a promise is fulfilled onRejected: This argument is called when a promise has failed Bear in mind that both the arguments are optional. Moreover, non-function values for the arguments will be ignored, so it might be a good practice to always check whether the arguments passed are functions before executing them. It is worth noting that when you research promises, you might come across two definitions/specs: one based on Promises/A+ and an older one based on Promises/A by CommonJS. The new promise returned by the then method is resolved when the given onFulfilled or onRejected callback is completed. The implementation reflects a very simple concept: when a promise is fulfilled, it has a value, and when it is rejected, it has a reason. The following is a simple example of how to use a promise: promise.then(function (value){    var result = JSON.parse(data).value;    }, function (reason) {    alert(error.message); }); The fact that the value returned from the callback handler is the fulfillment value for the returned promise allows promise operations to be chained together. Hence, we will have something like the following: $.getJSON('example.json').then(JSON.parse).then(function(response) {    alert("Hello There: ", response); }); Well, you guessed it right! 
What the previous code sample does is chain the promise returned from the first then() call to the second then() call. Hence, the getJSON method will return a promise that contains the value of the JSON returned. Thus, we can call a then method on it, following which we will invoke another then call on the promise returned. This promise includes the value of JSON.parse. Eventually, we will take that value and display it in an alert. Can't I just use a callback? Callbacks are simple! We pass a function, it gets invoked at some point in the future, and we get to do things asynchronously. Additionally, callbacks are lightweight since we need to add extra libraries. Using functions as higher-order objects is already built into the JavaScript programming language; hence, we do not require additional code to use it. However, asynchronous programming in JavaScript can quickly become complicated if not dealt with care, especially callbacks. Callback functions tend to become difficult to maintain and debug when nested within long lines of code. Additionally, the use of anonymous inline functions in a callback can make reading the call stack very tedious. Also, when it comes to debugging, exceptions that are thrown back from within a deeply nested set of callbacks might not propagate properly up to the function that initiated the call within the chain, which makes it difficult to determine exactly where the error is located. Moreover, it is hard to structure a code that is based around callbacks as they roll out a messy code like a snowball. We will end up having something like the following code sample but on a much larger scale: function readJSON(filename, callback) {    fs.readFile(filename, function (err, result) {        if (err) return callback(err);        try {            result = JSON.parse(result, function (err, result) {                fun.readAsync(result, function (err, result) {                    alert("I'm inside this loop now");                    });                 alert("I'm here now");                });            } catch (ex) {        return callback(ex);        }    callback(null, result);    }); } The sample code in the previous example is an excerpt of a deeply nested code that is sometimes referred to as the pyramid of doom. Such a code, when it grows, will make it a daunting task to read through, structure, maintain, and debug. Promises, on the other hand, provide an abstraction to manage interactions with asynchronous APIs and present a more managed approach towards asynchronous programming in JavaScript when compared to the use of callbacks and event handlers. We can think of promises as more of a pattern for asynchronous programming. Simply put, the promises pattern will allow the asynchronous programming to move from the continuation-passing style that is widespread to one where the functions we call return a value, called a promise that will represent the eventual results of that particular operation. 
It allows you to go from: call1(function (value1) {    call2(value1, function(value2) {        call3(value2, function(value3) {            call4(value3, function(value4) {                // execute some code            });        });    }); }); To: Promise.asynCall(promisedStep1) .then(promisedStep2) .then(promisedStep3) .then(promisedStep4) .then(function (value4) {    // execute some code }); If we list the properties that make promises easier to work with, they will be as follows: It is easier to read as in cleaner method signatures It allows us to attach more than one callback to a single promise It allows for values and errors to be passed along, and bubble up to the caller function It allows for chaining of promises What we can observe is that promises bring functional composition to synchronous capabilities by returning values, and error bubbling by throwing exceptions to the asynchronous functions. These are capabilities that we take for granted in the synchronous world. The following sample (dummy) code shows the difference between using callbacks to compose asynchronous functions communicating with each other and promises to do the same. The following is an example with callbacks:    $("#testInpt").click(function () {        firstCallBack(function (param) {            getValues(param, function (result) {                alert(result);            });        });    }); The following is a code example that converts the previous callback functions to promise-returning functions that can be chained to each other:    $("#testInpt").clickPromise() // promise-returning function    .then(firstCallBack)    .then(getValues)    .then(alert); As we have seen, the flat chains that promises provide allow us to have code that is easier to read and eventually easier to maintain when compared to the traditional callback approach. Summary Promises are a pattern that allows for a standardized approach in asynchronous programming, which enables developers to write asynchronous code that is more readable and maintainable. Resources for Article: Further resources on this subject: REST – Where It Begins [Article] Uploading multiple files [Article] Grunt in Action [Article]

Installing RHEV Manager

Packt
23 Sep 2014
15 min read
This article by Pradeep Subramanian, author of Getting Started with Red Hat Enterprise Virtualization, describes setting up RHEV-M, including the installation, initial configuration, and connection to the administrator and user portal of the manager web interface. (For more resources related to this topic, see here.) Setting up the RHEL operating system for the manager Prior to starting the installation of RHEV-M, please make sure all the prerequisite are met to set up RHEV environment. Consider the following when setting up RHEL OS for RHEV-M: Install Red Hat Enterprise Linux 6 with latest minor update of 5, and during package selection step, select minimal or basic server as an option. Don't select any custom package. The hostname should be set to FQDN. Set up basic networking; use of static IP is recommended for your manager with a default gateway and primary and secondary DNS client configured. SELinux and iptables are enabled by default as part of the operating system installation. For more security, it's highly recommended to keep it on. To disable SELinux on Red Hat Enterprise Linux, please run the following command as the root user: # setenforce Permissive This command will switch off SELinux enforcement temporarily until the machine is rebooted. If you would like to permanently disable it, edit /etc/sysconfig/selinux and enter SELINUX=disabled. Registering with Red Hat Network To install RHEV-M, you need to first register your manager machine with Red Hat Network and subscribe to the relevant channels. You need to connect your machine to the Red Hat Network with a valid account with access to the relevant software channels to register your machine and deploy RHEV-M packages. If your environment does not have access to the Red Hat Network, you can perform an offline installation of RHEV-M. For more information, please refer to https://access.redhat.com/site/articles/216983. To register your machine with the Red Hat Network using RHN Classic, please run the following command from the shell and follow the onscreen instructions: # rhn_register This command will register your manager machine to the parent channel of your operating system version. It's strongly recommended to use Red Hat Subscription Manager to register and subscribe to the relevant channel. To use Red Hat Subscription Manager, please refer to the Subscribing to the Red Hat Enterprise Virtualization Manager Channels using Subscription Manager section from the RHEV 3.3 installation guide at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Installation_Guide/index.html. After successful registration of your manager machine to the Red Hat Network, subscribe the manager machine using the following command to subscribe to the relevant channels. Then download and install the manager-related software packages. The following command will prompt you to enter your Red Hat Network login credentials: # rhn-channel -a -c rhel-x86_64-server-6-rhevm-3.3 -c rhel-x86_64-server-supplementary-6 -c jbappplatform-6-x86_64-server-6-rpm Username: "yourrhnlogin" Password: XXXX To cross-check whether your manager machine is registered with Red Hat Network and subscribed to the relevant channels, please run the following command. 
This will return all the channels mentioned earlier plus the base channel of your operating system version, as shown in the following yum command output: # yum repolist repo id repo name status jbappplatform-6-x86_64-server-6-rpm Red Hat JBoss EAP (v 6) for 6Server x86_64 1,415 rhel-x86_64-server-6 Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) 12,662 rhel-x86_64-server-6-rhevm-3.3 Red Hat Enterprise Virtualization Manager (v.3.3 x86_64) 164 rhel-x86_64-server-supplementary-6 RHEL Server Supplementary (v. 6 64-bit x86_64) 370 You are now ready to start downloading and installing the software required to set up and run your RHEV-M. Installing the RHEV-Manager packages Update your base Red Hat Enterprise Linux operating system to the latest up-to-date version by running the following command: # yum -y upgrade Reboot the machine if the upgrade installed the latest version of the kernel. After a successful upgrade, run the following command to install RHEV-M and its dependent packages: # yum -y install rhevm There are a few conditions you need to consider before configuring RHEV-M: We need a working DNS for forward and reverse lookup of FQDN. We are going to use the Red Hat IdM server configured with the DNS role in the rest of the article for domain name resolution of the entire virtualization infrastructure. Refer to the Red Hat Identity Management Guide for more information on how to add forward and reverse zone records to the configured IdM DNS at https://access.redhat.com/documentation/en- US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/Working_with_DNS.html. You can't install Identity Management software on the same box where the manager is going to be deployed due to some package conflicts. To store ISO images of operating systems in order to create a virtual machine, you need Network File Server (NFS) with a planned NFS export path. If your manager machine has sufficient storage space to host all your ISOs, you can set up the ISO domain while configuring the manager to set up the NFS share automatically through the installer to store all your ISO images. If you have an existing NFS server, it's recommended to use a dedicated export for the ISO domain to store the ISO images instead of using the manager server to serve the NFS service. Here we are going to use a dedicated local mount point named /rhev-iso-library on the RHEV Manager box to store our ISO images to provision the virtual machine. Note that the mount point should be empty and only contain the user and group ownership and permission sets before running the installer: # chown -R 36:36 /rhev-iso-library ; chmod 0755 /rhev-iso-library It will also be useful to have the following information at hand: Ports to be used for HTTP and HTTPS communication. FQDN of the manager. A reverse lookup is performed on your hostname. At the time of writing this article, RHEV supported only the PostgreSQL database for use with RHEV-M. You can use a local database or remote database setup. Here we are going to use the local database. In the case of a remote database setup, keep all database login credentials ready. Please refer to the Preparing a PostgreSQL Database for Use with Red Hat Enterprise Virtualization Manager section for detailed information on setting up a remote database to use with manager at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html-single/Installation_Guide/index.html#Preparing_a_Postgres_Database_Server_for_use_with_Red_Hat_Enterprise_Virtualization_Manager. 
Password for internal admin account of RHEV-M. Organization name for the RHEV-M SSL certificate. Leave the default storage type to NFS for the initial default data center. We will create a new data center in the latter stage of our setup. Provide the file system path and display name for NFS ISO library configuration, so that the manager will configure NFS of the supplied filesystem path, and make it visible by the display name under the Storage tab section on administration portal of RHEV-M. Running the initial engine setup Once you're prepared with all the answers to the questions we discussed in the previous section, it's time to run the initial configuration script called engine-setup to perform the initial configuration and setting up of RHEV-M. The installer will ask you several questions, which have been discussed above, and based on your input, it will configure your RHEV-M. Leave the default settings as they are and press Enter if you feel the installer's default answers are appropriate to your setup. Once the installer takes in all your input, it will ask you for the final confirmation of your supplied configuration setting; type in OK and press Enter to continue the setup. For better understanding, please refer to the following output of the engine-setup installer while setting up a lab for this article. Log in to manager as the root user, and from the shell of your Manager machine, run the following engine-setup command: # engine-setup Once you execute this command, engine-setup performs the following set of tasks on the system: First check whether any updates are available for this system. Accept the default Yes and proceed further: Checking for product updates and update if available. Enter Default Yes. Set the hostname of the RHEV-M system. The administration portal web access will get bound to the FQDN entered here: Host fully qualified DNS name of this server [rhevmanager.example.com]: Set up the firewall rule on the manager system, and this will backup your existing firewall rule configured on the manager system if any: Do you want Setup to configure the firewall? (Yes, No) [Yes]: No Local will set up the PostgreSQL database instance on the manager system; optionally, you can choose Remote to use the existing remote PostgreSQL database instance to use with manager: Where is the database located? (Local, Remote) [Local]: If you selected Local, you will get an option to customize the PostgreSQL database setup by choosing the relevant option: Would you like Setup to automatically configure PostgreSQL, or prefer to perform that manually? (Automatic, Manual) [Automatic]: Set up the internal admin user password to access the manager web interface for initial setup of the virtualization infrastructure: Engine admin password: Confirm engine admin password: RHEV supports the use of clusters to manage Gluster storage bricks in addition to virtualization hosts. Choosing both will give the flexibility to use hypervisor hosts to host virtual machines as well as other sets of hypervisor hosts to manage Gluster storage bricks in your RHEV environment: Application mode (Both, Virt, Gluster) [Both]: Engine installer creates a data center named Default as part of the initial setup. The following step will ask you to select the type of storage to be used with the data center. Mixing storage domains of different types is not supported in the 3.3 release, but it is supported in the latest 3.4 release. Choose the default NFS option and proceed further. 
We are going to create a new data center, using the administration portal, from scratch after the engine setup and then select the storage type as ISCSI for the rest of this article: Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]: The manager uses certificates to communicate securely with its hosts. Provide your organization's name for the certificate: Organization name for certificate [example.com]: The manager uses the Apache web server to present a landing page to users. The engine-setup script can make the landing page of the manager the default page presented by Apache: Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]: By default, external SSL (HTTPS) communications with the manager are secured with the self-signed certificate created in the PKI configuration stage for secure communication with hosts. Another certificate may be chosen for external HTTPS connections without affecting how the manager communicates with hosts: Setup can configure apache to use SSL using a certificate issued from the internal CA. Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]: Choose Yes to set up an NFS share on the manager system and provide the export path to be used to dump the ISO images in a later part. Finally, label the ISO domain with a name that will be unique and easily identifiable on the Storage tab of the administration portal: Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]: Local ISO domain path [/var/lib/exports/iso]: /rhev-iso-library Local ISO domain name [ISO_DOMAIN]: ISO_Datastore The engine-setup script can optionally configure a WebSocket proxy server in order to allow users to connect with virtual machines via the noVNC or HTML 5 consoles: Configure WebSocket Proxy on this machine? (Yes, No) [Yes]: The final step will ask you to provide proxy server credentials if the manager system is hosted behind the proxy server to access the Internet. RHEV supports vRed Hat Access Plugin, which will help you collect the logs and open a service request with Red Hat Global Support Services from the administration portal of the manager: Would you like transactions from the Red Hat Access Plugin sent from the RHEV Manager to be brokered through a proxy server? (Yes, No) [No]: Finally, if you feel all the input and configurations are satisfactory, press Enter to complete the engine setup. It will show you the configuration preview, and if you feel satisfied, press OK: Please confirm installation settings (OK, Cancel) [OK]: After the successful setup of RHEV-M, you can see the summary, which will show various bits of information such as how to access the admin portal of RHEV-M, the installed logs, the configured iptables firewall, the required ports, and so on. Connecting to the admin and user portal 006C Now access the admin portal, as shown in the following screenshot, using the following URLs: http://rhevmanager.example.com:80/ovirt-engine https://rhevmanager.example.com:443/ovirt-engine Use the user admin and password specified during the setup to log in to the oVirt engine (also called RHEV-M). Click on Administration Portal and log in using the credentials you set up for the admin account during the engine setup. Then click on User Portal and log in using the credentials you set up for the admin account during the engine setup. You will see a difference in the portal with a very trimmed-down user interface that is useful for self-service. 
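If a portal does not load, it can be useful to confirm from the shell that the engine is actually running and listening; the following quick checks assume a default installation, so the service name and URL may differ in customized setups:

# service ovirt-engine status
# curl -k -I https://rhevmanager.example.com/ovirt-engine/

The first command reports the status of the engine service, while the second simply fetches the HTTP headers of the welcome page over SSL, ignoring the self-signed certificate.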
We will see how to integrate the manager with other active directory services and efficiently use the user portal for self-service consumption later in the article. RHEV reporting RHEV bundles two optional components. The first is the history management database, which holds the historical information of various virtualization resources such as data centers, clusters, hosts, virtual machines, and others so that any other external application can consume them for reporting. The second optional component is the customized JasperServer and JasperReports. JasperServer is an open source reporting tool capable of generating and exporting reports in various formats such as PDF, Word, and CSV for end user consumption. To enable the reporting functionality, you need to install the specific components that we discussed. For simplicity, we are installing both the components at one go using the command described in the following section. Installing the RHEV history database and report server To install the history database and report servers, execute the following command: # yum install rhevm-dwh rhevm-reports Once you have installed the reporting components, you need to start with setting up the RHEV history database by using the following command: # rhevm-dwh-setup This will momentarily stop and start the oVirt engine service during the setup. Further, it will ask you to create a read-only user account to access the history database. Create it if you want to allow remote access to the history database and follow the onscreen instructions and finish the setup. Once the oVirt engine history database (also known as the RHEV Manager history database) is created, move on to setting up the report server. From the RHEV-M server, run the following command to set up the reporting server: # rhevm-reports-setup #setup will prompt to restart ovirt-engine service. In order to proceed the installer must stop the ovirt-engine service Would you like to stop the ovirt-engine service? (yes|no): #The command then performs a number of actions before prompting you to set the password for the Red Hat Enterprise Virtualization Manager Reports administrative users (rhevm-admin and superuser) Please choose a password for the reports admin user(s) (rhevm-admin and superuser): Downloading the example code You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. Follow the onscreen instructions and enter Yes to stop the oVirt-engine and set up a password for the default internal super user account called rhevm-admin to access and manage the report portal and proceed further with the setup. Note that this user is different from the internal admin account we set up during the engine setup of RHEV-M. The rhevm-admin user is used only for accessing and managing the report portal, not for the admin or user portal. Accessing the RHEV report portal After the successful installation and initial configuration setup of the report portal, you can access it by https://rhevmanager.example.com/rhevm-reports/login.html from your client machine. You can also access the report portal from the manager web interface by clicking on the Reports Portal hyperlink, which will redirect you to the report portal. 
Log in with the rhevm-admin user and the password we set while running the RHEV-M report setup script in the previous section to generate reports and to create and manage the users who may access the report portal. Initially, most of the report portal is empty since we are yet to set up and create the virtual infrastructure. It will take at least a day or two after the complete virtualization infrastructure setup to view the various resources and generate reports. To learn more about gathering reports with the report portal, please refer to Reports, History Database Reports, and Dashboards at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Administration_Guide/chap-Reports_History_Database_Reports_and_Dashboards.html. Summary In this article, we discussed setting up our basic virtualization infrastructure, which included installing RHEV-M and the report server and connecting to the various portals: admin, user, and report. Resources for Article: Further resources on this subject: Designing a XenDesktop® Site [article] XenMobile™ Solutions Bundle [article] Installing Virtual Desktop Agent – server OS and desktop OS [article]


Setting up of Software Infrastructure on the Cloud

Packt
23 Sep 2014
42 min read
In this article by Roberto Freato, author of Microsoft Azure Development Cookbook, we mix some of the recipes of of this book, to build a complete overview of what we need to set up a software infrastructure on the cloud. (For more resources related to this topic, see here.) Microsoft Azure is Microsoft’s Platform for Cloud Computing. It provides developers with elastic building blocks to build scalable applications. Those building blocks are services for web hosting, storage, computation, connectivity, and more, which are usable as stand-alone services or mixed together to build advanced scenarios. Building an application with Microsoft Azure could really mean choosing the appropriate services and mix them together to run our application. We start by creating a SQL Database. Creating a SQL Database server and database SQL Database is a multitenanted database system in which many distinct databases are hosted on many physical servers managed by Microsoft. SQL Database administrators have no control over the physical provisioning of a database to a particular physical server. Indeed, to maintain high availability, a primary and two secondary copies of each SQL Database are stored on separate physical servers, and users can't have any control over them. Consequently, SQL Database does not provide a way for the administrator to specify the physical layout of a database and its logs when creating a SQL Database. The administrator merely has to provide a name, maximum size, and service tier for the database. A SQL Database server is the administrative and security boundary for a collection of SQL Databases hosted in a single Azure region. All connections to a database hosted by the server go through the service endpoint provided by the SQL Database server. At the time of writing this book, an Azure subscription can create up to six SQL Database servers, each of which can host up to 150 databases (including the master database). These are soft limits that can be increased by arrangement with Microsoft Support. From a billing perspective, only the database unit is counted towards, as the server unit is just a container. However, to avoid a waste of unused resources, an empty server is automatically deleted after 90 days of non-hosting user databases. The SQL Database server is provisioned on the Azure Portal. The Region as well as the administrator login and password must be specified during the provisioning process. After the SQL Database server has been provisioned, the firewall rules used to restrict access to the databases associated with the SQL Database server can be modified on the Azure Portal, using Transact SQL or the SQL Database Service Management REST API. The result of the provisioning process is a SQL Database server identified by a fully -qualified DNS name such as SERVER_NAME.database.windows.net, where SERVER_NAME is an automatically generated (random and unique) string that differentiates this SQL Database server from any other. The provisioning process also creates the master database for the SQL Database server and adds a user and associated login for the administrator specified during the provisioning process. This user has the rights to create other databases associated with this SQL Database server as well as any logins needed to access them. Remember to distinguish between the SQL Database service and the famous SQL Server engine available on the Azure platform, but as a plain installation over VMs. 
If you instead run SQL Server on an Azure virtual machine, you retain complete control of the instance that runs SQL Server, the installation details, and the effort needed to maintain it over time. Also, remember that the SQL Server virtual machines have a different pricing from the standard VMs due to their license costs. An administrator can create a SQL Database either on the Azure Portal or using the CREATE DATABASE Transact SQL statement. At the time of writing this book, SQL Database runs in the following two different modes:

Version 1.0: This refers to Web or Business Editions
Version 2.0: This refers to Basic, Standard, or Premium service tiers with performance levels

The first version will be deprecated in a few months. Web Edition was designed for small databases under 5 GB and Business Edition for databases of 10 GB and larger (up to 150 GB). There is no difference in these editions other than the maximum size and billing increment. The second version introduced service tiers (the equivalent of Editions) with an additional parameter (performance level) that sets the amount of dedicated resources assigned to a given database. The new service tiers (Basic, Standard, and Premium) introduced a lot of advanced features such as active/passive Geo-replication, point-in-time restore, cross-region copy, and restore. Different performance levels have different limits such as the Database Throughput Unit (DTU) and the maximum DB size. An updated list of service tiers and performance levels can be found at http://msdn.microsoft.com/en-us/library/dn741336.aspx. Once a SQL Database has been created, the ALTER DATABASE Transact SQL statement can be used to alter either the edition or the maximum size of the database. The maximum size is important, as the database is made read only once it reaches that size (raising error number 40544 with the message The database has reached its size quota). In this recipe, we'll learn how to create a SQL Database server and a database using the Azure Portal and T-SQL.

Getting ready

To perform the majority of operations of the recipe, just a plain internet browser is needed. However, to connect directly to the server, we will use SQL Server Management Studio (also available in the Express version).

How to do it...

First, we are going to create a SQL Database server using the Azure Portal. We will do this using the following steps:

1. On the Azure Portal, go to the SQL DATABASES section and then select the SERVERS tab.
2. In the bottom menu, select Add.
3. In the CREATE SERVER window, provide an administrator login and password.
4. Select a Subscription and Region that will host the server.
5. To enable access to the server from other services in Windows Azure, you can check the Allow Windows Azure Services to access the server checkbox; this is a special firewall rule that allows the 0.0.0.0 to 0.0.0.0 IP range. Confirm and wait a few seconds to complete the operation.
6. After that, using the Azure Portal, go to the SQL DATABASES section and then the SERVERS tab. Select the previously created server by clicking on its name.
7. In the server page, go to the DATABASES tab.
8. In the bottom menu, click on Add; then, after clicking on NEW SQL DATABASE, the CUSTOM CREATE window will open.
9. Specify a name and select the Web Edition.
10. Set the maximum database size to 5 GB and leave the COLLATION dropdown at its default.

SQL Database fees are charged differently if you are using the Web/Business Edition rather than the Basic/Standard/Premium service tiers.
The most updated pricing scheme for SQL Database can be found at http://azure.microsoft.com/en-us/pricing/details/sql-database/.

11. Verify that the server on which you are creating the database is specified correctly in the SERVER dropdown, and confirm.
12. Alternatively, using Transact SQL, launch Microsoft SQL Server Management Studio and open the Connect to Server window.
13. In the Server name field, specify the fully qualified name of the newly created SQL Database server in the following form: serverName.database.windows.net.
14. Choose the SQL Server Authentication method.
15. Specify the administrative username and password specified earlier.
16. Click on the Options button and select the Encrypt connection checkbox. This setting is particularly critical while accessing a remote SQL Database. Without encryption, a malicious user could extract, from the network traffic, all the information needed to log in to the database himself. By specifying the Encrypt connection flag, we are telling the client to connect only if a valid certificate is found on the server side.
17. Optionally check the Remember password checkbox and connect to the server. To connect remotely to the server, a firewall rule should be created.
18. In the Object Explorer window, locate the server you connected to, navigate to the Databases | System Databases folder, and then right-click on the master database and select New Query.
19. Copy and execute this query and wait for its completion:

CREATE DATABASE DATABASE_NAME ( MAXSIZE = 1 GB )

How it works...

The first part is pretty straightforward. In steps 1 and 2, we go to the SQL Database section of the Azure portal, locating the tab to manage the servers. In step 3, we fill the online popup with the administrative login details, and in step 4, we select a Region to place the SQL Database server. As a server (with its database) is located in a Region, it is not possible to automatically migrate it to another Region. After the creation of the container resource (the server), we create the SQL Database by adding a new database to the newly created server, as stated from steps 6 to 9. In step 10, we can optionally change the default collation of the database and its maximum size. In the last part, we use SQL Server Management Studio (SSMS) (step 12) to connect to the remote SQL Database instance. We notice that even without a database, there is a default database (the master one) we can connect to. After we set up the parameters in steps 13, 14, and 15, we enable the encryption requirement for the connection. Remember to always set the encryption before connecting to or listing the databases of a remote endpoint, as every operation performed without encryption sends plain credentials over the network. In step 17, we connect to the server if it grants access to our IP. Finally, in step 18, we open a contextual query window, and in step 19, we execute the creation query, specifying a maximum size for the database. Note that the Database Edition should be specified in the CREATE DATABASE query as well. By default, the Web Edition is used. To override this, the following query can be used:

CREATE DATABASE MyDB ( Edition='Basic' )

There's more…

We can also use the web-based Management Portal to perform various operations against the SQL Database, such as invoking Transact SQL commands, altering tables, viewing occupancy, and monitoring the performance. We will launch the Management Portal using the following steps:

1. Obtain the name of the SQL Database server that contains the SQL Database.
2. Go to https://serverName.database.windows.net.
3. In the Database field, enter the database name (leave it empty to connect to the master database).
4. Fill the Username and Password fields with the login information and confirm.

Increasing the size of a database

We can use the ALTER DATABASE command to increase the size (or the Edition, with the Edition parameter) of a SQL Database by connecting to the master database and invoking the following Transact SQL command:

ALTER DATABASE DATABASE_NAME MODIFY ( MAXSIZE = 5 GB )

We must use one of the allowable database sizes.

Connecting to a SQL Database with Entity Framework

The Azure SQL Database is a SQL Server-like, fully managed relational database engine. In many other recipes, we show how to connect transparently to the SQL Database just as we would to SQL Server, as the SQL Database uses the same TDS protocol as its on-premise brethren. However, using raw ADO.NET could lead to some of the following issues:

Hardcoded SQL: In spite of the fact that a developer should always write good code and make no errors, there is a real possibility of making mistakes while writing stringified SQL, which is not verified at design time and might lead to runtime issues. These kinds of errors only surface at runtime, as everything that stays inside the quotation marks compiles. The solution is to reduce every line of code to a command that is compile-time safe.
Type safety: As ADO.NET components were designed to provide a common layer of abstraction to developers who connect against several different data sources, the interfaces provided are generic for the retrieval of values from the fields of a data row. A developer could make a mistake by casting a field to the wrong data type, and they will realize it only at runtime. The solution is to resolve the mapping of table fields to the correct data types at compile time.
Long repetitive actions: We can always write our own wrapper to reduce the code replication in the application, but using a high-level library, such as an ORM, can take off most of the repetitive work to open a connection, read data, and so on.

Entity Framework hides the complexity of the data access layer and provides developers with an intermediate abstraction layer to let them operate on a collection of objects instead of rows of tables. The power of the ORM itself is enhanced by the usage of LINQ, a library of extension methods that, in synergy with the language capabilities (anonymous types, expression trees, lambda expressions, and so on), makes DB access easier and less error prone than in the past. This recipe is an introduction to Entity Framework, the ORM of Microsoft, in conjunction with the Azure SQL Database.

Getting ready

The database used in this recipe is the Northwind sample database of Microsoft. It can be downloaded from CodePlex at http://northwinddatabase.codeplex.com/.

How to do it…

We are going to connect to the SQL Database using Entity Framework and perform various operations on data. We will do this using the following steps: Add a new class named EFConnectionExample to the project. Add a new ADO.NET Entity Data Model named Northwind.edmx to the project; the Entity Data Model Wizard window will open. Choose Generate from database in the Choose Model Contents step. In the Choose Your Data Connection step, select the Northwind connection from the dropdown or create a new connection if it is not shown. Save the connection settings in the App.config file for later use and name the setting NorthwindEntities.
If VS asks for the version of EF to use, select the most recent one. In the last step, choose the object to include in the model. Select the Tables, Views, Stored Procedures, and Functions checkboxes. Add the following method, retrieving every CompanyName, to the class: private IEnumerable<string> NamesOfCustomerCompanies() { using (var ctx = new NorthwindEntities()) { return ctx.Customers .Select(p => p.CompanyName).ToArray(); } } Add the following method, updating every customer located in Italy, to the class: private void UpdateItalians() { using (var ctx = new NorthwindEntities()) { ctx.Customers.Where(p => p.Country == "Italy") .ToList().ForEach(p => p.City = "Milan"); ctx.SaveChanges(); } } Add the following method, inserting a new order for the first Italian company alphabetically, to the class: private int FirstItalianPlaceOrder() { using (var ctx = new NorthwindEntities()) { var order = new Orders() { EmployeeID = 1, OrderDate = DateTime.UtcNow, ShipAddress = "My Address", ShipCity = "Milan", ShipCountry = "Italy", ShipName = "Good Ship", ShipPostalCode = "20100" }; ctx.Customers.Where(p => p.Country == "Italy") .OrderBy(p=>p.CompanyName) .First().Orders.Add(order); ctx.SaveChanges(); return order.OrderID; } } Add the following method, removing the previously inserted order, to the class: private void RemoveTheFunnyOrder(int orderId) { using (var ctx = new NorthwindEntities()) { var order = ctx.Orders .FirstOrDefault(p => p.OrderID == orderId); if (order != null) ctx.Orders.Remove(order); ctx.SaveChanges(); } } Add the following method, using the methods added earlier, to the class: public static void UseEFConnectionExample() { var example = new EFConnectionExample(); var customers=example.NamesOfCustomerCompanies(); foreach (var customer in customers) { Console.WriteLine(customer); } example.UpdateItalians(); var order=example.FirstItalianPlaceOrder(); example.RemoveTheFunnyOrder(order); } How it works… This recipe uses EF to connect and operate on a SQL Database. In step 1, we create a class that contains the recipe, and in step 2, we open the wizard for the creation of Entity Data Model (EDMX). We create the model, starting from an existing database in step 3 (it is also possible to write our own model and then persist it in an empty database), and then, we select the connection in step 4. In fact, there is no reference in the entire code to the Windows Azure SQL Database. The only reference should be in the App.config settings created in step 5; this can be changed to point to a SQL Server instance, leaving the code untouched. The last step of the EDMX creation consists of concrete mapping between the relational table and the object model, as shown in step 6. This method generates the code classes that map the table schema, using strong types and collections referred to as Navigation properties. It is also possible to start from the code, writing the classes that could represent the database schema. This method is known as Code-First. In step 7, we ask for every CompanyName of the Customers table. Every table in EF is represented by DbSet<Type>, where Type is the class of the entity. In steps 7 and 8, Customers is DbSet<Customers>, and we use a lambda expression to project (select) a property field and another one to create a filter (where) based on a property value. The SaveChanges method in step 8 persists to the database the changes detected in the disconnected object data model. This magic is one of the purposes of an ORM tool. 
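As an aside on the Code-First approach mentioned above, the following is a minimal, hypothetical sketch; the class names and the SampleDb connection string are assumptions and not part of the recipe, but the LINQ queries shown in this recipe would be written in exactly the same way against such a context:

using System.Data.Entity;

// Hypothetical entity; "CustomerID" follows the EF convention for key discovery.
public class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string Country { get; set; }
}

public class SampleContext : DbContext
{
    // "SampleDb" is an assumed plain connection string name declared in App.config.
    public SampleContext() : base("name=SampleDb") { }

    public DbSet<Customer> Customers { get; set; }
}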
In step 9, we use the navigation property (relationship) between a Customers object and the Orders collection (table) to add a new order with sample data. We use the OrderBy extension method to order the results by the specified property, and finally, we save the newly created item. Here too, EF automatically keeps track of the newly added item. Additionally, after the SaveChanges method, EF populates the identity field of Order (OrderID) with the actual value created by the database engine. In step 10, we use the previously obtained OrderID to remove the corresponding order from the database. We use the FirstOrDefault() method to test the existence of the ID, and then, we remove the resulting object just as we would remove an object from a plain old collection. In step 11, we use the methods created to run the demo and show the results.

Deploying a Website

Creating a Website is an administrative task, which is performed in the Azure Portal in the same way we provision every other building block. The Website created is like a "deployment slot", or better, a "web space", since the abstraction given to the user is exactly that. Azure Websites does not require additional knowledge compared to an old-school hosting provider, where FTP was the standard for the deployment process. Actually, FTP is just one of the supported deployment methods in Websites; Web Deploy is probably the better choice in several scenarios. Web Deploy is a Microsoft technology used for copying files and provisioning additional content and configuration to integrate the deployment process. Web Deploy runs on HTTP and HTTPS with basic (username and password) authentication. This makes it a good choice in networks where FTP is forbidden or the firewall rules are strict. Some time ago, Microsoft introduced the concept of the Publish Profile, an XML file containing all the available deployment endpoints of a particular website that, if given to Visual Studio or Web Matrix, could make the deployment easier. Every Azure Website comes with a publish profile with unique credentials, so one can distribute it to developers without giving them grants on the Azure Subscription. Web Matrix is a client tool of Microsoft, and it is useful to edit live sites directly from an intuitive GUI. It uses Web Deploy to provide access to the remote filesystem and to perform remote changes. In Websites, we can host several websites on the same server farm, making administration easier and isolating the environment from the neighborhood. Moreover, virtual directories can be defined from the Azure Portal, enabling complex scenarios or making migrations easier. In this recipe, we will cope with the deployment process, using FTP and Web Deploy with some variants.

Getting ready

This recipe assumes we have an FTP client installed on the local machine (for example, FileZilla) and, of course, a valid Azure Subscription. We also need Visual Studio 2013 with the latest Azure SDK installed (at the time of writing, SDK Version 2.3).

How to do it…

We are going to create a new Website, create a new ASP.NET project, deploy it through FTP and Web Deploy, and also use virtual directories. We do this as follows: Create a new Website in the Azure Portal, specifying the following details: the URL prefix (that is, TestWebSite), which is set to [prefix].azurewebsites.net; the Web Hosting Plan (create a new one); and the Region/Location (select West Europe). Click on the newly created Website and go to the Dashboard tab.
Click on Download the publish profile and save it on the local computer. Open Visual Studio and create a new ASP.NET web application named TestWebSite, with an empty template and web forms references. Add a sample Default.aspx page to the project and paste into it the following HTML: <h1>Root Application</h1> Press F5 and test whether the web application is displayed correctly. Create a local publish target: right-click on the project and select Publish, select Custom, and specify Local Folder. In the Publish method, select File System and provide a local folder where Visual Studio will save files. Then click on Publish to complete. Publish via FTP: open FileZilla and then open the Publish profile (saved in step 3) with a text editor. Locate the FTP endpoint and specify publishUrl as the Host field, username as the Username field, and userPWD as the Password field. Delete the hostingstart.html file that is already present on the remote space. When we create a new Azure Website, there is a single HTML file in the root folder by default, which is served to the clients as the default page. By leaving it in the Website, the file could be served after users' deployments as well if no valid default documents are found. Drag-and-drop all the contents of the local folder with the binaries to the remote folder, then run the website. Publish via Web Deploy: right-click on the project and select Publish. Go back to the start of the Publish Web wizard and select Import, providing the previously downloaded Publish Profile file. When Visual Studio reads the Web Deploy settings, it populates the next window. Click on Confirm and Publish the web application. Create an additional virtual directory: go to the Configure tab of the Website on the Azure Portal. At the bottom, in the virtual applications and directories, add /app01 with the path site\app01 and mark it as Application. Open the Publish Profile file and duplicate the <publishProfile> tag with the method FTP, then edit the following: add the suffix App01 to profileName, and replace wwwroot with app01 in publishUrl. Create a new ASP.NET web application called TestWebSiteApp01 and create a new Default.aspx page in it with the following code: <h1>App01 Application</h1> Right-click on the TestWebSiteApp01 project and select Publish. Select Import and provide the edited Publish Profile file. In the first step of the Publish Web wizard (go back if necessary), select the App01 method and select Publish. Run the Website's virtual application by appending the /app01 suffix to the site URL.

How it works...

In step 1, we create the Website on the Azure Portal, specifying the minimal set of parameters. If an existing web hosting plan is selected, the Website will start in the specified tier. In the recipe, by specifying a new web hosting plan, the Website is created in the free tier with some limitations in configuration. The recipe uses the Azure Portal located at https://manage.windowsazure.com. However, the new Azure Portal will be at https://portal.azure.com. New features will probably be added only to the new Portal. In steps 2 and 3, we download the Publish Profile file, which is an XML containing the various endpoints to publish the Website. At the time of writing, Web Deploy and FTP are supported by default. In steps 4, 5, and 6, we create a new ASP.NET web application with a sample ASPX page and run it locally. In steps 7, 8, and 9, we publish the binaries of the Website, without source code files, into a local folder somewhere in the local machine.
This unit of deployment (the folder) can be sent across the wire via FTP, as we do in steps 10 to 13 using the credentials and the hostname available in the Publish Profile file. In steps 14 to 16, we use the Publish Profile file directly from Visual Studio, which recognizes the different methods of deployment and suggests Web Deploy as the default one. If we performed steps 10 to 13, then with steps 14 to 16 we overwrite the existing deployment. Actually, Web Deploy compares the target files with the ones to deploy, making the deployment incremental for those files that have been modified or added. This is extremely useful to avoid unnecessary transfers and to save bandwidth. In steps 17 and 18, we configure a new Virtual Application, specifying its name and location. We can use an FTP client to browse the root folder of a website endpoint, since there are several folders such as wwwroot, locks, diagnostics, and deployments. In step 19, we manually edit the Publish Profile file to support a second FTP endpoint, pointing to the new folder of the Virtual Application. Visual Studio will correctly understand this while parsing the file again in step 22, showing the new deployment option. Finally, we verify whether there are two applications: one on the root folder / and one on the /app01 alias.

There's more…

Suppose we need to edit the website on the fly, editing a CSS or JS file or editing the HTML somewhere. We can do this using Web Matrix, which is available from the Azure Portal itself through a ClickOnce installation: Go to the Dashboard tab of the Website and click on WebMatrix at the bottom. Follow the instructions to install the software (if not yet installed) and, when it opens, select Edit live site directly (the magic is done through the Publish Profile file and Web Deploy). In the left-side tree, edit the Default.aspx file, and then save and run the Website again.

Azure Websites gallery

Since Azure Websites is a PaaS service, with no lock-in or particular knowledge or framework required to run it, it can host several open source CMSes in different languages. Azure provides a set of built-in web applications to choose from while creating a new website. This is probably not the best choice for production environments; however, for testing or development purposes, it should be a faster option than starting from scratch. Wizards have been, for a while, the primary resources for developers to quickly start off projects and speed up the process of creating complex environments. However, the Websites gallery creates instances of well-known CMSes with predefined configurations, whereas production environments are manually crafted, customizing each aspect of the installation. To create a new Website using the gallery, proceed as follows: Create a new Website, specifying from gallery. Select the web application to deploy and follow the optional configuration steps. If we create some resources (like databases) while using the gallery, they will be linked to the site in the Linked Resources tab.
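As a final note, a Website can also be created from the command line with the Azure PowerShell module instead of the Portal or the gallery. The following is a minimal sketch; the site name is a placeholder, and cmdlet parameters may differ slightly across versions of the module:

# Sign in with an account associated with a valid Azure subscription.
Add-AzureAccount

# Create a new Website in the free tier; "TestWebSitePS" is an example name.
New-AzureWebsite -Name "TestWebSitePS" -Location "West Europe"

# List the Websites in the current subscription to verify the result.
Get-AzureWebsite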
Building a simple cache for applications Azure Cache is a managed service with (at the time of writing this book) the following three offerings: Basic: This service has a unit size of 128 MB, up to 1 GB with one named cache (the default one) Standard: This service has a unit size of 1 GB, up to 10 GB with 10 named caches and support for notifications Premium: This service has a unit size of 5 GB, up to 150 GB with ten named caches, support for notifications, and high availability Different offerings have different unit prices, and remember that when changing from one offering to another, all the cache data is lost. In all offerings, users can define the items' expiration. The Cache service listens to a specific TCP port. Accessing it from a .NET application is quite simple, with the Microsoft ApplicationServer Caching library available on NuGet. In the Microsoft.ApplicationServer.Caching namespace, the following are all the classes that are needed to operate: DataCacheFactory: This class is responsible for instantiating the Cache proxies to interpret the configuration settings. DataCache: This class is responsible for the read/write operation against the cache endpoint. DataCacheFactoryConfiguration: This is the model class of the configuration settings of a cache factory. Its usage is optional as cache can be configured in the App/Web.config file in a specific configuration section. Azure Cache is a key-value cache. We can insert and even get complex objects with arbitrary tree depth using string keys to locate them. The importance of the key is critical, as in a single named cache, only one object can exist for a given key. The architects and developers should have the proper strategy in place to deal with unique (and hierarchical) names. Getting ready This recipe assumes that we have a valid Azure Cache endpoint of the standard type. We need the standard type because we use multiple named caches, and in later recipes, we use notifications. We can create a Standard Cache endpoint of 1 GB via PowerShell. Perform the following steps to create the Standard Cache endpoint : Open the Azure PowerShell and type Add-AzureAccount. A popup window might appear. Type your credentials connected to a valid Azure subscription and continue. Optionally, select the proper Subscription, if not the default one. Type this command to create a new Cache endpoint, replacing myCache with the proper unique name: New-AzureManagedCache -Name myCache -Location "West Europe" -Sku Standard -Memory 1GB After waiting for some minutes until the endpoint is ready, go to the Azure Portal and look for the Manage Keys section to get one of the two Access Keys of the Cache endpoint. In the Configure section of the Cache endpoint, a cache named default is created by default. In addition, create two named caches with the following parameters: Expiry Policy: Absolute Time: 10 Notifications: Enabled Expiry Policy could be Absolute (the default expiration time or the one set by the user is absolute, regardless of how many times the item has been accessed), Sliding (each time the item has been accessed, the expiration timer resets), or Never (items do not expire). This Azure Cache endpoint is now available in the Management Portal, and it will be used in the entire article. How to do it… We are going to create a DataCache instance through a code-based configuration. We will perform simple operations with Add, Get, Put, and Append/Prepend, using a secondary-named cache to transfer all the contents of the primary one. 
We will do this by performing the following steps: Add a new class named BuildingSimpleCacheExample to the project. Install the Microsoft.WindowsAzure.Caching NuGet package. Add the following using statement to the top of the class file: using Microsoft.ApplicationServer.Caching; Add the following private members to the class: private DataCacheFactory factory = null; private DataCache cache = null; Add the following constructor to the class: public BuildingSimpleCacheExample(string ep, string token,string cacheName) { DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration(); config.AutoDiscoverProperty = new DataCacheAutoDiscoverProperty(true, ep); config.SecurityProperties = new DataCacheSecurity(token, true); factory = new DataCacheFactory(config); cache = factory.GetCache(cacheName); } Add the following method, creating a palindrome string into the cache: public void CreatePalindromeInCache() { var objKey = "StringArray"; cache.Put(objKey, ""); char letter = 'A'; for (int i = 0; i < 10; i++) { cache.Append(objKey, char.ConvertFromUtf32((letter+i))); cache.Prepend(objKey, char.ConvertFromUtf32((letter + i))); } Console.WriteLine(cache.Get(objKey)); } Add the following method, adding an item into the cache to analyze its subsequent retrievals: public void AddAndAnalyze() { var randomKey = DateTime.Now.Ticks.ToString(); var value="Cached string"; cache.Add(randomKey, value); DataCacheItem cacheItem = cache.GetCacheItem(randomKey); Console.WriteLine(string.Format( "Item stored in {0} region with {1} expiration", cacheItem.RegionName,cacheItem.Timeout)); cache.Put(randomKey, value, TimeSpan.FromSeconds(60)); cacheItem = cache.GetCacheItem(randomKey); Console.WriteLine(string.Format( "Item stored in {0} region with {1} expiration", cacheItem.RegionName, cacheItem.Timeout)); var version = cacheItem.Version; var obj = cache.GetIfNewer(randomKey, ref version); if (obj == null) { //No updates } } Add the following method, transferring the contents of the cache named initially into a second one: public void BackupToDestination(string destCacheName) { var destCache = factory.GetCache(destCacheName); var dump = cache.GetSystemRegions() .SelectMany(p => cache.GetObjectsInRegion(p)) .ToDictionary(p=>p.Key,p=>p.Value); foreach (var item in dump) { destCache.Put(item.Key, item.Value); } } Add the following method to clear the cache named first: public void ClearCache() { cache.Clear(); } Add the following method, using the methods added earlier, to the class: public static void RunExample() { var cacheName = "[named cache 1]"; var backupCache = "[named cache 2]"; string endpoint = "[cache endpoint]"; string token = "[cache token/key]"; BuildingSimpleCacheExample example = new BuildingSimpleCacheExample(endpoint, token, cacheName); example.CreatePalindromeInCache(); example.AddAndAnalyze(); example.BackupToDestination(backupCache); example.ClearCache(); } How it works... From steps 1 to 3, we set up the class. In step 4, we add private members to store the DataCacheFactory object used to create the DataCache object to access the Cache service. In the constructor that we add in step 5, we initialize the DataCacheFactory object using a configuration model class (DataCacheFactoryConfiguration). This strategy is for code-based initialization whenever settings cannot stay in the App.config/Web.config file. In step 6, we use the Put() method to write an empty string into the StringArray bucket. 
We then use the Append() and Prepend() methods, designed to concatenate strings to existing strings, to build a palindrome string in the memory cache. This sample does not make any sense in real-world scenarios, and we must pay attention to some of the following issues: Writing an empty string into the cache is somehow useless. Each Append() or Prepend() operation travels on TCP to the cache and goes back. Though it is very simple, it requires resources, and we should always try to consolidate calls. In step 7, we use the Add() method to add a string to the cache. The difference between the Add() and Put() methods is that the first method throws an exception if the item already exists, while the second one always overwrites the existing value (or writes it for the first time). GetCacheItem() returns a DataCacheItem object, which wraps the value together with other metadata properties, such as the following: CacheName: This is the named cache where the object is stored. Key: This is the key of the associated bucket. RegionName (user defined or system defined): This is the region of the cache where the object is stored. Size: This is the size of the object stored. Tags: These are the optional tags of the object, if it is located in a user-defined region. Timeout: This is the current timeout before the object would expire. Version: This is the version of the object. This is a DataCacheItemVersion object whose properties are not accessible due to their modifier. However, it is not important to access this property, as the Version object is used as a token against the Cache service to implement the optimistic concurrency. As for the timestamp value, its semantic can stay hidden from developers. The first Add() method does not specify a timeout for the object, leaving the default global expiration timeout, while the next Put() method does, as we can check in the next Get() method. We finally ask the cache about the object with the GetIfNewer() method, passing the latest version token we have. This conditional Get method returns null if the object we own is already the latest one. In step 8, we list all the keys of the first named cache, using the GetSystemRegions() method (to first list the system-defined regions), and for each region, we ask for their objects, copying them into the second named cache. In step 9, we clear all the contents of the first cache. In step 10, we call the methods added earlier, specifying the Cache endpoint to connect to and the token/password, along with the two named caches in use. Replace [named cache 1], [named cache 2], [cache endpoint], and [cache token/key] with actual values. There's more… Code-based configuration is useful when the settings stay in a different place as compared to the default config files for .NET. 
It is not a best practice to hardcode them, so this is the standard way to declare them in the App.config file: <configSections> <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere" /> </configSections> The XML mentioned earlier declares a custom section, which should be as follows: <dataCacheClients> <dataCacheClient name="[name of cache]"> <autoDiscover isEnabled="true" identifier="[domain of cache]" /> <securityProperties mode="Message" sslEnabled="true"> <messageSecurity authorizationInfo="[token of endpoint]" /> </securityProperties> </dataCacheClient> </dataCacheClients> In the upcoming recipes, we will use this convention to set up the DataCache objects. ASP.NET Support With almost no effort, the Azure Cache can be used as Output Cache in ASP.NET to save the session state. To enable this, in addition to the configuration mentioned earlier, we need to include those declarations in the <system.web> section as follows: <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider"> <providers> <add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheSessionState"/> </providers> </sessionState> <caching> <outputCache defaultProvider="AFCacheOutputCacheProvider"> <providers> <add name="AFCacheOutputCacheProvider" type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache" cacheName="[named cache]" dataCacheClientName="[name of cache]" applicationName="AFCacheOutputCache" /> </providers> </outputCache> </caching> The difference between [name of cache] and [named cache] is as follows: The [name of cache] part is a friendly name of the cache client declared above an alias. The [named cache] part is the named cache created into the Azure Cache service. Connecting to the Azure Storage service In an Azure Cloud Service, the storage account name and access key are stored in the service configuration file. By convention, the account name and access key for data access are provided in a setting named DataConnectionString. The account name and access key needed for Azure Diagnostics must be provided in a setting named Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString. The DataConnectionString setting must be declared in the ConfigurationSettings section of the service definition file. However, unlike other settings, the connection string setting for Azure Diagnostics is implicitly defined when the Diagnostics module is specified in the Imports section of the service definition file. Consequently, it must not be specified in the ConfigurationSettings section. A best practice is to use different storage accounts for application data and diagnostic data. This reduces the possibility of application data access being throttled by competition for concurrent writes from the diagnostics monitor. What is Throttling? In shared services, where the same resources are shared between tenants, limiting the concurrent access to them is critical to provide service availability. If a client misuses the service or, better, generates a huge amount of traffic, other tenants pointing to the same shared resource could experience unavailability. 
Throttling (also known as Traffic Control plus Request Cutting) is one of the most adopted solutions that is solving this issue. It also provides a security boundary between application data and diagnostics data, as diagnostics data might be accessed by individuals who should have no access to application data. In the Azure Storage library, access to the storage service is through one of the Client classes. There is one Client class for each Blob service, Queue service, and Table service; they are CloudBlobClient, CloudQueueClient, and CloudTableClient, respectively. Instances of these classes store the pertinent endpoint as well as the account name and access key. The CloudBlobClient class provides methods to access containers, list their contents, and get references to containers and blobs. The CloudQueueClient class provides methods to list queues and get a reference to the CloudQueue instance that is used as an entry point to the Queue service functionality. The CloudTableClient class provides methods to manage tables and get the TableServiceContext instance that is used to access the WCF Data Services functionality while accessing the Table service. Note that the CloudBlobClient, CloudQueueClient, and CloudTableClient instances are not thread safe, so distinct instances should be used when accessing these services concurrently. The client classes must be initialized with the account name, access key, as well as the appropriate storage service endpoint. The Microsoft.WindowsAzure namespace has several helper classes. The StorageCredential class initializes an instance from an account name and access key or from a shared access signature. In this recipe, we'll learn how to use the CloudBlobClient, CloudQueueClient, and CloudTableClient instances to connect to the storage service. Getting ready This recipe assumes that the application's configuration file contains the following: <appSettings> <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/> <add key="AccountName" value="{ACCOUNT_NAME}"/> <add key="AccountKey" value="{ACCOUNT_KEY}"/> </appSettings> We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the storage account name and access key, respectively. We are not working in a Cloud Service but in a simple console application. Storage services, like many other building blocks of Azure, can also be used separately from on-premise environments. How to do it... We are going to connect to the Table service, the Blob service, and the Queue service, and perform a simple operation on each. We will do this using the following steps: Add a new class named ConnectingToStorageExample to the project. Add the following using statements to the top of the class file: using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Blob; using Microsoft.WindowsAzure.Storage.Queue; using Microsoft.WindowsAzure.Storage.Table; using Microsoft.WindowsAzure.Storage.Auth; using System.Configuration; The System.Configuration assembly should be added via the Add Reference action onto the project, as it is not included in most of the project templates of Visual Studio. 
Add the following method, connecting the blob service, to the class: private static void UseCloudStorageAccountExtensions() { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse( ConfigurationManager.AppSettings[ "DataConnectionString"]); CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient(); CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference( "{NAME}"); cloudBlobContainer.CreateIfNotExists(); } Add the following method, connecting the Table service, to the class: private static void UseCredentials() { string accountName = ConfigurationManager.AppSettings[ "AccountName"]; string accountKey = ConfigurationManager.AppSettings[ "AccountKey"]; StorageCredentials storageCredentials = new StorageCredentials( accountName, accountKey); CloudStorageAccount cloudStorageAccount = new CloudStorageAccount(storageCredentials, true); CloudTableClient tableClient = new CloudTableClient( cloudStorageAccount.TableEndpoint, storageCredentials); CloudTable table = tableClient.GetTableReference("{NAME}"); table.CreateIfNotExists(); } Add the following method, connecting the Queue service, to the class: private static void UseCredentialsWithUri() { string accountName = ConfigurationManager.AppSettings[ "AccountName"]; string accountKey = ConfigurationManager.AppSettings[ "AccountKey"]; StorageCredentials storageCredentials = new StorageCredentials( accountName, accountKey); StorageUri baseUri = new StorageUri(new Uri(string.Format( "https://{0}.queue.core.windows.net/", accountName))); CloudQueueClient cloudQueueClient = new CloudQueueClient(baseUri, storageCredentials); CloudQueue cloudQueue = cloudQueueClient.GetQueueReference("{NAME}"); cloudQueue.CreateIfNotExists(); } Add the following method, using the other methods, to the class: public static void UseConnectionToStorageExample() { UseCloudStorageAccountExtensions(); UseCredentials(); UseCredentialsWithUri(); } How it works... In steps 1 and 2, we set up the class. In step 3, we implement the standard way to access the storage service using the Storage Client library. We use the static CloudStorageAccount.Parse() method to create a CloudStorageAccount instance from the value of the connection string stored in the configuration file. We then use this instance with the CreateCloudBlobClient() extension method of the CloudStorageAccount class to get the CloudBlobClient instance that we use to connect to the Blob service. We can also use this technique with the Table service and the Queue service, using the relevant extension methods, CreateCloudTableClient() and CreateCloudQueueClient(), respectively, for them. We complete this example using the CloudBlobClient instance to get a CloudBlobContainer reference to a container and then create it if it does not exist We need to replace {NAME} with the name for a container. In step 4, we create a StorageCredentials instance directly from the account name and access key. We then use this to construct a CloudStorageAccount instance, specifying that any connection should use HTTPS. Using this technique, we need to provide the Table service endpoint explicitly when creating the CloudTableClient instance. We then use this to create the table. We need to replace {NAME} with the name of a table. We can use the same technique with the Blob service and Queue service using the relevant CloudBlobClient or CloudQueueClient constructor. 
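As an aside, the StorageCredentials class used in step 4 can also be initialized from a shared access signature instead of the account name and access key. The following is a minimal sketch; the SAS token is a placeholder and is assumed to have been generated elsewhere with the permissions the client needs:

// "sasToken" is an assumed, already-generated shared access signature query string.
string sasToken = "?sv=2014-02-14&sr=c&sig=PLACEHOLDER";
StorageCredentials sasCredentials = new StorageCredentials(sasToken);
CloudBlobClient sasBlobClient = new CloudBlobClient(
    new Uri("https://{ACCOUNT_NAME}.blob.core.windows.net/"), sasCredentials);
// Operations are then limited to what the signature grants.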
In step 5, we use a similar technique, except that we avoid the intermediate step of using a CloudStorageAccount instance and explicitly provide the endpoint for the Queue service. We use the CloudQueueClient instance created in this step to create the queue. We need to replace {NAME} with the name of a queue. Note that we hardcoded the endpoint for the Queue service. Though this last method is officially supported, it is not a best practice to bind our code to hardcoded strings with endpoint URIs. So, it is preferable to use one of the previous methods that hides the complexity of the URI generation at the library level. In step 6, we add a method that invokes the methods added in the earlier steps.

There's more…

With the general availability of the .NET Framework Version 4.5, many CLR libraries gained support for asynchronous methods following the Async/Await pattern. The latest versions of the Azure Storage library also have these overloads, which are useful while developing mobile applications and fast web APIs. They are generally useful whenever we need to integrate the task-based execution model into our applications. Almost every long-running method of the library has a corresponding *Async() method, called as follows: await cloudQueue.CreateIfNotExistsAsync(); In the rest of the book, we will continue to use the standard, synchronous pattern.

Adding messages to a Storage queue

The CloudQueue class in the Azure Storage library provides both synchronous and asynchronous methods to add a message to a queue. A message comprises up to 64 KB of data (48 KB if encoded in Base64). By default, the Storage library Base64 encodes message content to ensure that the request payload containing the message is valid XML. This encoding adds overhead that reduces the actual maximum size of a message. A queue message is not intended to transport a big payload, since the purpose of a queue is messaging, not storage. If required, a user can store the payload in a Blob and use a Queue message to point to that, letting the receiver fetch the message along with the Blob from its remote location. Each message added to a queue has a time-to-live property after which it is deleted automatically. The maximum and default time-to-live value is 7 days. In this recipe, we'll learn how to add messages to a queue.

Getting ready

This recipe assumes the following code is in the application configuration file: <appSettings> <add key="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/> </appSettings> We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with the appropriate values for the account name and access key.

How to do it...

We are going to create a queue and add some messages to it. We do this as follows: Add a new class named AddMessagesOnStorageExample to the project.
Install the WindowsAzure.Storage NuGet package and add the following assembly references to the project: System.Configuration Add the following using statements to the top of the class file: using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Queue; using System.Configuration; Add the following private member to the class: private CloudQueue cloudQueue; Add the following constructor to the class: public AddMessagesOnStorageExample(String queueName) { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse( ConfigurationManager.AppSettings[ "DataConnectionString"]); CloudQueueClient cloudQueueClient = cloudStorageAccount.CreateCloudQueueClient(); cloudQueue = cloudQueueClient.GetQueueReference(queueName); cloudQueue.CreateIfNotExists(); } Add the following method to the class, adding three messages: public void AddMessages() { String content1 = "Do something"; CloudQueueMessage message1 = new CloudQueueMessage(content1); cloudQueue.AddMessage(message1); String content2 = "Do something that expires in 1 day"; CloudQueueMessage message2 = new CloudQueueMessage(content2); cloudQueue.AddMessage(message2, TimeSpan.FromDays(1.0)); String content3 = "Do something that expires in 2 hours,"+ " starting in 1 hour from now"; CloudQueueMessage message3 = new CloudQueueMessage(content3); cloudQueue.AddMessage(message3, TimeSpan.FromHours(2),TimeSpan.FromHours(1)); } Add the following method, which uses the AddMessages() method, to the class: public static void UseAddMessagesExample() { String queueName = "{QUEUE_NAME}"; AddMessagesOnStorageExample example = new AddMessagesOnStorageExample(queueName); example.AddMessages(); }

How it works...

In steps 1 through 3, we set up the class. In step 4, we add a private member to store the CloudQueue object used to interact with the Queue service. We initialize this in the constructor we add in step 5, where we also create the queue. In step 6, we add a method that adds three messages to a queue. We create three CloudQueueMessage objects. We add the first message to the queue with the default time-to-live of seven days, the second is added specifying an expiration of 1 day, and the third will become visible 1 hour after it enters the queue, with an absolute expiration of 2 hours. Note that a client (library) exception is thrown if we specify a visibility delay higher than the absolute TTL of the message. This is enforced at the client side instead of making a (failing) server call. In step 7, we add a method that invokes the methods we added earlier. We need to replace {QUEUE_NAME} with an appropriate name for a queue.

There's more…

To clear the queue of the messages we added in this recipe, we can proceed by calling the Clear() method of the CloudQueue class as follows: public void ClearQueue() { cloudQueue.Clear(); }

Summary

In this article, we have learned some of the recipes in order to build a complete overview of the software infrastructure that we need to set up on the cloud.

Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] vCloud Networks [Article] Setting Up a Test Infrastructure [Article]

Windows Phone 8 Applications

Packt
23 Sep 2014
17 min read
In this article by Abhishek Sur, author of Visual Studio 2013 and .NET 4.5 Expert Cookbook, we will build your first Windows Phone 8 application following the MVVM pattern. We will work with Launchers and Choosers in a Windows Phone, relational databases and persistent storage, and notifications in a Windows Phone (For more resources related to this topic, see here.) Introduction Windows Phones are the newest smart device that has come on to the market and host the Windows operating system from Microsoft. The new operating system that was recently introduced to the market significantly differs from the previous Windows mobile operating system. Microsoft has shifted gears on producing a consumer-oriented phone rather than an enterprise mobile environment. The operating system is stylish and focused on the consumer. It was built keeping a few principles in mind: Simple and light, with focus on completing primary tasks quickly Distinct typography (Segoe WP) for all its UI Smart and predefined animation for the UI Focus on content, not chrome (the whole screen is available to the application for use) Honesty in design Unlike the previous Windows Phone operating system, Windows Phone 8 is built on the same core on which Windows PC is now running. The shared core indicates that the Windows core system includes the same Windows OS, including NT Kernel, NT filesystem, and networking stack. Above the core, there is a Mobile Core specific to mobile devices, which includes components such as Multimedia, Core CLR, and IE Trident, as shown in the following screenshot: In the preceding screenshot, the Windows Phone architecture has been depicted. The Windows Core System is shared between the desktop and mobile devices. The Mobile Core is specific to mobile devices that run Windows Phone Shell, all the apps, and platform services such as background downloader/uploader and scheduler. It is important to note that even though both Windows 8 and Windows Phone 8 share the same core and most of the APIs, the implementation of APIs is different from one another. The Windows 8 APIs are considered WinRT, while Windows Phone 8 APIs are considered Windows Phone Runtime (WinPRT). Building your first Windows Phone 8 application following the MVVM pattern Windows Phone applications are generally created using either HTML5 or Silverlight. Most of the people still use the Silverlight approach as it has a full flavor of backend languages such as C# and also the JavaScript library is still in its infancy. With Silverlight or XAML, the architecture that always comes into the developer's mind is MVVM. Like all XAML-based development, Windows 8 Silverlight apps also inherently support MVVM models and hence, people tend to adopt it more often when developing Windows Phone apps. In this recipe, we are going to take a quick look at how you can use the MVVM pattern to implement an application. Getting ready Before starting to develop an application, you first need to set up your machine with the appropriate SDK, which lets you develop a Windows Phone application and also gives you an emulator to debug the application without a device. The SDK for Windows Phone 8 apps can be downloaded from Windows Phone Dev Center at http://dev.windowsphone.com. 
The Windows Phone SDK includes the following:

Microsoft Visual Studio 2012 Express for Windows Phone
Microsoft Blend 2012 Express for Windows Phone
The Windows Phone Device Emulator
Project templates, reference assemblies, and headers/libraries
A Windows 8 PC to run Visual Studio 2012 for Windows Phone

After everything has been set up for application development, you can open Visual Studio and create a Windows Phone app. When you create the project, it will first ask the target platform; choose Windows Phone 8 as the default and select OK. You need to name and create the project. How to do it... Now that the template is created, let's follow these steps to demonstrate how we can start creating an application: By default, the project template that is loaded will display a split view with the Visual Studio Designer on the left-hand side and an XAML markup on the right-hand side. The MainPage.xaml file should already be loaded with a lot of initial adjustments to support Windows Phone form factors. Microsoft makes sure that they give the best layout to the developer to start with. So the important thing that you need to look at is defining the content inside the ContentPanel property, which represents the workspace area of the page. The Visual Studio template for Windows Phone 8 already gives you a lot of hints on how to start writing your first app. The comments indicate where to start and how the project template behaves on the code edits in XAML. Now let's define some XAML designs for the page. We will create a small page and use MVVM to connect to the data. For simplicity, we use dummy data to show on screen. Let's create a login screen for the application to start with. Add a new page, call it Login.xaml, and add the following code in ContentPanel defined inside the page: <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0" VerticalAlignment="Center"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock Text="UserId" Grid.Row="0" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Center"/> <TextBox Text="{Binding UserId, Mode=TwoWay}" Grid.Row="0" Grid.Column="1" InputScope="Text"/> <TextBlock Text="Password" Grid.Row="1" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Center"/> <PasswordBox x:Name="txtPassword" Grid.Row="1" Grid.Column="1" PasswordChanged="txtPassword_PasswordChanged"/> <Button Command="{Binding LoginCommand}" Content="Login" Grid.Row="2" Grid.Column="0" /> <Button Command="{Binding ClearCommand}" Content="Clear" Grid.Row="2" Grid.Column="1" /> </Grid> In the preceding UI design, we added a TextBox and a PasswordBox inside ContentPanel. The TextBox has an InputScope property, which you can define to specify the behavior of the input. We define it as Text, which specifies that the TextBox can have any textual data. The PasswordBox takes any input from the user, but shows asterisks (*) instead of the actual data. The actual data is stored in an encrypted format inside the control and can only be recovered using its Password property. We are going to follow the MVVM pattern to design the application. We create a folder named Model in the solution and put a LoginDataContext class in it. This class is used to generate and validate the login of the UI.
Inherit the class from INotifyPropertyChanged, which indicates that the properties can take part in binding with the corresponding DependencyProperty that exists in the control, thereby interacting to and fro with the UI. We create properties for UserId, Password, and Status, as shown in the following code:

private string userid;
public string UserId
{
    get { return userid; }
    set
    {
        userid = value;
        this.OnPropertyChanged("UserId");
    }
}

private string password;
public string Password
{
    get { return password; }
    set
    {
        password = value;
        this.OnPropertyChanged("Password");
    }
}

public bool Status { get; set; }

You can see in the preceding code that each property setter assigns the new value to its backing field and then raises the OnPropertyChanged event. This ensures that an update to a property is reflected in the UI control:

public ICommand LoginCommand
{
    get
    {
        return new RelayCommand((e) =>
        {
            this.Status = this.UserId == "Abhishek" && this.Password == "winphone";
            if (this.Status)
            {
                var rootframe = App.Current.RootVisual as PhoneApplicationFrame;
                rootframe.Navigate(new Uri(string.Format(
                    "/FirstPhoneApp;component/MainPage.xaml?name={0}", this.UserId),
                    UriKind.Relative));
            }
        });
    }
}

public ICommand ClearCommand
{
    get
    {
        return new RelayCommand((e) =>
        {
            this.UserId = this.Password = string.Empty;
        });
    }
}

We also define two more properties of type ICommand. The UI button control implements the command pattern and uses the ICommand interface to invoke a command. The RelayCommand used in the code is an implementation of the ICommand interface, which can be used to invoke some action. Now let's bind the Text property of the TextBox in XAML to the UserId property, and make it a TwoWay binding. The binder automatically subscribes to the PropertyChanged event. When the UserId property is set and the PropertyChanged event is raised, the UI automatically receives the notification and updates the text it displays. Similarly, we add two buttons, name them Login and Clear, and bind them to the LoginCommand and ClearCommand properties, as shown in the following code:

<Button Command="{Binding LoginCommand}" Content="Login" Grid.Row="2" Grid.Column="0" />
<Button Command="{Binding ClearCommand}" Content="Clear" Grid.Row="2" Grid.Column="1" />

In the preceding XAML, we defined the two buttons and specified a command for each of them. We create another page so that when the login is successful, we can navigate from the Login page to it. Let's make use of the existing MainPage.xaml file as follows:

<StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
    <TextBlock Text="MY APPLICATION" x:Name="txtApplicationDescription" Style="{StaticResource PhoneTextNormalStyle}" Margin="12,0"/>
    <TextBlock Text="Enter Details" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>
</StackPanel>

We add the preceding XAML to show the message that is passed from the Login screen. We create another class, name it MainDataContext, and define a property in it that will hold the data to be displayed on the screen. We go to Login.xaml.cs, created as the code-behind of Login.xaml, create an instance of LoginDataContext, and assign it to the DataContext of the page. We do this in the page's constructor, right after the call to InitializeComponent, as shown in the following code:

this.DataContext = new LoginDataContext();

Now, go to Properties in the Solution Explorer pane, open the WMAppManifest file, and specify Login.xaml as the Navigation page.
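Note that RelayCommand and the PropertyBase base class (which LoginDataContext derives from later, in the Tombstoning section) are helper classes belonging to the recipe's own project; they are not part of the SDK and their implementation is not listed in this article. The following is only a minimal sketch, under the assumption that RelayCommand simply wraps a delegate and PropertyBase raises PropertyChanged:

using System;
using System.ComponentModel;
using System.Windows.Input;

// Minimal ICommand wrapper; a fuller implementation may also track CanExecute.
public class RelayCommand : ICommand
{
    private readonly Action<object> execute;

    public RelayCommand(Action<object> execute)
    {
        this.execute = execute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return true;
    }

    public void Execute(object parameter)
    {
        this.execute(parameter);
    }
}

// Base class that raises PropertyChanged so TwoWay bindings stay in sync.
public class PropertyBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

With helpers like these in place, a view model only has to assign its backing field and call OnPropertyChanged, exactly as LoginDataContext does in the property setters shown earlier.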
Once this is done, if you run the application now in any of the emulators available with Visual Studio, you will see what is shown in the following screenshot: You can enter data in the UserId and Password fields and click on Login, but nothing happens. Put a breakpoint on LoginCommand and press Login again with the credentials, and you will see that the Password property is never set to anything and evaluates to null. Note that, PasswordBox in XAML does not support binding to its properties. To deal with this, we define a PasswordChanged event on PasswordBox and specify the following code: private void txtPassword_PasswordChanged(object sender, RoutedEventArgs e) { this.logindataContext.Password = txtPassword.Password; } The preceding code will ensure that the password goes properly to the ViewModel. Finally, clicking on Login, you will see Status is set to true. However, our idea is to move the page from the Login screen to MainPage.xaml. To do this, we change the LoginCommand property to navigate the page, as shown in the following code: if (this.Status) { var rootframe = App.Current.RootVisual as PhoneApplicationFrame; rootframe.Navigate(new Uri(string.Format("/FirstPhoneApp;component/MainPage.xaml?name={0}", this.UserId), UriKind.Relative)); } Each WPF app contains an ApplicationFrame class that is used to show the UI. The application frame can use the navigate method to navigate from one page to another. The navigate method uses NavigationService to redirect the page to the URL provided. Here in the code, after authentication, we pass UserId as querystring to MainPage. We design the MainPage.xaml file to include a pivot control. A pivot control is just like traditional tab controls, but looks awesome in a phone environment. Let's add the following code: <phone:Pivot> <phone:PivotItem Header="Main"> <StackPanel Orientation="Vertical"> <TextBlock Text="Choose your avatar" /> <Image x_Name="imgSelection" Source="{Binding AvatarImage}"/> <Button x_Name="btnChoosePhoto" ClickMode="Release"Content="Choose Photo" Command="{Binding ChoosePhoto}" /> </StackPanel> </phone:PivotItem> <phone:PivotItem Header="Task"> <StackPanel> <phone:LongListSelector ItemsSource="{Binding LongList}" /> </StackPanel> </phone:PivotItem> </phone:Pivot> The phone tag is referred to a namespace that has been added automatically in the header where the Pivot class exists. In the previously defined Pivot class, there are two PivotItem with headers Main and Task. When Main is selected, it allows you to choose a photo from MediaLibrary and the image is displayed on Image Control. The ChoosePhoto command defined inside MainDataContext sets the image to its source, as shown in the following code: public ICommand ChoosePhoto { get { return new RelayCommand((e) => { PhotoChooserTask pTask = new PhotoChooserTask(); pTask.Completed += pTask_Completed; pTask.Show(); }); } } void pTask_Completed(object sender, PhotoResult e) { if (e.TaskResult == TaskResult.OK) { var bitmap = new BitmapImage(); bitmap.SetSource(e.ChosenPhoto); this.AvatarImage = bitmap; } } In the preceding code, the RelayCommand that is invoked when the button is clicked uses PhotoChooserTask to select an image from MediaLibrary and that image is shown on the AvatarImage property bound to the image source. On the other hand, the other PivotItem shows LongList where the ItemsSource is bound to a long list of strings, as shown in the following code: public List<string> LongList { get { this.longList = this.longList ?? 
this.LoadList(); return this.longList; } } The long list can be anything, a long list that is needed to be shown in the ListBox class. How it works... Windows Phone, being an XAML-based technology, uses Silverlight to generate UI and controls supporting the Model-View-ViewModel (MVVM) pattern. Each of the controls present in the Windows Phone environment implements a number of DependencyProperties. The DependencyProperty is a special type of property that supports DataBinding. When bound to another CLR object, these properties try to find the INotifyPropertyChanged interface and subscribe to the PropertyChanged event. When the data is modified in the controls, the actual bound object gets modified automatically by the dependency property system, and vice versa. Similar to normal DependencyProperties, there is a Command property that allows you to call a method. Just like the normal property, Command implements the ICommand interface and has a return type Action that maps to Command. The RelayCommand here is an implementation of ICommand interfaces, which can be bound to the Command property of Button. There's more... Now let's talk about some other options, or possibly some pieces of general information that are relevant to this task. Using ApplicationBar on the app Just like any of the modern smartphones, Windows Phones also provides a standard way of communicating with any application. Each application can have a standard set of icons at the bottom of the application, which enable the user to perform some actions on the application. The ApplicationBar class is present at the bottom of any application across the operating system and hence, people tend to expect commands to be placed on ApplicationBar rather than on the application itself, as shown in the following screenshot. The ApplicationBar class accepts 72 pixels of height, which cannot be modified by code. When an application is open, the application bar is shown at the bottom of the screen. The preceding screenshot shows how the ApplicationBar class is laid out with two buttons, login and clear. Each ApplicationBar class can also associate a number of menu items for additional commands. The menu could be opened by clicking on the … button in the left-hand side of ApplicationBar. The page of Windows Phone allows you to define one application bar. There is a property called ApplicationBar on PhoneApplicationPage that lets you define the ApplicationBar class of that particular page, as shown in the following screenshot: <phone:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar> <shell:ApplicationBarIconButton Click="ApplicationBarIconButton_Click" Text="Login" IconUri="/Assets/next.png"/> <shell:ApplicationBarIconButton Click="ApplicationBarIconButtonSave_Click" Text="clear" IconUri="/Assets/delete.png"/> <shell:ApplicationBar.MenuItems> <shell:ApplicationBarMenuItem Click="about_Click" Text="about" /> </shell:ApplicationBar.MenuItems> </shell:ApplicationBar> </phone:PhoneApplicationPage.ApplicationBar> In the preceding code, we defined two ApplicationBarIconButton classes. Each of them defines the Command items placed on the ApplicationBar class. The ApplicationBar.MenuItems method allows us to add menu items to the application. There can be a maximum of four application bar buttons and four menus per page. The ApplicationBar button also follows a special type of icon. There are a number of these icons added with the SDK, which could be used for the application. 
They can be found at DriveName\Program Files\Microsoft SDKs\Windows Phone\v8.0\Icons. There are separate folders for both dark and light themes. It should be noted that ApplicationBar buttons do not allow command bindings.

Tombstoning

When dealing with Windows Phone applications, there are some special things to consider. When a user navigates out of the application, the application is transferred to a dormant state, where all the pages and the state of the pages are still in memory but their execution is totally stopped. When the user navigates back to the application, the state of the application is resumed and the application is activated again. Sometimes, it might also happen that the app gets tombstoned after the user navigates away from it. In this case, the app is not preserved in memory, but some information about the app is stored. Once the user comes back to the app, the application needs to be restored and resumed in such a way that the user gets the same state as he or she left it in. In the following figure, you can see the entire process:

There are four states defined. The first one is the Not Running state, where the process does not exist in memory. The Activated state is when the app is tapped by the user. When the user moves out of the app, it goes from Suspending to Suspended. It can be reactivated, or it will be terminated automatically after a certain time. Let's look at the Login screen, where the login page might get tombstoned while the user is entering the user ID and password. To store the user state data before tombstoning, we use PhoneApplicationPage. The idea is to serialize the whole DataModel once the user navigates away from the page and retrieve the page state again when the user navigates back. Let's annotate the UserId and Password properties of LoginDataContext with DataMember, and LoginDataContext itself with DataContract, as shown in the following code:

[DataContract]
public class LoginDataContext : PropertyBase
{
    private string userid;

    [DataMember]
    public string UserId
    {
        get { return userid; }
        set
        {
            userid = value;
            this.OnPropertyChanged("UserId");
        }
    }

    private string password;

    [DataMember]
    public string Password
    {
        get { return password; }
        set
        {
            password = value;
            this.OnPropertyChanged("Password");
        }
    }
}

The DataMember attribute indicates that the properties can be serialized. As the user types into these properties, they get filled with data, so when the user navigates away, the model will always hold the latest data. In LoginPage, we define a field called _isNewPageInstance, and in the constructor, we set it to true. This indicates that _isNewPageInstance is true only when the page has just been instantiated. Now, when the user navigates away from the page, OnNavigatedFrom gets called. If the user navigates away from the page, we save the ViewModel into State, as shown in the following code:

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    base.OnNavigatedFrom(e);
    if (e.NavigationMode != System.Windows.Navigation.NavigationMode.Back)
    {
        // Save the ViewModel variable in the page's State dictionary.
        State["ViewModel"] = logindataContext;
    }
}

Once DataModel is saved in the State object, it is persisted and can be retrieved later when the application is resumed, as follows:

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    if (_isNewPageInstance)
    {
        if (this.logindataContext == null)
        {
            if (State.Count > 0)
            {
                this.logindataContext = (LoginDataContext)State["ViewModel"];
            }
            else
            {
                this.logindataContext = new LoginDataContext();
            }
        }
        DataContext = this.logindataContext;
    }
    _isNewPageInstance = false;
}

When the application is resumed from tombstoning, it calls OnNavigatedTo and retrieves DataModel back from the state.

Summary

In this article, we learned about application development for the Windows Phone environment. It provided simple solutions to some of the common problems encountered when developing a Windows Phone application. Resources for Article: Further resources on this subject: Layout with Ext.NET [article] ASP.NET: Creating Rich Content [article] ASP.NET: Using jQuery UI Widgets [article]

Using Socket.IO and Express together

Packt
23 Sep 2014
16 min read
In this article by Joshua Johanan, the author of the book Building Scalable Apps with Redis and Node.js, we learn that the Express application is just the foundation. We are going to add features until it is a fully usable app. We can currently serve web pages and respond to HTTP, but now we want to add real-time communication. It's very fortunate that we just spent most of this article learning about Socket.IO; it does just that! Let's see how we are going to integrate Socket.IO with an Express application. (For more resources related to this topic, see here.) We are going to use Express and Socket.IO side by side. Socket.IO does not use HTTP like a web application. It is event based, not request based. This means that Socket.IO will not interfere with the Express routes that we have set up, and that's a good thing. The bad thing is that, in Socket.IO, we will not have access to all the middleware that we set up for Express. There are some frameworks that combine the two, but they still have to convert the request from Express into something that Socket.IO can use. I am not trying to knock down these frameworks. They simplify a complex problem and, most importantly, they do it well (Sails is a great example of this). Our app, though, is going to keep Socket.IO and Express separated as much as possible, with the least number of dependencies. We know that Socket.IO does not need Express, as none of our earlier examples used Express in any way. This has the added benefit that we can break off our Socket.IO module and run it as its own application at a future point in time. The other great benefit is that we learn how to do it ourselves. We need to go into the directory where our Express application is. Make sure that our package.json has all the additional packages for this article and run npm install. The first thing we need to do is add our configuration settings.

Adding Socket.IO to the config

We will use the same config file that we created for our Express app. Open up config.js and change the file to what I have done in the following code:

var config = {
  port: 3000,
  secret: 'secret',
  redisPort: 6379,
  redisHost: 'localhost',
  routes: {
    login: '/account/login',
    logout: '/account/logout'
  }
};
module.exports = config;

We are adding two new attributes, redisPort and redisHost. This is because of how the redis package configures its clients. We are also removing the redisUrl attribute. We can configure all our clients with just these two Redis config options. Next, create a directory under the root of our project named socket.io. Then, create a file called index.js. This will be where we initialize Socket.IO and wire up all our event listeners and emitters. We are just going to use one namespace for our application. If we were to add multiple namespaces, I would just add them as files underneath the socket.io directory. Open up app.js and change the following lines in it:

//variable declarations at the top
var io = require('./socket.io');

//after all the middleware and routes
var server = app.listen(config.port);
io.startIo(server);

We will define the startIo function shortly, but let's talk about our app.listen change. Previously, we had app.listen execute and did not capture its return value in a variable; now we do. Socket.IO listens using Node's http.createServer. It does this automatically if you pass a number into its listen function. When Express executes app.listen, it returns an instance of the HTTP server. We capture that, and now we can pass the http server to Socket.IO's listen function.
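To see that relationship in isolation, here is a tiny standalone sketch (not one of our project files, and the port number is only an example) of an Express app handing the HTTP server returned by app.listen over to Socket.IO:

// Express returns an http.Server instance from app.listen...
var express = require('express');
var app = express();
var server = app.listen(3000);

// ...and Socket.IO can attach to that same server and share the port.
var io = require('socket.io').listen(server);
io.on('connection', function (socket) {
  socket.emit('message', {message: 'connected'});
});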
Let's create that startIo function. Open up index.js present in the socket.io location and add the following lines of code to it: var io = require('socket.io');var config = require('../config');var socketConnection = function socketConnection(socket){socket.emit('message', {message: 'Hey!'});};exports.startIo = function startIo(server){io = io.listen(server);var packtchat = io.of('/packtchat');packtchat.on('connection', socketConnection);return io;}; We are exporting the startIo function that expects a server object that goes right into Socket.IO's listen function. This should start Socket.IO serving. Next, we get a reference to our namespace and listen on the connection event, sending a message event back to the client. We also are loading our configuration settings. Let's add some code to the layout and see whether our application has real-time communication. We will need the Socket.IO client library, so link to it from node_modules like you have been doing, and put it in our static directory under a newly created js directory. Open layout.ejs present in the packtchatviews location and add the following lines to it: <!-- put these right before the body end tag --><script type="text/javascript" src="/js/socket.io.js"></script><script>var socket = io.connect("http://localhost:3000/packtchat");socket.on('message', function(d){console.log(d);});</script> We just listen for a message event and log it to the console. Fire up the node and load your application, http://localhost:3000. Check to see whether you get a message in your console. You should see your message logged to the console, as seen in the following screenshot: Success! Our application now has real-time communication. We are not done though. We still have to wire up all the events for our app. Who are you? There is one glaring issue. How do we know who is making the requests? Express has middleware that parses the session to see if someone has logged in. Socket.IO does not even know about a session. Socket.IO lets anyone connect that knows the URL. We do not want anonymous connections that can listen to all our events and send events to the server. We only want authenticated users to be able to create a WebSocket. We need to get Socket.IO access to our sessions. Authorization in Socket.IO We haven't discussed it yet, but Socket.IO has middleware. Before the connection event gets fired, we can execute a function and either allow the connection or deny it. This is exactly what we need. Using the authorization handler Authorization can happen at two places, on the default namespace or on a named namespace connection. Both authorizations happen through the handshake. The function's signature is the same either way. It will pass in the socket server, which has some stuff we need such as the connection's headers, for example. For now, we will add a simple authorization function to see how it works with Socket.IO. 
Open up index.js, present at the packtchatsocket.io location, and add a new function that will sit next to the socketConnection function, as seen in the following code: var io = require('socket.io');var socketAuth = function socketAuth(socket, next){return next();return next(new Error('Nothing Defined'));};var socketConnection = function socketConnection(socket){socket.emit('message', {message: 'Hey!'});};exports.startIo = function startIo(server){io = io.listen(server);var packtchat = io.of('/packtchat');packtchat.use(socketAuth);packtchat.on('connection', socketConnection);return io;}; I know that there are two returns in this function. We are going to comment one out, load the site, and then switch the lines that are commented out. The socket server that is passed in will have a reference to the handshake data that we will use shortly. The next function works just like it does in Express. If we execute it without anything, the middleware chain will continue. If it is executed with an error, it will stop the chain. Let's load up our site and test both by switching which return gets executed. We can allow or deny connections as we please now, but how do we know who is trying to connect? Cookies and sessions We will do it the same way Express does. We will look at the cookies that are passed and see if there is a session. If there is a session, then we will load it up and see what is in it. At this point, we should have the same knowledge about the Socket.IO connection that Express does about a request. The first thing we need to do is get a cookie parser. We will use a very aptly named package called cookie. This should already be installed if you updated your package.json and installed all the packages. Add a reference to this at the top of index.js present in the packtchatsocket.io location with all the other variable declarations: Var cookie = require('cookie'); And now we can parse our cookies. Socket.IO passes in the cookie with the socket object in our middleware. Here is how we parse it. Add the following code in the socketAuth function: var handshakeData = socket.request;var parsedCookie = cookie.parse(handshakeData.headers.cookie); At this point, we will have an object that has our connect.sid in it. Remember that this is a signed value. We cannot use it as it is right now to get the session ID. We will need to parse this signed cookie. This is where cookie-parser comes in. We will now create a reference to it, as follows: var cookieParser = require('cookie-parser'); We can now parse the signed connect.sid cookie to get our session ID. Add the following code right after our parsing code: var sid = cookieParser.signedCookie (parsedCookie['connect.sid'], config.secret); This will take the value from our parsedCookie and using our secret passphrase, will return the unsigned value. We will do a quick check to make sure this was a valid signed cookie by comparing the unsigned value to the original. We will do this in the following way: if (parsedCookie['connect.sid'] === sid)   return next(new Error('Not Authenticated')); This check will make sure we are only using valid signed session IDs. The following screenshot will show you the values of an example Socket.IO authorization with a cookie: Getting the session We now have a session ID so we can query Redis and get the session out. The default session store object of Express is extended by connect-redis. To use connect-redis, we use the same session package as we did with Express, express-session. 
The following code is used to create all this in index.js, present at packtchatsocket.io: //at the top with the other variable declarationsvar expressSession = require('express-session');var ConnectRedis = require('connect-redis')(expressSession);var redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort}); The final line is creating the object that will connect to Redis and get our session. This is the same command used with Express when setting the store option for the session. We can now get the session from Redis and see what's inside of it. What follows is the entire socketAuth function along with all our variable declarations: var io = require('socket.io'),connect = require('connect'),cookie = require('cookie'),expressSession = require('express-session'),ConnectRedis = require('connect-redis')(expressSession),redis = require('redis'),config = require('../config'),redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});var socketAuth = function socketAuth(socket, next){var handshakeData = socket.request;var parsedCookie = cookie.parse(handshakeData.headers.cookie);var sid = connect.utils.parseSignedCookie(parsedCookie['connect.sid'], config.secret);if (parsedCookie['connect.sid'] === sid) return next(new Error('Not Authenticated'));redisSession.get(sid, function(err, session){   if (session.isAuthenticated)   {     socket.user = session.user;     socket.sid = sid;     return next();   }   else     return next(new Error('Not Authenticated'));});}; We can use redisSession and sid to get the session out of Redis and check its attributes. As far as our packages are concerned, we are just another Express app getting session data. Once we have the session data, we check the isAuthenticated attribute. If it's true, we know the user is logged in. If not, we do not let them connect yet. We are adding properties to the socket object to store information from the session. Later on, after a connection is made, we can get this information. As an example, we are going to change our socketConnection function to send the user object to the client. The following should be our socketConnection function: var socketConnection = function socketConnection(socket){socket.emit('message', {message: 'Hey!'});socket.emit('message', socket.user);}; Now, let's load up our browser and go to http://localhost:3000. Log in and then check the browser's console. The following screenshot will show that the client is receiving the messages: Adding application-specific events The next thing to do is to build out all the real-time events that Socket.IO is going to listen for and respond to. We are just going to create the skeleton for each of these listeners. Open up index.js, present in packtchatsocket.io, and change the entire socketConnection function to the following code: var socketConnection = function socketConnection(socket){socket.on('GetMe', function(){});socket.on('GetUser', function(room){});socket.on('GetChat', function(data){});socket.on('AddChat', function(chat){});socket.on('GetRoom', function(){});socket.on('AddRoom', function(r){});socket.on('disconnect', function(){});}; Most of our emit events will happen in response to a listener. Using Redis as the store for Socket.IO The final thing we are going to add is to switch Socket.IO's internal store to Redis. By default, Socket.IO uses a memory store to save any data you attach to a socket. As we know now, we cannot have an application state that is stored only on one server. We need to store it in Redis. 
Therefore, we add it to index.js, present in packtchatsocket.io. Add the following code to the variable declarations: Var redisAdapter = require('socket.io-redis'); An application state is a flexible idea. We can store the application state locally. This is done when the state does not need to be shared. A simple example is keeping the path to a local temp file. When the data will be needed by multiple connections, then it must be put into a shared space. Anything with a user's session will need to be shared, for example. The next thing we need to do is add some code to our startIo function. The following code is what our startIo function should look like: exports.startIo = function startIo(server){io = io.listen(server);io.adapter(redisAdapter({host: config.redisHost, port: config.redisPort}));var packtchat = io.of('/packtchat');packtchat.use(socketAuth);packtchat.on('connection', socketConnection);return io;}; The first thing is to start the server listening. Next, we will call io.set, which allows us to set configuration options. We create a new redisStore and set all the Redis attributes (redisPub, redisSub, and redisClient) to a new Redis client connection. The Redis client takes a port and the hostname. Socket.IO inner workings We are not going to completely dive into everything that Socket.IO does, but we will discuss a few topics. WebSockets This is what makes Socket.IO work. All web servers serve HTTP, that is, what makes them web servers. This works great when all you want to do is serve pages. These pages are served based on requests. The browser must ask for information before receiving it. If you want to have real-time connections, though, it is difficult and requires some workaround. HTTP was not designed to have the server initiate the request. This is where WebSockets come in. WebSockets allow the server and client to create a connection and keep it open. Inside of this connection, either side can send messages back and forth. This is what Socket.IO (technically, Engine.io) leverages to create real-time communication. Socket.IO even has fallbacks if you are using a browser that does not support WebSockets. The browsers that do support WebSockets at the time of writing include the latest versions of Chrome, Firefox, Safari, Safari on iOS, Opera, and IE 11. This means the browsers that do not support WebSockets are all the older versions of IE. Socket.IO will use different techniques to simulate a WebSocket connection. This involves creating an Ajax request and keeping the connection open for a long time. If data needs to be sent, it will send it in an Ajax request. Eventually, that request will close and the client will immediately create another request. Socket.IO even has an Adobe Flash implementation if you have to support really old browsers (IE 6, for example). It is not enabled by default. WebSockets also are a little different when scaling our application. Because each WebSocket creates a persistent connection, we may need more servers to handle Socket.IO traffic then regular HTTP. For example, when someone connects and chats for an hour, there will have only been one or two HTTP requests. In contrast, a WebSocket will have to be open for the entire hour. The way our code base is written, we can easily scale up more Socket.IO servers by themselves. Ideas to take away from this article The first takeaway is that for every emit, there needs to be an on. This is true whether the sender is the server or the client. 
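As a quick illustration of that pairing, a server-side emit only does something if the client registers a listener with the same event name, and the other way around (the roomList event name below is just a placeholder, while AddChat matches the skeleton we defined earlier):

// Server side
packtchat.on('connection', function (socket) {
  socket.emit('roomList', ['lobby', 'general']);  // needs socket.on('roomList', ...) on the client
  socket.on('AddChat', function (chat) {          // fired by socket.emit('AddChat', ...) on the client
    // handle the new chat message here
  });
});

// Client side
var socket = io.connect('http://localhost:3000/packtchat');
socket.on('roomList', function (rooms) { console.log(rooms); });
socket.emit('AddChat', {message: 'Hello!'});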
It is always best to sit down and map out each event and which direction it is going. The next idea is that of note, which entails building our app out of loosely coupled modules. Our app.js kicks everything that deals with Express off. Then, it fires the startIo function. While it does pass over an object, we could easily create one and use that. Socket.IO just wants a basic HTTP server. In fact, you can just pass the port, which is what we used in our first couple of Socket.IO applications (Ping-Pong). If we wanted to create an application layer of Socket.IO servers, we could refactor this code out and have all the Socket.IO servers run on separate servers other than Express. Summary At this point, we should feel comfortable about using real-time events in Socket.IO. We should also know how to namespace our io server and create groups of users. We also learned how to authorize socket connections to only allow logged-in users to connect. Resources for Article: Further resources on this subject: Exploring streams [article] Working with Data Access and File Formats Using Node.js [article] So, what is Node.js? [article]

Testing Your Recipes and Getting Started with ChefSpec

Packt
23 Sep 2014
18 min read
This article by John Ewart, the author of Chef Essentials, introduces how to model your infrastructure, provision hosts in the cloud, and what goes into a cookbook. One important aspect of developing cookbooks is writing tests so that your recipes do not degrade over time or have bugs introduced into them in the future. This article introduces you to the following concepts: Understanding test methodologies How RSpec structures your tests Using ChefSpec to test recipes Running your tests Writing tests that cover multiple platforms (For more resources related to this topic, see here.) These techniques will prove to be very useful to write robust, maintainable cookbooks that you can use to confidently manage your infrastructure. Tests enable you to perform the following: Identify mistakes in your recipe logic Test your recipes against multiple platforms locally Develop recipes faster with local test execution before running them on a host Catch the changes in dependencies that will otherwise break your infrastructure before they get deployed Write tests for bugs to prevent them from happening again in the future (regression) Testing recipes There are a number of ways to test your recipes. One approach is to simply follow the process of developing your recipes, uploading them to your Chef server, and deploying them to a host; repeat this until you are satisfied. This has the benefit of executing your recipes on real instances, but the drawback is that it is slow, particularly if you are testing on multiple platforms, and requires that you maintain a fleet of hosts. If your cookbook run times are reasonably short and you have a small number of platforms to support them, then this might be a viable option. There is a better option to test your recipes, and it is called ChefSpec. For those who have used RSpec, a Ruby testing library, these examples will be a natural extension of RSpec. If you have never used RSpec, the beginning of this article will introduce you to RSpec's testing language and mechanisms. RSpec RSpec is a framework to test Ruby code that allows you to use a domain-specific language to provide tests, much in the same way Chef provides a domain-specific language to manipulate an infrastructure. Instead of using a DSL to manage systems, RSpec's DSL provides a number of components to express the expectations of code and simulate the execution of portions of the system (also known as mocking). The following examples in RSpec should give you a high-level idea of what RSpec can do: # simple expectationit 'should add 2 and 2 together' dox = 2 + 2expect(x).to eq 4end# Ensure any instance of Object receives a call to 'foo'# and return a pre-defined value (mocking)it 'verifies that an instance receives :foo' doexpect_any_instance_of(Object)   .to receive(:foo).and_return(:return_value)o = Object.new      expect(o.foo).to eq(:return_value)    end# Deep expectations (i.e client makes an HTTP call somewhere# inside it, make sure it happens as expected)it 'should make an authorized HTTP GET call' doexpect_any_instance_of(Net::HTTP::Get)   .to receive(:basic_auth)@client.make_http_callend RSpec and ChefSpec As with most testing libraries, RSpec enables you to construct a set of expectations, build objects and interact with them, and verify that the expectations have been met. For example, one expects that when a user logs in to the system, a database record is created, tracking their login history. 
However, to keep tests running quickly, the application should not make an actual database call; in place of the actual database call, a mock method should be used. Here, our mock method will catch the message in the database in order to verify that it was going to be sent; then, it will return an expected result so that the code does not know the database is not really there. Mock methods are methods that are used to replace one call with another; you can think of them as stunt doubles. For example, rather than making your code actually connect to the database, you might want to write a method that acts as though it has successfully connected to the database and fetched the expected data. This can be extended to model Chef's ability to handle multiple platforms and environments very nicely; code should be verified to behave as expected on multiple platforms without having to execute recipes on those platforms. This means that you can test the expectations about Red Hat recipes from an OS X development machine or Windows recipes from an Ubuntu desktop, without needing to have hosts around to deploy to for testing purposes. Additionally, the development cycle time is greatly reduced as tests can be executed much faster with expectations than when they are performing some work on an end host. You may be asking yourself, "How does this replace testing on an actual host?" The answer is that it may not, and so you should use integration testing to validate that the recipes work when deployed to real hosts. What it does allow you to do is validate your expectations of what resources are being executed, which attributes are being used, and that the logical flow of your recipes are behaving properly before you push your code to your hosts. This forms a tighter development cycle for rapid development of features while providing a longer, more formal loop to ensure that the code behaves correctly in the wild. If you are new to testing software, and in particular, testing Ruby code, this is a brief introduction to some of the concepts that we will cover. Testing can happen at many different levels of the software life cycle: Single-module level (called unit tests) Multi-module level (known as functional tests) System-level testing (also referred to as integration testing) Testing basics In the test-driven-development (TDD) philosophy, tests are written and executed early and often, typically, even before code is written. This guarantees that your code conforms to your expectations from the beginning and does not regress to a previous state of non-conformity. This article will not dive into the TDD philosophy and continuous testing, but it will provide you with enough knowledge to begin testing the recipes that you write and feel confident that they will do the correct thing when deployed into your production environment. Comparing RSpec with other testing libraries RSpec is designed to provide a more expressive testing language. This means that the syntax of an RSpec test (also referred to as a spec test or spec) is designed to create a language that feels more like a natural language, such as English. For example, using RSpec, one could write the following: expect(factorial(4)).to eq 24 If you read the preceding code, it will come out like expect factorial of 4 to equal 24. Compare this to a similar JUnit test (for Java): assertEquals(24, factorial(4)); If you read the preceding code, it would sound more like assert that the following are equal, 24 and factorial of 4. 
While this is readable by most programmers, it does not feel as natural as the one we saw earlier. RSpec also provides context and describe blocks that allow you to group related examples and shared expectations between examples in the group to help improve organization and readability. For example, consider the following spec test: describe Array do  it "should be empty when created" do       Array.new.should == []  endend Compare the preceding test to a similar NUnit (.NET testing framework) test: namespace MyTest {  using System.Collectionusing NUnit.Framework;  [TestFixture]  public class ArrayTest {       [Test]   public void NewArray() {     ArrayList list = new ArrayList();     Assert.AreEqual(0, list.size());   }}} Clearly, the spec test is much more concise and easier to read, which is a goal of RSpec. Using ChefSpec ChefSpec brings the expressiveness of RSpec to Chef cookbooks and recipes by providing Chef-specific primitives and mechanisms on top of RSpec's simple testing language. For example, ChefSpec allows you to say things like: it 'creates a file' do       expect(chef_run).to create_file('/tmp/myfile.txt')  end Here, chef_run is an instance of a fully planned Chef client execution on a designated end host, as we will see later. Also, in this case, it is expected that it will create a file, /tmp/myfile.txt, and the test will fail if the simulated run does not create such a file. Getting started with ChefSpec In order to get started with ChefSpec, create a new cookbook directory (here it is $HOME/cookbooks/mycookbook) along with a recipes and spec directory: mkdir -p ~/cookbooks/mycookbookmkdir -p ~/cookbooks/mycookbook/recipesmkdir -p ~/cookbooks/mycookbook/spec Now you will need a simple metadata.rb file inside your cookbook (here, this will be ~/cookbooks/mycookbook/metadata.rb): maintainer       "Your name here"maintainer_email "you@domain.com"license         "Apache"description     "Simple cookbook"long_description "Super simple cookbook"version         "1.0"supports         "debian" Once we have this, we now have the bare bones of a cookbook that we can begin to add recipes and tests to. Installing ChefSpec In order to get started with ChefSpec, you will need to install a gem that contains the ChefSpec libraries and all the supporting components. Not surprisingly, that gem is named chefspec and can be installed simply by running the following: gem install chefspec However, because Ruby gems often have a number of dependencies, the Ruby community has built a tool called Bundler to manage gem versions that need to be installed. Similar to how RVM provides interpreter-level version management and a way to keep your gems organized, Bundler provides gem-level version management. We will use Bundler for two reasons. In this case, we want to limit the number of differences between the versions of software you will be installing and the versions the author has installed to ensure that things are as similar as possible; secondly, this extends well to releasing production software—limiting the number of variables is critical to consistent and reliable behavior. Locking your dependencies in Ruby Bundler uses a file, specifically named Gemfile, to describe the gems that your project is dependent upon. This file is placed in the root of your project, and its contents inform Bundler which gems you are using, what versions to use, and where to find gems so that it can install them as needed. 
For example, here is the Gemfile that is being used to describe the gem versions that are used when writing these examples: source 'https://rubygems.org'gem 'chef',       '11.10.0'gem 'chefspec',   '3.2.0'gem 'colorize',   '0.6.0' Using this will ensure that the gems you install locally match the ones that are used when writing these examples. This should limit the differences between your local testing environments if you run these examples on your workstation. In order to use a Gemfile, you will need to have Bundler installed. If you are using RVM, Bundler should be installed with every gemset you create; if not, you will need to install it on your own via the following code: gem install bundler Once Bundler is installed and a Gemfile that contains the previous lines is placed in the root directory of your cookbook, you can execute bundle install from inside your cookbook's directory: user@host:~/cookbooks/mycookbook $> bundle install Bundler will parse the Gemfile in order to download and install the versions of the gems that are defined inside. Here, Bundler will install chefspec, chef, and colorize along with any dependencies those gems require that you do not already have installed. Creating a simple recipe and a matching ChefSpec test Once these dependencies are installed, you will want to create a spec test inside your cookbook and a matching recipe. In keeping with the TDD philosophy, we will first create a file, default_spec.rb, in the spec directory. The name of the spec file should match the name of the recipe file, only with the addition of _spec at the end. If you have a recipe file named default.rb (which most cookbooks will), the matching spec test would be contained in a file named default_spec.rb. Let's take a look at a very simple recipe and a matching ChefSpec test. Writing a ChefSpec test The test, shown as follows, will verify that our recipe will create a new file, /tmp/myfile.txt: require 'chefspec'describe 'mycookbook::default' dolet(:chef_run) {   ChefSpec::Runner.new.converge(described_recipe)}  it 'creates a file' do       expect(chef_run).to create_file('/tmp/myfile.txt')  endend Here, RSpec uses a describe block similar to the way Chef uses a resource block (again, blocks are identified by the do ... end syntax or code contained inside curly braces) to describe a resource, in this case, the default recipe inside of mycookbook. The described resource has a number of examples, and each example is described by an it block such as the following, which comes from the previous spec test: it 'creates a file' doexpect(chef_run).to create_file('/tmp/myfile.txt')  end The string given to the it block provides the example with a human-readable description of what the example is testing; in this case, we are expecting that the recipe creates a file. When our recipes are run through ChefSpec, the resources described are not actually created or modified. Instead, a model of what would happen is built as the recipes are executed. This means that ChefSpec can validate that an expected action would have occurred if the recipe were to be executed on an end host during a real client run. It's important to note that each example block resets expectations before it is executed, so any expectations defined inside of a given test will not fall through to other tests. Because most of the tests will involve simulating a Chef client run, we want to run the simulation every time. 
There are two options: execute the code in every example or use a shared resource that all the tests can take advantage of. In the first case, the test will look something like the following: it 'creates a file' dochef_run = ChefSpec::Runner.new.converge(described_recipe)  expect(chef_run).to create_file('/tmp/myfile.txt')  end The primary problem with this approach is remembering that every test will have to have the resource running at the beginning of the test. This translates to a large amount of duplicated code, and if the client needs to be configured differently, then the code needs to be changed for all the tests. To solve this problem, RSpec provides access to a shared resource through a built-in method, let. Using let allows a test to define a shared resource that is cached for each example and reset as needed for the following examples. This resource is then accessible inside of each block as a local variable, and RSpec takes care of knowing when to initialize it as needed. Our example test uses a let block to define the chef_run resource, which is described as a new ChefSpec runner for the described recipe, as shown in the following code: let(:chef_run) {ChefSpec::Runner.new.converge(described_recipe)}   Here, described_recipe is a ChefSpec shortcut for the name of the recipe provided in the describe block. Again, this is a DRY (don't repeat yourself) mechanism that allows us to rename the recipe and then only have to change the name of the description rather than hunt through the code. These techniques make tests better able to adapt to changes in names and resources, which reduces code rot as time goes by. Building our recipe The recipe, as defined here, is a very simple recipe whose only job is to create a simple file, /tmp/myfile.txt, on the end host: file "/tmp/myfile.txt" doowner "root"  group "root"  mode "0755"  action :createend Put this recipe into the recipes/default.rb file of your cookbook so that you have the following file layout: mycookbook/|- recipes/|     |- default.rb|- spec/       |- default_spec.rb Executing tests In order to run the tests, we use the rspec application. This is a Ruby script that comes with the RSpec gem, which will run the test scripts as spec tests using the RSpec language. It will also use the ChefSpec extensions because in our spec test, we have included them via the line require 'chefspec' at the top of our default_spec.rb file. Here, rspec is executed through Bundler to ensure that the desired gem versions, as specified in our Gemfile, are used at runtime without having to explicitly load them. This is done using the bundle exec command: bundle exec rspec spec/default_spec.rb This will run RSpec using Bundler and process the default_spec.rb file. As it runs, you will see the results of your tests, a . (period) for tests that pass, and an F for any tests that fail. Initially, the output from rspec will look like this: Finished in 0.17367 seconds1 example, 0 failures RSpec says that it completed the execution in 0.17 seconds and that you had one example with zero failures. However, the results would be quite different if we have a failed test; RSpec will tell us which test failed and why. Understanding failures RSpec is very good at telling you what went wrong with your tests; it doesn't do you any good to have failing tests if it's impossible to determine what went wrong. When an expectation in your test is not met, RSpec will tell you which expectation was unmet, what the expected value was, and what value was seen. 
In order to see what happens when a test fails, modify your recipe to ensure that the test fails. Look in your recipe for the following file resource: file "/tmp/myfile.txt" do Replace the file resource with a different filename, such as myfile2.txt, instead of myfile.txt, like the following example: file "/tmp/myfile2.txt" do Next, rerun your spec tests; you will see that the test is now failing because the simulated Chef client execution did something that was unexpected by our spec test. An example of this new execution would look like the following: [user@host]$ bundle exec rspec spec/default_spec.rbFFailures:1) my_cookbook::default creates a file     Failure/Error: expect(chef_run).to create_file('/tmp/myfile.txt')       expected "file[/tmp/myfile.txt]" with action :create to be in Chef run. Other file resources:         file[/tmp/myfile2.txt]     # ./spec/default_spec.rb:9:in `block (2 levels) in <top (required)>'Finished in 0.18071 seconds1 example, 1 failure Notice that instead of a dot, the test results in an F; this is because the test is now failing. As you can see from the previous output, RSpec is telling us the following: The creates a file example in the 'my_cookbook::default' test suite failed Our example failed in the ninth line of default_spec.rb (as indicated by the line that contained ./spec/default_spec.rb:9) The file resource /tmp/myfile.txt was expected to be operated on with the :create action The recipe interacted with a file resource /tmp/myfile2.txt instead of /tmp/myfile.txt RSpec will continue to execute all the tests in the files specified on the command line, printing out their status as to whether they passed or failed. If your tests are well written and run in isolation, then they will have no effect on one another; it should be safe to execute all of them even if some fail so that you can see what is no longer working. Summary RSpec with ChefSpec extensions provides us with incredibly powerful tools to test our cookbooks and recipes. In this article, you have seen how to develop basic ChefSpec tests for your recipes, organize your spec tests inside of your cookbook, execute and analyze the output of your spec tests, and simulate the execution of your recipes across multiple platforms. Resources for Article:  Further resources on this subject: Chef Infrastructure [article] Going Beyond the Basics [article] Setting Up a Development Environment [article]

Galera Cluster Basics

Packt
23 Sep 2014
5 min read
This article written by Pierre MAVRO, the author of MariaDB High Performance, provides a brief introduction to Galera Cluster. (For more resources related to this topic, see here.) Galera Cluster is a synchronous multimaster solution created by Codership. It's a patch for MySQL and MariaDB with its own commands and configuration. On MariaDB, it has been officially promoted as the MariaDB Cluster. Galera Cluster provides certification-based replication. This means that each node certifies the replicated write set against other write sets. You don't have to worry about data integrity, as it manages it automatically and very well. Galera Cluster is a young product, but is ready for production. If you have already heard of MySQL Cluster, don't be confused; this is not the same thing at all. MySQL Cluster is a solution that has not been ported to MariaDB due to its complexity, code, and other reasons. MySQL Cluster provides availability and partitioning, while Galera Cluster provides consistency and availability. Galera Cluster is a simple yet powerful solution. How Galera Cluster works The following are some advantages of Galera Cluster: True multimaster: It can read and write to any node at any time Synchronous replication: There is no slave lag and no data is lost at node crash Consistent data: All nodes have the same state (same data exists between nodes at a point in time) Multithreaded slave: This enables better performance with any workload No need of an HA Cluster for management: There are no master-slave failover operations (such as Pacemaker, PCR, and so on) Hot standby: There is no downtime during failover Transparent to applications: No specific drivers or application changes are required No read and write splitting needed: There is no need to split the read and write requests WAN: Galera Cluster supports WAN replication Galera Cluster needs at least three nodes to work properly (because of the notion of quorum, election, and so on). You can also work with a two-node cluster, but you will need an arbiter (hence three nodes). The arbiter could be used on another machine available in the same LAN of your Galera Cluster, if possible. The multimaster replication has been designed for InnoDB/XtraDB. It doesn't mean you can't perform a replication with other storage engines! If you want to use other storage engines, you will be limited by the following: They can only write on a single node at a time to maintain consistency. Replication with other nodes may not be fully supported. Conflict management won't be supported. Applications that connect to Galera will only be able to write on a single node (IP/DNS) at the same time. As you can see in the preceding diagram, HTTP and App servers speak directly to their respective DBMS servers without wondering which node of the Galera Cluster they are targeting. Usually, without Galera Cluster, you can use a cluster software such as Pacemaker/Corosync to get a VIP on a master node that can switch over in case a problem occurs. No need to get PCR in that case; a simple VIP with a custom script will be sufficient to check whether the server is in sync with others is enough. Galera Cluster uses the following advanced mechanisms for replication: Transaction reordering: Transactions are reordered before commitment to other nodes. This increases the number of successful transaction certification pass tests. Write sets: This reduces the number of operations between nodes by writing sets in a single write set to avoid too much node coordination. 
Database state machine: Read-only transactions are processed on the local node. Write transactions are executed locally on shadow copies and then broadcasted as a read set to the other nodes for certification and commit. Group communication: High-level abstraction for communication between nodes to guarantee consistency (gcomm or spread). To get consistency and similar IDs between nodes, Galera Cluster uses GTID, similar to MariaDB 10 replication. However, it doesn't use the MariaDB GTID replication mechanism at all, as it has its own implementation for its own usage. Galera Cluster limitations Galera Cluster has limitations that prevent it from working correctly. Do not go live in production if you haven't checked that your application is in compliance with the limitations listed. The following are the limitations: Galera Cluster only fully supports InnoDB tables. TokuDB is planned but not yet available and MyISAM is partially supported. Galera Cluster uses primary keys on all your tables (mandatory) to avoid different query execution orders between all your nodes. If you do not do it on your own, Galera will create one. The delete operation is not supported on the tables without primary keys. Locking/unlocking tables and lock functions are not supported. They will be ignored if you try to use them. Galera Cluster disables query cache. XA transactions (global transactions) are not supported. Query logs can't be directed to a table, but can be directed to a file instead. Other less common limitations exist (please refer to the full list if you want to get them all: http://galeracluster.com/documentation-webpages/limitations.html) but in most cases, you shouldn't be annoyed with those ones. Summary This article introduced the benefits and drawbacks of Galera Cluster. It also discussed the features of Galera Cluster that makes it a good solution for write replications. Resources for Article: Further resources on this subject: Building a Web Application with PHP and MariaDB – Introduction to caching [Article] Using SHOW EXPLAIN with running queries [Article] Installing MariaDB on Windows and Mac OS X [Article]

Animations in Cocos2d-x

Packt
23 Sep 2014
24 min read
In this article, created by Siddharth Shekhar, the author of Learning Cocos2d-x Game Development, we will learn different tools that can be used to animate the character. Then, using these animations, we will create a simple state machine that will automatically check whether the hero is falling or is being boosted up into the air, and depending on the state, the character will be animated accordingly. We will cover the following in this article: Animation basics TexturePacker Creating spritesheet for the player Creating and coding the enemy animation Creating the skeletal animation Coding the player walk cycle (For more resources related to this topic, see here.) Animation basics First of all, let's understand what animation is. An animation is made up of different images that are played in a certain order and at a certain speed, for example, movies that run images at 30 fps or 24 fps, depending on which format it is in, NTSC or PAL. When you pause a movie, you are actually seeing an individual image of that movie, and if you play the movie in slow motion, you will see the frames or images that make up to create the full movie. In games while making animations, we will do the same thing: adding frames and running them at a certain speed. We will control the images to play in a particular sequence and interval by code. For an animation to be "smooth", you should have at least 24 images or frames being played in a second, which is known as frames per second (FPS). Each of the images in the animation is called a frame. Let's take the example of a simple walk cycle. Each walk cycle should be of 24 frames. You might say that it is a lot of work, and for sure it is, but the good news is that these 24 frames can be broken down into keyframes, which are important images that give the illusion of the character walking. The more frames you add between these keyframes, the smoother the animation will be. The keyframes for a walk cycle are Contact, Down, Pass, and Up positions. For mobile games, as we would like to get away with as minimal work as possible, instead of having all the 24 frames, some games use just the 4 keyframes to create a walk animation and then speed up the animation so that player is not able to see the missing frames. So overall, if you are making a walk cycle for your character, you will create eight images or four frames for each side. For a stylized walk cycle, you can even get away with a lesser number of frames. For the animation in the game, we will create images that we will cycle through to create two sets of animation: an idle animation, which will be played when the player is moving down, and a boost animation, which will get played when the player is boosted up into the air. Creating animation in games is done using two methods. The most popular form of animation is called spritesheet animation and the other is called skeletal animation. Spritesheet animation Spritesheet animation is when you keep all the frames of the animation in a single file accompanied by a data file that will have the name and location of each of the frames. This is very similar to the BitmapFont. The following is the spritesheet we will be using in the game. For the boost and idle animations, each of the frames for the corresponding animation will be stored in an array and made to loop at a particular predefined speed. The top four images are the frames for the boost animation. 
Whenever the player taps on the screen, the animation will cycle through these four images appearing as if the player is boosted up because of the jetpack. The bottom four images are for the idle animation when the player is dropping down due to gravity. In this animation, the character will look as if she is blinking and the flames from the jetpack are reduced and made to look as if they are swaying in the wind. Skeletal animation Skeletal animation is relatively new and is used in games such as Rayman Origins that have loads and loads of animations. This is a more powerful way of making animations for 2D games as it gives a lot of flexibility to the developer to create animations that are fast to produce and test. In the case of spritesheet animations, if you had to change a single frame of the animation, the whole spritesheet would have to be recreated causing delay; imagine having to rework 3000 frames of animations in your game. If each frame was hand painted, it would take a lot of time to produce the individual images causing delay in production time, not to mention the effort and time in redrawing images. The other problem is device memory. If you are making a game for the PC, it would be fine, but in the case of mobiles where memory is limited, spritesheet animation is not a viable option unless cuts are made to the design of the game. So, how does skeletal animation work? In the case of skeletal animation, each item to be animated is stored in a separate spritesheet along with the data file for the locations of the individual images for each body part and object to be animated, and another data file is generated that positions and rotates the individual items for each of the frames of the animation. To make this clearer, look at the spritesheet for the same character created with skeletal animation: Here, each part of the body and object to be animated is a separate image, unlike the method used in spritesheet animation where, for each frame of animation, the whole character is redrawn. TexturePacker To create a spritesheet animation, you will have to initially create individual frames in Photoshop, Illustrator, GIMP or any other image editing software. I have already made it and have each of the images for the individual frames ready. Next, you will have to use a software to create spritesheets from images. TexturePacker is a very popular software that is used by industry professionals to create spritesheets. You can download it from https://www.codeandweb.com/. These are the same guys who made PhysicsEditor, which we used to make shapes for Box2D. You can use the trial version of this software. While downloading, choose the version that is compatible with your operating system. Fortunately, TexturePacker is available for all the major operating systems, including Linux. Refer to the following screenshot to check out the steps to use TexturePacker: Once you have downloaded TexturePacker, you have three options: you can click to try the full version for a week, or you can purchase the license, or click on the essential version to use in the trial version. In the trial version, some of the professional features are disabled, so I recommend trying the professional features for a week. Once you click the option, you should see the following interface: Texture packer has three panels; let's start from the right. The right-hand side panel will display the names of all the images that you select to create the spritesheet. 
The center panel is a preview window that shows how the images are packed. The left-hand side panel gives you options to store the packed texture and data file to be published to and decide the maximum size of the packed image. The Layout section gives a lot of flexibility to set up the individual images in TexturePacker, and then you have the advanced section. Let's look at some of the key items on the panel on the left. The display section The display section consists of the following options: Data Format: As we saw earlier, each exported file creates a spritesheet that has a collection of images and a data file that keeps track of the positions on the spritesheet. The data format usually changes depending upon the framework or engine. In TexturePacker, you can select the framework that you are using to develop the game, and TexturePacker will create a data file format that is compatible with the framework. If you look at the drop-down menu, you can see a lot of popular frameworks and engines in the list such as 2DToolkit, OGRE, Cocos2d, Corona SDK, LibGDX, Moai, Sparrow/Starling, SpriteKit, and Unity. You can also create a regular JSON file too if you wish. Java Script Object Notification (JSON) is similar to an XML file that is used to store and retrieve data. It is a collection of names and value pairs used for data interchanging. Data file: This is the location where you want the exported file to be placed. Texture format: Usually, this is set to .png, but you can select the one that is most convenient. Apart from PNG, you also have PVR, which is used so that people cannot view the image readily and also provides image compression. Png OPT file: This is used to set the quality of PNG images. Image format: This sets the RGB format to be used; usually, you would want this to be set at the default value. AutoSD: If you are going to create images for different resolutions, this option allows you to create resources depending on the different resolutions you are developing the game for, without the need for going into the graphics software, shrinking the images and packing them again for all the resolutions. Content protection: This protects the image and data file with an encryption key so that people can't steal spritesheets from the game file. The Geometry section The Geometry section consists of the following options: Max size: You can specify the maximum width and height of the spritesheet depending upon the framework. Usually, all frameworks allow up to 4092 x 4092, but it mostly depends on the device. Fixed size: Apparently, if you want a fixed size, you will go with this option. Size constraint: Some frameworks prefer the spritesheets to be in the power of 2 (POT), for example, 32x32, 64x64, 256x256, and so on. If this is the case, you need to select the size accordingly. For Cocos2d, you can choose any size. Scale: This is used to scale up or scale down the image. The Layout section The Layout section consists of the following options: Algorithm: This is the algorithm that will be used to make sure that the images you select to create the spritesheet are packed in the most efficient way. If you are using the pro version, choose MaxRects, but if you are using the essential version, you will have to choose Basic. Border Padding / Shape Padding: Border padding packs the gap between the border of the spritesheet and the image that it is surrounding. Shape padding is the padding between the individual images of the spritesheets. 
If you find that the images are getting overlapped while playing the animation in the game, you might want to increase the values to avoid overlapping. Trim: This removes the extra alpha that is surrounding the image, which would unnecessarily increase the image size of the spritesheet. Advanced features The following are some miscellaneous options in TexturePacker: Texture path: This appends the path of the texture file at the beginning of the texture name Clean transparent pixels: This sets the transparent pixels color to #000 Trim sprite names: This will remove the extension from the names of the sprites (.png and .jpg), so while calling for the name of the frame, you will not have to use extensions Creating a spritesheet for the player Now that we understand the different items in the TextureSettings panel of TexturePacker, let's create our spritesheet for the player animation from individual frames provided in the Resources folder. Open up the folder in the system and select all the images for the player that contains the idle and boost frames. There will be four images for each of the animation. Select all eight images and click-and-drag all the images to the Sprites panel, which is the right-most panel of TexturePacker. Once you have all the images on the Sprites panel, the preview panel at the center will show a preview of the spritesheet that will be created: Now on the TextureSettings panel, for the Data format option, select cocos2d. Then, in the Data file option, click on the folder icon on the right and select the location where you would like to place the data file and give the name as player_anim. Once selected, you will see that the Texture file location also auto populates with the same location. The data file will have a format of .plist and the texture file will have an extension of .png. The .plist format creates data in a markup language similar to XML. Although it is more common on Mac, you can use this data type independent of the platform you use while developing the game using Cocos2d-x. Keep the rest of the settings the same. Save the file by clicking on the save icon on the top to a location where the data and spritesheet files are saved. This way, you can access them easily the next time if you want to make the same modifications to the spritesheet. Now, click on the Publish button and you will see two files, player_anim.plist and player_anim.png, in the location you specified in the Data file and Location file options. Copy and paste these two files in the Resources folder of the project so that we can use these files to create the player states. Creating and coding enemy animation Now, let's create a similar spritesheet and data file for the enemy also. All the required files for the enemy frames are provided in the Resources folder. So, once you create the spritesheet for the enemy, it should look something like the following screenshot. Don't worry if the images are shown in the wrong sequence, just make sure that the files are numbered correctly from 1 to 4 and it is in the sequence the animations needs to be played in. 
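Before we wire this spritesheet into code, it helps to know roughly what the published .plist data file contains. The following is a heavily abridged, hand-written sketch of the cocos2d plist format; the frame names match our enemy frames, but the coordinates and sizes are placeholder values rather than anything TexturePacker will actually generate:

    <?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0">
    <dict>
        <key>frames</key>
        <dict>
            <key>enemy_idle_1.png</key>
            <dict>
                <!-- position and size of this frame on the packed sheet -->
                <key>frame</key>
                <string>{{2,2},{64,96}}</string>
                <key>offset</key>
                <string>{0,0}</string>
                <key>rotated</key>
                <false/>
                <key>sourceSize</key>
                <string>{64,96}</string>
            </dict>
            <!-- enemy_idle_2.png to enemy_idle_4.png follow the same pattern -->
        </dict>
        <key>metadata</key>
        <dict>
            <key>format</key>
            <integer>2</integer>
            <key>textureFileName</key>
            <string>enemy_anim.png</string>
        </dict>
    </dict>
    </plist>

CCSpriteFrameCache reads this file once and then lets us look up every frame by its original image name, which is exactly what the loop in the following code listing relies on.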
Now, place the enemy_anim.png spritesheet and its data file in the Resources folder of the project directory and add the following lines of code to the Enemy.cpp file to animate the enemy:

    //enemy animation
    CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create("enemy_anim.png");
    CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
    cache->addSpriteFramesWithFile("enemy_anim.plist");

    this->createWithSpriteFrameName("enemy_idle_1.png");
    this->addChild(spritebatch);

    //idle animation
    CCArray* animFrames = CCArray::createWithCapacity(4);

    char str1[100] = {0};
    for (int i = 1; i <= 4; i++)
    {
        sprintf(str1, "enemy_idle_%d.png", i);
        CCSpriteFrame* frame = cache->spriteFrameByName(str1);
        animFrames->addObject(frame);
    }

    CCAnimation* idleanimation = CCAnimation::createWithSpriteFrames(animFrames, 0.25f);
    this->runAction(CCRepeatForever::create(CCAnimate::create(idleanimation)));

This is very similar to the code for the player. The only difference is that for the enemy, instead of calling the functions on the hero object, we call them on the enemy class itself. So, if you now build and run the game, you should see the enemy being animated. The following is the screenshot from the updated code. You can now see the flames from the booster engine of the enemy. Sadly, he doesn't have a boost animation, but his feet swing in the air. Now that we have mastered the spritesheet animation technique, let's see how to create a simple animation using the skeletal animation technique.

Creating the skeletal animation

Using this technique, we will create a very simple player walk cycle. For this, there is a piece of software called Spine by Esoteric Software, which is a widely used professional tool for creating skeletal animations for 2D games. The software can be downloaded from the company's website at http://esotericsoftware.com/spine-purchase:

There are three versions of the software available: the trial, essential, and professional versions. Although the majority of the features of the professional version are available in the essential version, it doesn't have ghosting, meshes, free-form deformation, skinning, and IK pinning, which is in beta stage. The inclusion of these features does speed up the animation process and certainly takes out a lot of manual work for the animator or illustrator. To learn more about these features, visit the website and hover the mouse over them to get a better understanding of what they do. You can follow along by downloading the trial version, which can be done by clicking the Download trial link on the website. Spine is available for all platforms, including Windows, Mac, and Linux, so download it for the OS of your choice. On Mac, after downloading and running the software, it will ask you to install X11, or you can download and install it from http://xquartz.macosforge.org/landing/. After downloading and installing the plugin, you can open Spine. Once the software is up and running, you should see the following window:

Now, create a new project by clicking on the spine icon on the top left. As we can see in the screenshot, we are now in the SETUP mode, where we set up the character. On the Tree panel on the right-hand side, in the Hierarchy pane, select the Images folder. After selecting the folder, you will be able to select the path where the individual files are located for the player. Navigate to the player_skeletal_anim folder where all the images are present.
Once selected, you will see the panel populate with the images that are present in the folder, namely the following: bookGame_player_Lleg bookGame_player_Rleg bookGame_player_bazooka bookGame_player_body bookGame_player_hand bookGame_player_head Now drag-and-drop all the files from the Images folder onto the scene. Don't worry if the images are not in the right order. In the Draw Order dropdown in the Hierarchy panel, you can move around the different items by drag-and-drop to make them draw in the order that you want them to be displayed. Once reordered, move the individual images on the screen to the appropriate positions: You can move around the images by clicking on the translate button on the bottom of the screen. If you hover over the buttons, you can see the names of the buttons. We will now start creating the bones that we will use to animate the character. In the panel on the bottom of the Tools section, click on the Create button. You should now see the cursor change to the bone creation icon. Before you create a bone, you have to always select the bone that will be the parent. In this case, we select the root bone that is in the center of the character. Click on it and drag downwards and hold the Shift key at the same time. Click-and-drag downwards up to the end of the blue dress of the character; make sure that the blue dress is highlighted. Now release the mouse button. The end point of this bone will be used as the hip joint from where the leg bones will be created for the character. Now select the end of the newly created bone, which you made in the last step, and click-and-drag downwards again holding Shift at the same time to make a bone that goes all the way to the end of the leg. With the leg still getting highlighted, release the mouse button. To create the bone for the other leg, create a new bone again starting from end of the first bone and the hip joint, and while the other leg is selected, release the mouse button to create a bone for the leg. Now, we will create a bone for the hand. Select the root node, the node in the middle of the character while holding Shift again, and draw a bone to the hand while the hand is highlighted. Create a bone for the head by again selecting the root node selected earlier. Draw a bone from the root node to the head while holding Shift and release the mouse button once you are near the ear of the character and the head is highlighted. You will notice that we never created a bone for the bazooka. For the bazooka, we will make the hand as the parent bone so that when the hand gets rotated, the bazooka also rotates along. Click on the bazooka node on the Hierarchy panel (not the image) and drag it to the hand node in the skeleton list. You can rotate each of the bones to check whether it is rotating properly. If not, you can move either the bones or images around by locking either one of them in its place so that you can move or rotate the other freely by clicking either the bones or the images button in the compensate button at the bottom of the screen. The following is the screenshot that shows my setup. You can use it to follow and create the bones to get a more satisfying animation. To animate the character, click on the SETUP button on the top and the layout will change to ANIMATE. You will see that a new timeline has appeared at the bottom. Click on the Animations tab in Hierarchy and rename the animation name from animation to runCycle by double-clicking on it. We will use the timeline to animate the character. 
Click on the Dopesheet icon at the bottom. This will show all the keyframes that we have made for the animation. As we have not created any, the dopesheet is empty. To create our first keyframe, we will click on the legs and rotate both the bones so that it reflects the contact pose of the walk cycle. Now to set a keyframe, click on the orange-colored key icon next to Rotate in the Transform panel at the bottom of the screen. Click on the translate key, as we will be changing the translation as well later. Once you click on it, the dopesheet will show the bones that you just rotated and also show what changes you made to the bone. Here, we rotated the bone, so you will see Rotation under the bones, and as we clicked on the translate key, it will show the Translate also. Now, frame 24 is the same as frame 0. So, to create the keyframe at frame 24, drag the timeline scrubber to frame 24 and click on the rotate and translate keys again. To set the keyframe at the middle where the contact pose happens but with opposite legs, rotate the legs to where the opposite leg was and select the keys to create a keyframe. For frames 6 and 18, we will keep the walk cycle very simple, so just raise the character above by selecting the root node, move it up in the y direction and click the orange key next to the translate button in the Transform panel at the bottom. Remember that you have to click it once in frame 6 and then move the timeline scrubber to frame 18, move the character up again, and click on the key again to create keyframes for both frames 6 and 18. Now the dopesheet should look as follow: Now to play the animation in a loop, click on the Repeat Animation button to the right of the Play button and then on the Play button. You will see the simple walk animation we created for the character. Next, we will export the data required to create the animation in Cocos2d-x. First, we will export the data for the animation. Click on the Spine button on top and select Export. The following window should pop up. Select JSON and choose the directory in which you would want to save the file to and click on Export: That is not all; we have to create a spritesheet and data file just as we created one in texture packer. There is an inbuilt tool in Spine to create a packed spritesheet. Again, click on the Spine icon and this time select Texture Packer. Here, in the input directory, select the Images folder from where we imported all the images initially. For the output directory, select the location to where the PNG and data files should be saved to. If you click on the settings button, you will see that it looks very similar to what we saw in TexturePacker. Keep the default values as they are. Click on Pack and give the name as player. This will create the .png and .atlas files, which are the spritesheet and data file, respectively: You have three files instead of the two in TexturePacker. There are two data files and an image file. While exporting the JSON file, if you didn't give it a name, you can rename the file manually to player.json just for consistency. Drag the player.atlas, player.json, and player.png files into the project folder. Finally, we come to the fun part where we actually use the data files to animate the character. For testing, we will add the animations to the HelloWorldScene.cpp file and check the result. Later, when we add the main menu, we will move it there so that it shows as soon as the game is launched. 
Coding the player walk cycle

If you want to test the animations in the current project itself, add the following to the HelloWorldScene.h file first:

    #include <spine/spine-cocos2dx.h>

Include the spine header file and create a variable named skeletonNode of the CCSkeletonAnimation type:

    extension::CCSkeletonAnimation* skeletonNode;

Next, we initialize the skeletonNode variable in the HelloWorldScene.cpp file:

    skeletonNode = extension::CCSkeletonAnimation::createWithFile("player.json", "player.atlas", 1.0f);
    skeletonNode->addAnimation("runCycle", true, 0, 0);
    skeletonNode->setPosition(ccp(visibleSize.width/2, skeletonNode->getContentSize().height/2));
    addChild(skeletonNode);

Here, we pass the two data files to the createWithFile() function of CCSkeletonAnimation. Then, we start it with addAnimation and give it the animation name we used when we created the animation in Spine, which is runCycle. We next set the position of skeletonNode; we set it right above the bottom of the screen. Finally, we add skeletonNode to the display list. Now, if you build and run the project, you will see the player animated forever in a loop at the bottom of the screen:

On the left, we have the animation we created using TexturePacker from CodeAndWeb, and in the middle, we have the animation that was created using Spine from Esoteric Software. Both techniques have their own sets of advantages, and the choice also depends upon the type and scale of the game that you are making. Depending on this, you can choose the tool that is better tuned to your needs. If you have a smaller number of animations in your game and you have good artists, you could use regular spritesheet animations. If you have a lot of animations or don't have good animators on your team, Spine makes the animation process a lot less cumbersome. Either way, both tools in professional hands can create very good animations that will give life to the characters in the game and therefore give a lot of character to the game itself.

Summary

This article took a very brief look at animations and at how to create an animated character in the game using two of the most popular animation techniques used in games. We also looked at FSMs and at how we can create a simple state machine between two states and make the animation change according to the state of the player at that moment.

Resources for Article:

Further resources on this subject:
- Moving the Space Pod Using Touch [Article]
- Sprites [Article]
- Cocos2d-x: Installation [Article]

Handling Long-running Requests in Play

Packt
22 Sep 2014
18 min read
In this article by Julien Richard-Foy, author of Play Framework Essentials, we will dive in the framework internals and explain how to leverage its reactive programming model to manipulate data streams. (For more resources related to this topic, see here.) Firstly, I would like to mention that the code called by controllers must be thread-safe. We also noticed that the result of calling an action has type Future[Result] rather than just Result. This article explains these subtleties and gives answers to questions such as "How are concurrent requests processed by Play applications?" More precisely, this article presents the challenges of stream processing and the way the Play framework solves them. You will learn how to consume, produce, and transform data streams in a non-blocking way using the Iteratee library. Then, you will leverage these skills to stream results and push real-time notifications to your clients. By the end of the article, you will be able to do the following: Produce, consume, and transform streams of data Process a large request body chunk by chunk Serve HTTP chunked responses Push real-time notifications using WebSockets or server-sent events Manage the execution context of your code Play application's execution model The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do? For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system runs on the same machine as the web application but this is, however, not a requirement. In fact, there are chances that in real-world projects, your database system is decoupled from your HTTP layer and that both run on different machines. It means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service, or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives a finer control on how to scale the system (more details about that are given further in this article). Anyway, the point is that the HTTP layer may essentially spend time waiting. With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model. That is, a model where each request is processed in its own thread.  Threaded execution model Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time waiting for some remote data. Each action's execution is distinguished by a particular color. 
In this fictive example, the action handling the first request may execute a query to a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a distant web service and then a second one, after the response of the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a distant web service that streams a response of an infinite size, hence, the multiple lines between the purple rectangles. The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation, because they consume memory (essentially because each thread has its own stack), and during execution, when the scheduler switches contexts. However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads, and thus save resources. This is exactly what the execution model used by Play—the evented execution model—does, as depicted in the following diagram: Evented execution model Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use, that's why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure, same for the third green rectangle, and so on). A comparison between the threaded and evented models can be found in the master's thesis of Benjamin Erb, Concurrent Programming for Scalable Web Architectures, 2012. An online version is available at http://berb.github.io/diploma-thesis/. An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalent in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than your number of cores, you necessarily have threads in an idle state (that is, waiting). This means, if we suppose that the machine executing the application has only two cores, in the first figure, there is even time spent waiting in the rectangles! Scaling up your server The previous section raises the question of how to handle a higher number of concurrent requests, as depicted in the following diagram: A server under an increasing load The previous section explained how to avoid wasting resources to leverage the computing power of your server. But actually, there is no magic; if you want to compute even more things per unit of time, you need more computing power, as depicted in the following diagram: Scaling using more powerful hardware One solution could be to have a more powerful server. 
But you could be smarter than that and avoid buying expensive hardware by studying the shape of the workload and make appropriate decisions at the software-level. Indeed, there are chances that your workload varies a lot over time, with peaks and holes of activity. This information suggests that if you wanted to buy more powerful hardware, its performance characteristics would be drawn by your highest activity peak, even if it occurs very occasionally. Obviously, this solution is not optimal because you would buy expensive hardware even if you actually needed it only one percent of the time (and more powerful hardware often also means more power-consuming hardware). A better way to handle the workload elasticity consists of adding or removing server instances according to the activity level, as depicted in the following diagram: Scaling using several server instances This architecture design allows you to finely (and dynamically) tune your server capacity according to your workload. That's actually the cloud computing model. Nevertheless, this architecture has a major implication on your code; you cannot assume that subsequent requests issued by the same client will be handled by the same server instance. In practice, it means that you must treat each request independently of each other; you cannot for instance, store a counter on a server instance to count the number of requests issued by a client (your server would miss some requests if one is routed to another server instance). In a nutshell, your server has to be stateless. Fortunately, Play is stateless, so as long as you don't explicitly have a mutable state in your code, your application is stateless. Note that the first implementation I gave of the shop was not stateless; indeed the state of the application was stored in the server's memory. Embracing non-blocking APIs In the first section of this article, I claimed the superiority of the evented execution model over the threaded execution model, in the context of web servers. That being said, to be fair, the threaded model has an advantage over the evented model: it is simpler to program with. Indeed, in such a case, the framework is responsible for creating the threads and the JVM is responsible for scheduling the threads, so that you don't even have to think about this at all, yet your code is concurrently executed. On the other hand, with the evented model, concurrency control is explicit and you should care about it. Indeed, the fact that the same execution thread is used to run several concurrent actions has an important implication on your code: it should not block the thread. Indeed, while the code of an action is executed, no other action code can be concurrently executed on the same thread. What does blocking mean? It means holding a thread for too long a duration. It typically happens when you perform a heavy computation or wait for a remote response. However, we saw that these cases, especially waiting for remote responses, are very common in web servers, so how should you handle them? You have to wait in a non-blocking way or implement your heavy computations as incremental computations. In all the cases, you have to break down your code into computation fragments, where the execution is managed by the execution context. In the diagram illustrating the evented execution model, computation fragments are materialized by the rectangles. 
You can see that rectangles of different colors are interleaved; you can find rectangles of another color between two rectangles of the same color. However, by default, the code you write forms a single block of execution instead of several computation fragments. It means that, by default, your code is executed sequentially; the rectangles are not interleaved! This is depicted in the following diagram:

Evented execution model running blocking code

The previous figure still shows both the execution threads. The second one handles the blue action and then the purple infinite action, so that all the other actions can only be handled by the first execution context. This figure illustrates the fact that while the evented model can potentially be more efficient than the threaded model, it can also have negative consequences on the performance of your application: infinite actions block an execution thread forever, and the sequential execution of actions can lead to much longer response times.

So, how can you break down your code into blocks that can be managed by an execution context? In Scala, you can do so by wrapping your code in a Future block:

    Future {
      // This is a computation fragment
    }

The Future API comes from the standard Scala library. For Java users, Play provides a convenient wrapper named play.libs.F.Promise:

    Promise.promise(() -> {
      // This is a computation fragment
    });

Such a block is a value of type Future[A] or, in Java, Promise<A> (where A is the type of the value computed by the block). We say that these blocks are asynchronous because they break the execution flow; you have no guarantee that the block will be sequentially executed before the following statement. When the block is actually evaluated depends on the execution context implementation that manages it. The role of an execution context is to schedule the execution of computation fragments. In the figure showing the evented model, the execution context consists of a thread pool containing two threads (represented by the two lines under the rectangles). Actually, each time you create an asynchronous value, you have to supply the execution context that will manage its evaluation. In Scala, this is usually achieved using an implicit parameter of type ExecutionContext. You can, for instance, use an execution context provided by Play that consists, by default, of a thread pool with one thread per processor:

    import play.api.libs.concurrent.Execution.Implicits.defaultContext

In Java, this execution context is automatically used by default, but you can explicitly supply another one:

    Promise.promise(() -> { ... }, myExecutionContext);

Now that you know how to create asynchronous values, you need to know how to manipulate them. For instance, a sequence of several Future blocks is concurrently executed; how do we define an asynchronous computation depending on another one?
You can eventually schedule a computation after an asynchronous value has been resolved using the foreach method:

    val futureX = Future { 42 }
    futureX.foreach(x => println(x))

In Java, you can perform the same operation using the onRedeem method:

    Promise<Integer> futureX = Promise.promise(() -> 42);
    futureX.onRedeem((x) -> System.out.println(x));

More interestingly, you can eventually transform an asynchronous value using the map method:

    val futureIsEven = futureX.map(x => x % 2 == 0)

The map method exists in Java too:

    Promise<Boolean> futureIsEven = futureX.map((x) -> x % 2 == 0);

If the function you use to transform an asynchronous value returned an asynchronous value too, you would end up with an inconvenient Future[Future[A]] value (or a Promise<Promise<A>> value, in Java). So, use the flatMap method in that case:

    val futureIsEven = futureX.flatMap(x => Future { x % 2 == 0 })

The flatMap method is also available in Java:

    Promise<Boolean> futureIsEven = futureX.flatMap((x) -> Promise.promise(() -> x % 2 == 0));

The foreach, map, and flatMap functions (or their Java equivalents) all have in common that they set a dependency between two asynchronous values; the computation they take as the parameter is always evaluated after the asynchronous computation they are applied to. Another method that is worth mentioning is zip:

    val futureXY: Future[(Int, Int)] = futureX.zip(futureY)

The zip method is also available in Java:

    Promise<Tuple<Integer, Integer>> futureXY = futureX.zip(futureY);

The zip method returns an asynchronous value eventually resolved to a tuple containing the two resolved asynchronous values. It can be thought of as a way to join two asynchronous values without specifying any execution order between them. If you want to join more than two asynchronous values, you can use the zip method several times (for example, futureX.zip(futureY).zip(futureZ).zip(…)), but an alternative is to use the Future.sequence function:

    val futureXs: Future[Seq[Int]] = Future.sequence(Seq(futureX, futureY, futureZ, …))

This function transforms a sequence of future values into a future sequence value. In Java, this function is named Promise.sequence.

In the preceding descriptions, I always used the word eventually, and there is a reason for that. Indeed, if we use an asynchronous value to manipulate a result sent by a remote machine (such as a database system or a web service), the communication may eventually fail due to some technical issue (for example, if the network is down). For this reason, asynchronous values have error recovery methods; for example, the recover method:

    futureX.recover { case NonFatal(e) => y }

The recover method is also available in Java:

    futureX.recover((throwable) -> y);

The previous code resolves futureX to the value of y in the case of an error. Libraries performing remote calls (such as an HTTP client or a database client) return such asynchronous values when they are implemented in a non-blocking way. You should always check whether the libraries you use are blocking or not and keep in mind that, by default, Play is tuned to be efficient with non-blocking APIs. It is worth noting that JDBC is blocking. It means that the majority of Java-based libraries for database communication are blocking. Obviously, once you get a value of type Future[A] (or Promise<A>, in Java), there is no way to get the A value unless you wait (and block) for the value to be resolved.
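To tie these combinators together, the following is a small, self-contained Scala sketch of how they can be combined in a Play application. The two Future blocks below just wrap literal values; in real code, they would stand in for non-blocking remote calls, and the report format is purely illustrative:

    import scala.concurrent.Future
    import scala.util.control.NonFatal
    import play.api.libs.concurrent.Execution.Implicits.defaultContext

    // Two independent asynchronous computations, started concurrently
    val futureUser   = Future { "Alice" }            // stands in for a remote user lookup
    val futureOrders = Future { Seq("o-1", "o-2") }  // stands in for a remote order query

    // zip joins them without imposing an order between them, map transforms the
    // resolved pair, and recover supplies a fallback value if anything failed
    val futureReport: Future[String] =
      futureUser.zip(futureOrders)
        .map { case (user, orders) => s"$user has ${orders.size} orders" }
        .recover { case NonFatal(e) => "report unavailable" }

Note that futureReport is itself an asynchronous value; as explained next, an action that wants to return it as a response does so with Action.async rather than by blocking.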
We saw that the map and flatMap methods make it possible to manipulate the future A value, but you still end up with a Future[SomethingElse] value (or a Promise<SomethingElse>, in Java). It means that if your action's code calls an asynchronous API, it will end up with a Future[Result] value rather than a Result value. In that case, you have to use Action.async instead of Action, as illustrated in this typical code example:

    val asynchronousAction = Action.async { implicit request =>
      service.asynchronousComputation().map(result => Ok(result))
    }

In Java, there is nothing special to do; simply make your method return a Promise<Result> object:

    public static Promise<Result> asynchronousAction() {
      return service.asynchronousComputation().map((result) -> ok(result));
    }

Managing execution contexts

Because Play uses explicit concurrency control, controllers are also responsible for using the right execution context to run their action's code. Generally, as long as your actions do not invoke heavy computations or blocking APIs, the default execution context should work fine. However, if your code is blocking, it is recommended to use a distinct execution context to run it.

An application with two execution contexts (represented by the black and grey arrows); you can specify in which execution context each action should be executed, as explained in this section.

Unfortunately, there is no non-blocking standard API for relational database communication (JDBC is blocking). It means that all our actions that invoke code executing database queries should be run in a distinct execution context so that the default execution context is not blocked. This distinct execution context has to be configured according to your needs. In the case of JDBC communication, your execution context should be a thread pool with as many threads as your maximum number of connections. The following diagram illustrates such a configuration:

The preceding diagram shows two execution contexts, each with two threads. The execution context at the top of the figure runs database code, while the default execution context (at the bottom) handles the remaining (non-blocking) actions. In practice, it is convenient to use Akka to define your execution contexts, as they are easily configurable. Akka is a library used for building concurrent, distributed, and resilient event-driven applications. This article assumes that you have some knowledge of Akka; if that is not the case, do some research on it. Play integrates Akka and manages an actor system that follows your application's life cycle (that is, it is started and shut down with the application). For more information on Akka, visit http://akka.io.

Here is how you can create an execution context with a thread pool of 10 threads in your application.conf file:

    jdbc-execution-context {
      thread-pool-executor {
        core-pool-size-factor = 10.0
        core-pool-size-max = 10
      }
    }

You can use it as follows in your code:

    import play.api.libs.concurrent.Akka
    import play.api.Play.current

    implicit val jdbc =
      Akka.system.dispatchers.lookup("jdbc-execution-context")

The Akka.system expression retrieves the actor system managed by Play. Then, the execution context is retrieved using Akka's API.
The equivalent Java code is the following:

    import play.libs.Akka;
    import akka.dispatch.MessageDispatcher;
    import play.core.j.HttpExecutionContext;

    MessageDispatcher jdbc =
      Akka.system().dispatchers().lookup("jdbc-execution-context");

Note that controllers retrieve the current request's information from a thread-local static variable, so you have to attach it to the execution context's thread before using it from a controller's action:

    play.core.j.HttpExecutionContext.fromThread(jdbc)

Finally, forcing the use of a specific execution context for a given action can be achieved as follows (provided that my.execution.context is an implicit execution context):

    import my.execution.context

    val myAction = Action.async {
      Future { … }
    }

The Java equivalent code is as follows:

    public static Promise<Result> myAction() {
      return Promise.promise(
        () -> { … },
        HttpExecutionContext.fromThread(myExecutionContext));
    }

Does this feel like clumsy code? Buy the book to learn how to reduce the boilerplate!

Summary

This article detailed a lot of things about the internals of the framework. You now know that Play uses an evented execution model to process requests and serve responses, and that this implies that your code should not block the execution thread. You know how to use future blocks and promises to define computation fragments that can be concurrently managed by Play's execution context, and how to define your own execution context with a different threading policy, for example, if you are constrained to use a blocking API.

Resources for Article:

Further resources on this subject:
- Play! Framework 2 – Dealing with Content [article]
- So, what is Play? [article]
- Play Framework: Introduction to Writing Modules [article]

Creating Our First Universe

Packt
22 Sep 2014
18 min read
In this article, by Taha M. Mahmoud, the author of the book, Creating Universes with SAP BusinessObjects, we will learn how to run SAP BO Information Design Tool (IDT), and we will have an overview of the different views that we have in the main IDT window. This will help us understand the main function and purpose for each part of the IDT main window. Then, we will use SAP BO IDT to create our first Universe. In this article, we will create a local project to contain our Universe and other resources related to it. After that, we will use the ODBC connection. Then, we will create a simple Data Foundation layer that will contain only one table (Customers). After that, we will create the corresponding Business layer by creating the associated business objects. The main target of this article is to make you familiar with the Universe creation process from start to end. Then, we will detail each part of the Universe creation process as well as other Universe features. At the end, we will talk about how to get help while creating a new Universe, using the Universe creation wizard or Cheat Sheets. In this article, we will cover the following topics: Running the IDT Getting familiar with SAP BO IDT's interface and views Creating a local project and setting up a relational connection Creating a simple Data Foundation layer Creating a simple Business layer Publishing our first Universe Getting help using the Universe wizard and Cheat Sheets (For more resources related to this topic, see here.) Information Design Tool The Information Design Tool is a client tool that is used to develop BO Universes. It is a new tool released by SAP in BO release 4. There are many SAP BO tools that we can use to create our Universe, such as SAP BO Universe Designer Tool (UDT), SAP BO Universe Builder, and SAP BO IDT. SAP BO Universe designer was the main tool to create Universe since the release of BO 6.x. This tool is still supported in the current SAP BI 4.x release, and you can still use it to create UNV Universes. You need to plan which tool you will use to build your Universe based on the target solution. For example, if you need to connect to a BEX query, you should use the UDT, as the IDT can't do this. On the other hand, if you want to create a Universe query from SAP Dashboard Designer, then you should use the IDT. The BO Universe Builder used to build a Universe from a supported XML metadata file. You can use the Universe conversion wizard to convert the UNV Universe created by the UDT to the UNX Universe created by the IDT. Sometimes, you might get errors or warnings while converting a Universe from .unv to .unx. You need to resolve this manually. It is preferred that you convert a Universe from the previous SAP BO release XI 3.x instead of converting a Universe from an earlier release such as BI XI R2 and BO 6.5. There will always be complete support for the previous release. The main features of the IDT IDT is one of the major new features introduced in SAP BI 4.0. We can now build a Universe that combines data from multiple data sources and also build a dimensional universe on top of an OLAP connection. We can see also a major enhancement in the design environment by empowering the multiuser development environment. This will help designers work in teams and share Universe resources as well as maintain the Universe version control. 
For more information on the new features introduced in the IDT, refer to the SAP community network at http://wiki.scn.sap.com/ and search for SAP BI 4.0 new features and changes. The Information Design Tool interface We need to cover the following requirements before we create our first Universe: BO client tools are installed on your machine, or you have access to a PC with client tools already installed We have access to a SAP BO server We have a valid username and password to connect to this server We have created an ODBC connection for the Northwind Microsoft Access database Now, to run the IDT, perform the following steps: Click on the Start menu and navigate to All Programs. Click on the SAP BusinessObjects BI platform 4 folder to expand it. Click on the Information Design Tool icon, as shown in the following screenshot: The IDT will open and then we can move on and create our new Universe. In this section, we will get to know the different views that we have in the IDT. We can show or hide any view from the Window menu, as shown in the following screenshot: You can also access the same views from the main window toolbar, as displayed in the following screenshot: Local Projects The Local Projects view is used to navigate to and maintain local project resources, so you can edit and update any project resource, such as the relation connection, Data Foundation, and Business layers from this view. A project is a new concept introduced in the IDT, and there is no equivalent for it in the UDT. We can see the Local Projects main window in the following screenshot: Repository Resources You can access more than one repository using the IDT. However, usually, we work with only one repository at a time. This view will help you initiate a session with the required repository and will keep a list of all the available repositories. You can use repository resources to access and modify the secured connection stored on the BO server. You can also manage and organize published Universes. We can see the Repository Resources main window in the following screenshot: Security Editor Security Editor is used to create data and business security profiles. This can be used to add some security restrictions to be applied on BO users and groups. Security Editor is equivalent to Manage Security under Tools in the UDT. We can see the main Security Editor window in the following screenshot: Project Synchronization The Project Synchronization view is used to synchronize shared projects stored on the repository with your local projects. From this view, you will be able to see the differences between your local projects and shared projects, such as added, deleted, or updated project resources. Project Synchronization is one of the major enhancements introduced in the IDT to overcome the lack of the multiuser development environment in the UDT. We can see the Project Synchronization window in the following screenshot: Check Integrity Problems The Check Integrity Problems view is used to check the Universe's integrity. Check Integrity Problems is equivalent to Check Integrity under Tools in the UDT. Check Integrity Problems is an automatic test for your foundation layer as well as Business layer that will check the Universe's integrity. This wizard will display errors or warnings discovered during the test, and we need to fix them to avoid having any wrong data or errors in our reports. 
Check Integrity Problems is part of the BO best practices to always check and correct the integrity problems before publishing the Universe. We can see the Check Integrity window in the following screenshot: Creating your first Universe step by step After we've opened the IDT, we want to start creating our NorthWind Universe. We need to create the following three main resources to build a Universe: Data connection: This resource is used to establish a connection with the data source. There are two main types of connections that we can create: relational connection and OLAP connection. Data Foundation: This resource will store the metadata, such as tables, joins, and cardinalities, for the physical layer. The Business layer: This resource will store the metadata for the business model. Here, we will create our business objects such as dimensions, measures, attributes, and filters. This layer is our Universe's interface and end users should be able to access it to build their own reports and analytics by dragging-and-dropping the required objects. We need to create a local project to hold all the preceding Universe resources. The local project is just a container that will store the Universe's contents locally on your machine. Finally, we need to publish our Universe to make it ready to be used. Creating a new project You can think about a project such as a folder that will contain all the resources required by your Universe. Normally, we will start any Universe by creating a local project. Then, later on, we might need to share the entire project and make it available for the other Universe designers and developers as well. This is a folder that will be stored locally on your machine, and you can access it any time from the IDT Local Projects window or using the Open option from the File menu. The resources inside this project will be available only for the local machine users. Let's try to create our first local project using the following steps: Go to the File menu and select New Project, or click on the New icon on the toolbar. Select Project, as shown in the following screenshot: The New Project creation wizard will open. Enter NorthWind in the Project Name field, and leave the Project Location field as default. Note that your project will be stored locally in this folder. Click on Finish, as shown in the following screenshot: Now, you can see the NorthWind empty project in the Local Projects window. You can add resources to your local project by performing the following actions: Creating new resources Converting a .unv Universe Importing a published Universe Creating a new data connection Data connection will store all the required information such as IP address, username, and password to access a specific data source. A data connection will connect to a specific type of data source, and you can use the same data connection to create multiple Data Foundation layers. There are two types of data connection: relational data connection, which is used to connect to the relational database such as Teradata and Oracle, and OLAP connection, which is used to connect to an OLAP cube. To create a data connection, we need to do the following: Right-click on the NorthWind Universe. Select a new Relational Data Connection. Enter NorthWind as the connection name, and write a brief description about this connection. The best practice is to always add a description for each created object. 
Creating a new Data Foundation

After we have successfully created a relational connection to the Northwind Microsoft Access database, we can start creating our Data Foundation. A Data Foundation is a physical model that stores tables as well as the relations between them (joins). The Data Foundation in the IDT is equivalent to the physical data layer in the UDT. To create a new Data Foundation, right-click on the NorthWind project in the Local Projects window, select New Data Foundation, and perform the following steps:
1. Enter NorthWind as the resource name, and enter a brief description for the NorthWind Data Foundation.
2. Select Single Source Data Foundation.
3. Select the NorthWind.cnx connection.

After that, expand the NorthWind connection, navigate to NorthWind.accdb, and perform the following steps:
1. Navigate to the Customers table and drag it to an empty area in the Master view window on the right-hand side.
2. Save your Data Foundation. An asterisk (*) displayed beside the resource name indicates that it has been modified but not yet saved.

We can see the Connection panel in the NorthWind.dfx Universe resource in the following screenshot:

Creating a new Business layer

Now, we will create a simple Business layer based on the Customers table that we already added to the NorthWind Data Foundation. Each Business layer maps to exactly one Data Foundation. The Business layer in the IDT is equivalent to the business model in the UDT. To create a new Business layer, right-click on the NorthWind project and select New Business Layer from the menu. Then, we need to perform the following steps:
1. The first step is to select the type of data source that we will use. In our case, select Relational Data Foundation, as shown in the following screenshot:
2. Enter NorthWind as the resource name and a brief description for our Business layer.
3. In the next Select Data Foundation window, select the NorthWind Data Foundation from the list. Make sure that the Automatically create folders and objects option is selected, as shown in the following screenshot:
4. Now, you should be able to see the Customers folder under the NorthWind Business layer. If not, just drag the Customers table from the NorthWind Data Foundation and drop it under the NorthWind Business layer. Then, save the NorthWind Business layer, as shown in the following screenshot:

A new folder is created automatically for the Customers table, and it is populated with the corresponding dimensions.
The Business layer now needs to be published to the BO server so that end users can access it and build their own reports on top of our Universe. If you have successfully completed all the steps from the previous sections, the project folder should contain the relational data connection (NorthWind.cnx), the Data Foundation layer (NorthWind.dfx), and the Business layer (NorthWind.blx). The project should appear as displayed in the following screenshot:

Saving and publishing the NorthWind Universe

We need to perform one last step before we publish our first simple Universe and make it available to other Universe designers: we need to publish our relational data connection and save it in the repository instead of on our local machine. Publishing a connection makes it available to everyone on the server. Before publishing the Universe, we will replace the NorthWind.cnx resource in our project with a shortcut to the NorthWind secured connection stored on the SAP BO server. After publishing a Universe, other developers as well as business users will be able to see and access it from the SAP BO repository. Publishing a Universe from the IDT is equivalent to exporting a Universe from the UDT (navigate to File | Export).

To publish the NorthWind connection, right-click on the NorthWind.cnx resource in the Local Projects window and select Publish Connection to a Repository. As we don't have an active session with the BO server yet, you will need to initiate one by performing the following steps (a scripted logon sketch using the same parameters follows at the end of this section):
1. Create a new session.
2. Type your <system name: port number> in the System field.
3. Select the Authentication type.
4. Enter your username and password.

There are several authentication types, such as Enterprise, LDAP, Windows Active Directory (AD), and SAP. Enterprise authentication stores the user's security information inside the BO server, and the credentials can be used only to log in to BO. LDAP authentication, on the other hand, stores the user's security information in an LDAP server, so the same credentials can be used to log in to multiple systems; the BO server sends the user information to the LDAP server for authentication and grants access if the authentication succeeds. Windows AD works in a similar way, authenticating users against the security information stored in Active Directory. We can see the Open Session window in the following screenshot:

The default port number is 6400.

A pop-up window will inform you about the connection status (successful here) and ask whether you want to create a shortcut for this connection in the same project folder. We should select Yes in our case, because we need to link to the secured published connection instead of the local one; we will not be able to publish our Universe to the BO repository with a local connection. We can see the Publish Connection window in the following screenshot:

Finally, we need to link our Data Foundation layer to the secured connection instead of the local connection. To do this, open NorthWind.dfx, replace NorthWind.cnx with the NorthWind.cns connection shortcut, and then save your Data Foundation resource. We can see how to change the connection in NorthWind.dfx in the following screenshot:

After redirecting our Data Foundation layer to the newly created shortcut connection, go to the Local Projects window again, right-click on NorthWind.blx, and navigate to Publish | To a Repository.... The Check Integrity window will be displayed; just select Finish. Our Universe will be saved in the repository with the same name assigned to the Business layer. Congratulations! We have created our first Universe.
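For designers who want to confirm the system name, credentials, and authentication type outside the IDT, the BI platform also exposes RESTful web services that accept the same logon parameters. The sketch below is an assumption-heavy illustration rather than a definitive recipe: the /biprws/logon/long endpoint, the JSON payload fields, and the default RESTful port (often 6405, which is different from the CMS port 6400 mentioned above) should all be verified against your own system's RESTful Web Services documentation before use.

```
# Hypothetical sketch: logging on to the BI platform RESTful web services
# with the same system/credential/authentication details used in the
# Open Session window. Endpoint path, port, and payload fields are
# assumptions -- confirm them with your BO administrator/documentation.
import requests

BASE_URL = "http://boserver:6405/biprws"   # assumed RESTful host and port

payload = {
    "userName": "Administrator",     # same username as in the IDT session
    "password": "password",          # placeholder credential
    "auth": "secEnterprise",         # Enterprise; secLDAP/secWinAD for others
}
headers = {"Content-Type": "application/json", "Accept": "application/json"}

response = requests.post(f"{BASE_URL}/logon/long", json=payload, headers=headers)
response.raise_for_status()

# On success, the logon token is returned in a response header and must be
# sent with every subsequent request.
token = response.headers.get("X-SAP-LogonToken")
print("Logon token received:", bool(token))
```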
Finding help while creating a Universe

In most cases, you will use the step-by-step approach to create a Universe. However, there are two other ways to create one. In this section, we will create the NorthWind Universe again, this time using the Universe wizard and Cheat Sheets.

The Universe wizard

The Universe wizard simply launches the project, connection, Data Foundation, and Business layer wizards in sequence; we already explained each of these wizards individually in an earlier section. Each wizard collects the information required to create the associated Universe resource. For example, the project wizard ends after collecting the information required to create a project, and the project folder is created as its output. The Universe wizard launches all the mentioned wizards and ends after collecting all the information required to create the Universe; a Universe with all the required resources is created when the wizard finishes. The Universe wizard is equivalent to the Quick Design wizard in the UDT. You can open the Universe wizard from the welcome screen or from the File menu. As practice, we can create the NorthWind2 Universe using the Universe wizard.

The Universe wizard and the welcome screen are new features in SAP BO 4.1.

Cheat Sheets

Cheat Sheets are another way of getting help while you are building your Universe. They provide step-by-step guidance and detailed descriptions that help you create your relational Universe. We need to perform the following steps to use Cheat Sheets to build the NorthWind3 Universe, which is exactly the same as the NorthWind Universe that we created earlier using the step-by-step approach:
1. Go to the Help menu and select Cheat Sheets.
2. Follow the steps in the Cheat Sheets window to create the NorthWind3 Universe using the same information that we used to complete the NorthWind Universe. If you face any difficulties in completing a step, just click on the Click to perform button to guide you. Click on the Click when completed link to move to the next step.

Cheat Sheets are a new help method introduced in the IDT, and there is no equivalent in the UDT. We can see the Cheat Sheets window in the following screenshot:

Summary

In this article, we discussed the different IDT views and got familiar with the IDT user interface. We then had an overview of the Universe creation process from start to finish. In real-life project environments, the first step is to create a local project to hold all the related Universe resources. We then initiated the project by adding the three main resources required by every Universe: the data connection, the Data Foundation, and the Business layer. After that, we published our Universe to make it available to other Universe designers and users. This is done by publishing the data connection first and then redirecting the Data Foundation layer to refer to a shortcut to the shared secured published connection. At this point, we are able to publish and share our Universe.
We also learned how to use the Universe wizard and Cheat Sheets to create a Universe.

Further resources on this subject:
- Report Data Filtering [Article]
- Exporting SAP BusinessObjects Dashboards into Different Environments [Article]
- SAP BusinessObjects: Customizing the Dashboard [Article]