Ruby Strings

Packt
06 Jul 2017
9 min read
In this article by Jordan Hudgens, the author of the book Comprehensive Ruby Programming, you'll learn about the Ruby String data type and walk through how to integrate string data into a Ruby program. Working with words, sentences, and paragraphs is a common requirement in many applications. Additionally, you will learn how to employ string manipulation techniques using core Ruby methods and how to work with the string data type in Ruby.

Using strings in Ruby

A string is a data type in Ruby that contains a set of characters, typically normal English text (or whatever natural language you're building your program for). A key point of string syntax is that a string has to be enclosed in single or double quotes if you want to use it in a program; the program will throw an error if it is not wrapped inside quotation marks. Let's walk through three scenarios.

Missing quotation marks

In this code, I tried to simply declare a string without wrapping it in quotation marks. As you can see, this results in an error. The error occurs because Ruby thinks that the values are classes and methods.

Printing strings

In this code snippet, we're printing out a string that we have properly wrapped in quotation marks. Please note that both single and double quotation marks work properly. It's also important that you do not mix the quotation mark types. For example, if you attempted to run the code:

puts "Name an animal'

you would get an error, because you need to ensure that every quotation mark is matched with a closing (and matching) quotation mark. If you start a string with double quotation marks, the Ruby parser requires that you end the string with matching double quotation marks.

Storing strings in variables

Lastly, in this code snippet we're storing a string inside a variable and then printing the value out to the console. We'll talk more about strings and string interpolation in subsequent sections.

String interpolation guide for Ruby

In this section, we are going to talk about string interpolation in Ruby.

What is string interpolation?

So what exactly is string interpolation? Good question. String interpolation is the process of seamlessly integrating dynamic values into a string. Let's assume we want to slip dynamic words into a string. We can get input from the console, store that input in variables, and then call the variables inside a pre-existing string. For example, let's give a sentence the ability to change based on a user's input:

puts "Name an animal"
animal = gets.chomp
puts "Name a noun"
noun = gets.chomp
p "The quick brown #{animal} jumped over the lazy #{noun}"

Notice the way I insert variables inside the string? They are enclosed in curly brackets and preceded by a # sign. If I run this code, the output substitutes the two values the user typed into the sentence. So, this is how you insert values dynamically into your sentences. If you use sites like Twitter, you sometimes see personalized messages such as Good morning Jordan or Good evening Tiffany. This type of behavior is made possible by inserting a dynamic value into a fixed part of a string, and it leverages string interpolation. Now, let's use single quotes instead of double quotes to see what happens. As you'll see, the string is printed as it is, without inserting the values for animal and noun. This is exactly what happens when you try using single quotes: the entire string is printed as it is, without any interpolation.
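Here is a minimal sketch of that difference; the sample values fox and dog simply stand in for whatever the user would type at the prompts:

# Double quotes interpolate the variables into the string.
animal = "fox"
noun = "dog"
puts "The quick brown #{animal} jumped over the lazy #{noun}"
# => The quick brown fox jumped over the lazy dog

# Single quotes print the string exactly as written, with no interpolation.
puts 'The quick brown #{animal} jumped over the lazy #{noun}'
# => The quick brown #{animal} jumped over the lazy #{noun}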
Therefore, it's important to remember the difference. Another interesting aspect is that anything inside the curly brackets can be Ruby code. So, technically, you can type your entire algorithm inside these curly brackets and Ruby will run it perfectly for you. However, it is not recommended for practical programming purposes. For example, I can insert a math equation inside the curly brackets, and the resulting string contains the computed value.

String manipulation guide

In this section, we are going to learn about string manipulation, along with a number of examples of how to integrate string manipulation methods into a Ruby program.

What is string manipulation?

So what exactly is string manipulation? It's the process of altering the format or value of a string, usually by leveraging string methods.

String manipulation code examples

Let's start with an example. Let's say I want my application to always display the word Astros in capital letters. To do that, I simply write:

"Astros".upcase

Now, if I always want a string to be in lowercase letters, I can use the downcase method, like so:

"Astros".downcase

Those are both methods I use quite often. However, there are other string methods at our disposal. For the rare times when you want to literally swap the case of the letters, you can leverage the swapcase method:

"Astros".swapcase

And lastly, if you want to reverse the order of the letters in the string, you can call the reverse method:

"Astros".reverse

These methods are built into the String class, and we can call them on any string value in Ruby.

Method chaining

Another neat thing we can do is join different methods together to get custom output. For example, I can run:

"Astros".reverse.upcase

The preceding code displays the value SORTSA. This practice of combining different methods with a dot is called method chaining.

Split, strip, and join guides for strings

In this section, we are going to walk through how to use the split and strip methods in Ruby. These methods will help us clean up strings and convert a string to an array so we can access each word as its own value.

Using the strip method

Let's start off by analyzing the strip method. Imagine that the input you get from the user or from the database is poorly formatted and contains whitespace before and after the value. To clean the data up, we can use the strip method. For example:

str = " The quick brown fox jumped over the quick dog "
p str.strip

When you run this code, the output is just the sentence without the whitespace before and after the words.

Using the split method

Now let's walk through the split method. The split method is a powerful tool that allows you to split a sentence into an array of words or characters. For example, when you type the following code:

str = "The quick brown fox jumped over the quick dog"
p str.split

you'll see that it converts the sentence into an array of words. This method can be particularly useful for long paragraphs, especially when you want to know the number of words in the paragraph. Since the split method converts the string into an array, you can use all the array methods, such as size, to see how many words were in the string. We can leverage method chaining to find out how many words are in the string, like so:

str = "The quick brown fox jumped over the quick dog"
p str.split.size

This should return a value of 9, which is the number of words in the sentence.
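As a small sketch tying the strip and split methods together (the padded sentence is just an illustrative input), we can chain them on untrimmed text:

str = "   The quick brown fox jumped over the quick dog   "
words = str.strip.split   # trim the surrounding whitespace, then split into words
p words.size              # => 9
p words.first             # => "The"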
To know the number of letters, we can pass an optional argument to the split method and use the following format:

str = "The quick brown fox jumped over the quick dog"
p str.split(//).size

And if you want to see all of the individual letters, we can remove the size method call, like this:

p str.split(//)

Notice that the output also includes spaces as individual characters, which may or may not be what you want a program to return. This method can be quite handy while developing real-world applications. A good practical example of this is Twitter. Since this social media site restricts users to 140 characters, a method like this is sure to be a part of the validation code that counts the number of characters in a Tweet.

Using the join method

We've walked through the split method, which allows you to convert a string into a collection of characters. Thankfully, Ruby also has a method that does the opposite: join allows you to convert an array of characters into a single string. Let's imagine a situation where we're asked to reverse the words in a string. This is a common Ruby coding interview question, so it's an important concept to understand, since it tests your knowledge of how strings work in Ruby. Let's imagine that we have a string, such as:

str = "backwards am I"

And we're asked to reverse the words in the string. The pseudocode for the algorithm would be:

Split the string into words
Reverse the order of the words
Merge all of the split words back into a single string

We can actually accomplish each of these requirements in a single line of Ruby code. The following code snippet will perform the task:

str.split.reverse.join(' ')

This code converts the single string into an array of strings; for this example, it will equal ["backwards", "am", "I"]. From there, it reverses the order of the array elements, so the array will equal ["I", "am", "backwards"]. With the words reversed, we simply need to merge the words into a single string, which is where the join method comes in. Running the join method converts all of the words in the array into one string.

Summary

In this article, we were introduced to the string data type and how it can be utilized in Ruby. We analyzed how to pass strings into Ruby processes by leveraging string interpolation. We also learned the methods of basic string manipulation and how to find and replace string data. We analyzed how to break strings into smaller components, along with how to clean up string-based data. We even introduced the Array class in this article.

Resources for Article:

Further resources on this subject:

Ruby and Metasploit Modules [article]
Find closest mashup plugin with Ruby on Rails [article]
Building tiny Web-applications in Ruby using Sinatra [article]

Oracle E-Business Suite: Creating Items in Inventory

Packt
18 Aug 2011
7 min read
Oracle E-Business Suite 12 Financials is a solution that provides out-of-the-box features to meet global financial reporting and tax requirements with one accounting, tax, banking, and payments model, and makes it easy to operate shared services across businesses and regions. In this article by Yemi Onigbode, author of Oracle E-Business Suite 12 Financials Cookbook, we will start with recipes for creating Items. We will cover:

Creating Items
Exploring Item attributes
Creating Item templates
Exploring Item controls

Introduction

An organization's operations include the buying and selling of products and services. Items can represent the products and services that are purchased and sold in an organization. Let's start by looking at the Item creation process. The Item Requester (the person who requests an Item) completes an Item Creation Form, which should contain information such as:

Costing information
Pricing information
Item and product categories
Details of some of the Item attributes
The inventory organization details

Once complete, a message is sent to the Master Data Manager (the person who maintains the master data) to create the Item. The message could be sent by fax, e-mail, and so on. The Master Data Manager reviews the form and enters the details of the Item into Oracle E-Business Suite by creating the Item. Once complete, a message is sent to the Item Requester, who then reviews the Item setup on the system. Let's look at how Items are created and explore the underlying concepts concerning the creation of Items.

Creating Items

Oracle Inventory provides us with the functionality to create Items. Sets of attributes are assigned to an Item. The attributes define the characteristics of the Item. A group of attribute values defines a template, and a template can be assigned to an Item to automatically define its set of attribute values. An Item template defines the Item Type. For example, a Finished Good template will identify certain characteristics that define the Item as a finished good, with attributes such as "Inventory Item" and "Stockable" set to a value of "Yes". Let's look at how to create an Item in Oracle Inventory. We will also assign a Finished Good template to the Item.

Getting ready

Log in to Oracle E-Business Suite R12 with the username and password assigned to you by the System Administrator. If you are working on the Vision demonstration database, you can use OPERATIONS/WELCOME as the USERNAME/PASSWORD. Then:

Select the Inventory Responsibility.
Select the V1 Inventory Organization.

How to do it...

Let's list the steps required to create an Item:

Navigate to Items | Master Items. Please note that Items are defined in the Master Organization.
Enter the Item code, for example, PRD20001.
Enter a description for the Item.
Select Copy From from the Tools menu (or press Alt+T). We are going to copy the attributes from the Finished Good template. We could also copy attributes from an existing Item.
Enter Finished Good, click on the Apply button (or press Alt+A), and click on the Done button.
Save the Item definition by clicking on the Save icon (or press Ctrl+S).

How it works...

Items contain attributes, and attributes contain information about an Item.
Attributes can be controlled centrally at the Master Organization level or at the Inventory Organization level.

There's more...

Once the Item is created, we need to assign it to a category and an inventory organization.

Assigning Items to inventory organizations

For us to be able to perform transactions with the Item in the inventory, we need to assign the Item to an inventory organization. We can also use the organization Item form to change the attributes at the organization level. For example, an Item may be classified as raw materials in one organization and as finished goods in another organization.

From the Tools menu, select Organization Assignment.
Select the inventory organization for the Item, for example, A1–ACME Corporation.
Click on the Assigned checkbox.
Save the assignment.

Assigning Items to categories

When an Item is created, it is assigned to a default category. However, you may want to perform transactions with the Item in more than one functional area, such as Inventory, Purchasing, Cost Management, Service, Engineering, and so on. You need to assign the Item to the relevant functional area. A category within a functional area is a logical classification of Items with similar characteristics.

From the Tools menu, select Categories.
Select the Category Set, Control Level, and Category combination to assign to the Item.
Save the assignment.

Exploring Item attributes

There are more than 250 Item attributes grouped into 17 main attribute groups. In this recipe, we will explore the main groups that are used within the financial modules.

How to do it...

Let's explore some Item attributes:

Search for the Finished Good Item by navigating to Items | Master Items. Click on the Find icon, enter the Item code, and click on the Find button to search for the Item.
Select the tabs to review each of the attribute groups.
In the Main tab, check that the Item Status is Active. We can also enter a long description in the Long Description field. The default value of the primary Unit of Measure (UOM) can be defined in the INV: Default Primary Unit of Measure profile option. The value can be overwritten when creating the Item. The Primary UOM is the default UOM used in other modules; for example, in Receivables it is used for invoices and credit memos.
In the Inventory tab, check that the following are enabled:
Inventory Item: It enables the Item to be transacted in Inventory. The default Inventory Item category is automatically assigned to the Item, if enabled.
Stockable: It enables the Item to be stocked in Inventory.
Transactable: Order Management uses this flag to determine how returns are transacted in Inventory.
Reservable: It enables the reservation of Items during transactions, for example, during order entry in Order Management.
In the Costing tab, check that the following are enabled:
Costing: It enables the accounting for Item costs. It can be overridden in the Cost Management module if average costing is used.
Cost of Goods Sold Account: The cost of goods sold account is entered. This is a general ledger account, and the value defaults from the Organization parameters.
In the Purchasing tab, enter a Default Buyer for the purchase orders, a List Price, and an Expense Account. Check that the following are enabled:
Purchased: It enables us to purchase and receive the Item.
Purchasable: It enables us to create a Purchase Order for the Item.
Allow Description Update: It enables us to change the description of the Item when raising the Purchase Order.
RFQ Required: Set this value to Yes to require a quotation for this Item.
Taxable: Set this value to Yes with the Input Tax Classification Code as VAT–15%. This can be used with the default rules in E-Tax.
Invoice Matching: Receipt Required–Yes. This allows for three-way matching.
In the Receiving tab, review the controls.
In the Order Management tab, check that the following are enabled:
Customer Ordered: This enables us to define prices for an Item assigned to a price list.
Customer Orders Enabled: This enables us to sell the Item.
Shippable: This enables us to ship the Item to the Customer.
Internal Ordered: This enables us to order an Item via internal requisitions.
Internal Orders Enabled: This enables us to temporarily exclude an Item from internal requisitions.
OE Transactable: This is used for demand management of an Item.
In the Invoicing tab, enter values for the Accounting Rule, Invoicing Rule, Output Tax Classification Code, and Payment Terms. Enter the Sales Account code and check that the Invoiceable Item and Invoice Enabled checkboxes are enabled.

Creating Mutable and Immutable Classes in Swift

Packt
20 Jan 2016
8 min read
In this article by Gastón Hillar, author of the book Object-Oriented Programming with Swift, we will learn how to create mutable and immutable classes in Swift.

Creating mutable classes

So far, we have worked with different types of properties. When we declare stored instance properties with the var keyword, we create a mutable instance property, which means that we can change its value for each new instance we create. When we create an instance of a class that defines many public stored properties, we create a mutable object, which is an object that can change its state.

For example, let's think about a class named MutableVector3D that represents a mutable 3D vector with three public stored properties: x, y, and z. We can create a new MutableVector3D instance and initialize the x, y, and z attributes. Then, we can call the sum method with the delta values for x, y, and z as arguments. The delta values specify the difference between the existing value and the new or desired value. So, for example, if we specify a positive value of 30 in the deltaX parameter, it means we want to add 30 to the X value. The following lines declare the MutableVector3D class that represents the mutable version of a 3D vector in Swift:

public class MutableVector3D {
    public var x: Float
    public var y: Float
    public var z: Float

    init(x: Float, y: Float, z: Float) {
        self.x = x
        self.y = y
        self.z = z
    }

    public func sum(deltaX: Float, deltaY: Float, deltaZ: Float) {
        x += deltaX
        y += deltaY
        z += deltaZ
    }

    public func printValues() {
        print("X: \(self.x), Y: \(self.y), Z: \(self.z)")
    }
}

Note that the declaration of the sum instance method uses the func keyword, specifies the arguments with their types enclosed in parentheses, and then declares the body of the method enclosed in curly brackets. The public sum instance method receives the delta values for x, y, and z (deltaX, deltaY, and deltaZ) and mutates the object, which means that the method changes the values of x, y, and z. The public printValues method prints the values of the three stored instance properties: x, y, and z.

The following lines create a new MutableVector3D instance called myMutableVector, initialized with values for the x, y, and z properties. Then, the code calls the sum method with the delta values for x, y, and z as arguments and finally calls the printValues method to check the new values after the object has mutated with the call to the sum method:

var myMutableVector = MutableVector3D(x: 30, y: 50, z: 70)
myMutableVector.sum(20, deltaY: 30, deltaZ: 15)
myMutableVector.printValues()

The initial values for the myMutableVector fields are 30 for x, 50 for y, and 70 for z. The sum method changes the values of the three stored instance properties; therefore, the object state mutates as follows:

myMutableVector.x mutates from 30 to 30 + 20 = 50
myMutableVector.y mutates from 50 to 50 + 30 = 80
myMutableVector.z mutates from 70 to 70 + 15 = 85

The values for the myMutableVector fields after the call to the sum method are 50 for x, 80 for y, and 85 for z. We can say that the method mutated the object's state; therefore, myMutableVector is a mutable object and an instance of a mutable class. It's a very common requirement to generate a 3D vector with all the values initialized to 0, that is, x = 0, y = 0, and z = 0. A 3D vector with these values is known as an origin vector.
We can add a type method to the MutableVector3D class named originVector to generate a new instance of the class initialized with all the values set to 0. Type methods are also known as class or static methods in other object-oriented programming languages. It is necessary to add the class keyword before the func keyword to generate a type method instead of an instance method. The following lines define the originVector type method:

public class func originVector() -> MutableVector3D {
    return MutableVector3D(x: 0, y: 0, z: 0)
}

The preceding method returns a new instance of the MutableVector3D class with 0 as the initial value for all three elements. The following lines call the originVector type method to generate a 3D vector, call the sum method on the generated instance, and finally call the printValues method to check the values of the three elements in the Playground:

var myMutableVector2 = MutableVector3D.originVector()
myMutableVector2.sum(5, deltaY: 10, deltaZ: 15)
myMutableVector2.printValues()

Creating immutable classes

Mutability is very important in object-oriented programming. In fact, whenever we expose mutable properties, we create a class that will generate mutable instances. However, sometimes a mutable object can become a problem, and in certain situations we want to prevent objects from changing their state. For example, when we work with concurrent code, an object that cannot change its state solves many concurrency problems and avoids potential bugs.

For example, we can create an immutable version of the previous MutableVector3D class to represent an immutable 3D vector. The new ImmutableVector3D class has three immutable instance properties declared with the let keyword instead of the previously used var keyword: x, y, and z. We can create a new ImmutableVector3D instance and initialize the immutable instance properties. Then, we can call the sum method with the delta values for x, y, and z as arguments. The public sum instance method receives the delta values for x, y, and z (deltaX, deltaY, and deltaZ) and returns a new instance of the same class with the values of x, y, and z initialized with the results of the sum. The following lines show the code of the ImmutableVector3D class:

public class ImmutableVector3D {
    public let x: Float
    public let y: Float
    public let z: Float

    init(x: Float, y: Float, z: Float) {
        self.x = x
        self.y = y
        self.z = z
    }

    public func sum(deltaX: Float, deltaY: Float, deltaZ: Float) -> ImmutableVector3D {
        return ImmutableVector3D(x: x + deltaX, y: y + deltaY, z: z + deltaZ)
    }

    public func printValues() {
        print("X: \(self.x), Y: \(self.y), Z: \(self.z)")
    }

    public class func equalElementsVector(initialValue: Float) -> ImmutableVector3D {
        return ImmutableVector3D(x: initialValue, y: initialValue, z: initialValue)
    }

    public class func originVector() -> ImmutableVector3D {
        return equalElementsVector(0)
    }
}

In the new ImmutableVector3D class, the sum method returns a new instance of the ImmutableVector3D class, that is, the current class. In this case, the originVector type method returns the result of calling the equalElementsVector type method with 0 as an argument. The equalElementsVector type method receives an initialValue argument for all the elements of the 3D vector, creates an instance of the actual class, and initializes all the elements with the received value.
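As a quick usage sketch (assuming the ImmutableVector3D class above is defined in the same Playground; the value 5 is arbitrary):

// Creates an immutable vector with x, y, and z all set to 5 and prints it;
// the instance itself can never change after initialization.
let equalVector = ImmutableVector3D.equalElementsVector(5)
equalVector.printValues()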
The originVector type method demonstrates how we can call another type method within a type method. Note that both type methods specify the returned type with -> followed by the type name (ImmutableVector3D) after the arguments enclosed in parentheses. The following line shows the declaration of the equalElementsVector type method with the specified return type:

public class func equalElementsVector(initialValue: Float) -> ImmutableVector3D {

The following lines call the originVector type method to generate an immutable 3D vector named vector0, call the sum method on the generated instance, and save the returned instance in the new vector1 variable. The call to the sum method generates a new instance and doesn't mutate the existing object:

var vector0 = ImmutableVector3D.originVector()
var vector1 = vector0.sum(5, deltaY: 10, deltaZ: 15)
vector1.printValues()

The code doesn't allow the users of the ImmutableVector3D class to change the values of the x, y, and z properties declared with the let keyword. The code doesn't compile if you try to assign a new value to any of these properties after they have been initialized. Thus, we can say that the ImmutableVector3D class is 100 percent immutable. Finally, the code calls the printValues method on the returned instance (vector1) to check the values of the three elements in the Playground.

The immutable version adds an overhead compared with the mutable version because it is necessary to create a new instance of the class as a result of calling the sum method. The previously analyzed mutable version just changed the values of the attributes, and it wasn't necessary to generate a new instance. Obviously, the immutable version has both a memory and a performance overhead. However, when we work with concurrent code, it makes sense to pay for the extra overhead to avoid potential issues caused by mutable objects. We just have to make sure we analyze the advantages and tradeoffs in order to decide which is the most convenient way of coding our specific classes.

Summary

In this article, we learned how to create mutable and immutable classes in Swift.

Resources for Article:

Further resources on this subject:

Exploring Swift [article]
The Swift Programming Language [article]
Playing with Swift [article]

Blender 2.5: Detailed Render of the Earth from Space

Packt
25 May 2011
10 min read
This article is taken from Blender 2.5 HOTSHOT. Our purpose is to create a very detailed view of the earth from space. By detailed, we mean that it includes land, oceans, and clouds, and not only the color and specular reflection, but also the roughness they seem to have when seen from space. For this project, we are going to perform some work with textures and get them properly set up for our needs (and also for Blender's way of working).

What Does It Do?

We will create a nice image of the earth resembling the beautiful pictures that are taken from orbit, showing the sun rising over the rim of the planet. For this, we will need to work carefully with some textures, set up a basic scene, and create a fairly complex setup of nodes for compositing the final result. In our final image, we will get very nice effects, such as the volumetric effect of the atmosphere that we can see around the rim, the strong highlight of the sun when rising over the rim of the earth, and the very calm, bluish look of the dark part of the earth when lit by the moon.

Why Is It Awesome?

With this project, we are going to understand how important it is to have good textures to work with. Having the right textures for the job saves lots of time when producing a high-quality rendered image. Not only are we going to work with some very good textures that are freely available on the Internet, but we are also going to perform some hand tweaking to get them tuned exactly as we need them. This way, we can also learn how much time can be saved by just doing some preprocessing on the textures to create finalized maps that will be fed directly to the material, without having to resort to complex tricks that would only cause us headaches. One of the nicest aspects of this project is that we are going to see how far we can take a very simple scene by using the compositor in Blender. We are definitely going to learn some useful tricks for compositing.

Your Hotshot Objectives

This project will be tackled in five parts:

Preprocessing the textures
Object setup
Lighting setup
Compositing preparation
Compositing

Mission Checklist

The key to the success of our project is getting the right set of quality images at a sufficiently high resolution. Let's go to www.archive.org and search for www.oera.net/How2.htm on the Wayback Machine. Choose the snapshot from the Apr 18, 2008 link and click on the image titled Texture maps of the Earth and Planets. Once there, let's download these images:

Earth texture natural colors
Earth clouds
Earth elevation/bump
Earth water/land mask

Remember to save the high-resolution version of the images and put them in the tex folder, inside the project's main folder. We will also need to use Gimp to perform the preprocessing of the textures, so let's make sure to have it installed. We'll be working with version 2.6.

Preprocessing the Textures

The textures we downloaded are quite good, both in resolution and in the way they clearly separate each aspect of the shading of the earth. There is a catch, though: using the clouds, elevation, and water/land textures as they are will cause us a lot of headaches inside Blender. So let's perform some basic preprocessing to get finalized and separated maps for each channel of the shader that will be created.

Engage Thrusters

For each one of the textures that we're going to work on, let's make sure to get the previous one closed, to avoid mixing the wrong textures.
Clouds Map

Drag the EarthClouds_2500x1250.jpg image from the tex folder into the empty window of Gimp to get it loaded. Now locate the Layers window, right-click on the thumbnail of the Background layer, and select the entry labeled Add Layer Mask... from the menu. In the dialog box, select the Grayscale copy of layer option. Once the mask is added to the layer, the black part of the texture should look transparent.

If we take a look at the image after adding the mask, we'll notice the clouds seem to have too much transparency. To solve this, we will perform some adjustment directly on the mask of the layer. Go to the Layers window and click on the thumbnail of the mask (the one on the right-hand side) to make it active (its border should become white). Then go to the main window (the one containing the image) and go to Colors | Curves.... In the Adjust Color Curves dialog, add two control points and shape the curve so that the light gray pixels of the mask become lighter and the dark ones darker; the strong slope between the two control points will make the border of the mask sharper. Make sure that the Value channel is selected and click on OK. Now let's take a look at the image and see how strong the contrast is and how well defined the clouds are now.

Finally, let's go to Image | Mode | RGB to set the internal data format of the image to a safe format (thus avoiding the risk of having Blender confused by it). Now we only need to go to File | Save A Copy... and save it as EarthClouds.png in the tex folder of the project. In the dialogs asking for confirmation, make sure to tell Gimp to apply the layer mask (click on Export in the first dialog). For the settings of the PNG file, we can use the default values. Let's close the current image in Gimp and get the main window empty in order to start working on the next texture.

Specular Map

Let's start by dragging the image named EarthMask_2500x1250.jpg onto the main window of Gimp to get it open. Then drag the EarthClouds_2500x1250.jpg image over the previous one to get it added as a separate layer in Gimp. Now we need to make sure that the images are correctly aligned. To do this, let's go to View | Zoom | 4:1 (400%) to be able to move the layer with pixel precision easily. Now go to the bottom right-hand corner of the window and click-and-drag over the four-arrows icon until the part of the image shown in the viewport is one of the corners. After locating the right place, let's go to the Toolbox and activate the Move tool. Finally, we just need to drag the clouds layer so that its corner exactly matches the corner of the water/land image. Then let's switch to another zoom level by going to View | Zoom | 1:4 (25%).

Now let's go to the Layers window, select the EarthClouds layer, and set its blending mode to Multiply (the Mode drop-down above the layers list). Then we just need to go to the main window and go to Colors | Invert. Finally, let's switch the image to RGB mode by going to Image | Mode | RGB, and we are done with the processing. Remember to save the image as EarthSpecMap.jpg in the tex folder of the project and close it in Gimp.

The purpose of creating this specular map is to correctly mix the specularity of the ocean (full) with that of the clouds above the ocean (null). This way, we get correct specularity both in the ocean and in the clouds.
If we just used the water/land mask to control specularity, then the clouds above the ocean would have specular reflection, which is wrong.

Bump Map

The bump map controls the roughness of the material; this one is very important, as it adds a lot of detail to the final render without having to create actual geometry to represent it. First, drag EarthElevation_2500x1250.jpg to the main window of Gimp to get it open. Then let's drag the EarthClouds_2500x1250.jpg image over the previous one, so that it gets loaded as a layer above the first one. Now zoom in by going to View | Zoom | 4:1 (400%). Drag the image so that you are able to see one of its corners and use the Move tool to get the clouds layer exactly matching the elevation layer. Then switch back to a wider view by going to View | Zoom | 1:4 (25%).

Now it's time to add a mask to the clouds layer. Right-click on the clouds layer, select the Add Layer Mask... entry from the menu, then select the Grayscale copy of layer option in the dialog box and click Add. What we have thus far is a map that defines how intense the roughness of the surface will be at each point. But there is a problem: the clouds are as bright as, or even brighter than, the Andes and the Himalayas, which means the render process will distort them quite a lot. Since we know that the intensity of the roughness on the clouds must be less, let's perform another step to correct the map accordingly.

Let's select the left thumbnail of the clouds layer (the color channel of the layer), then go to the main window and open the color levels using the Levels tool by going to Colors | Levels.... In the Output Levels part of the dialog box, change the value 255 (on the right-hand side) to 66 and then click on OK. Now we have a map that clearly gives a stronger value to the highest mountains on earth than to the clouds, which is exactly what we needed. Finally, we just need to change the image mode to RGB (Image | Mode | RGB) and save it as EarthBumpMap.jpg in the tex folder of the project.

Notice that we are mixing the bump maps of the clouds and the mountains. The reason for this is that working with separate bump maps would get us into a very tricky situation inside Blender; working with a single bump map is definitely way easier than trying to mix two or more. Now we can close Gimp, since we will work exclusively within Blender from now on.

Objective Complete - Mini Debriefing

This part of the project was just a preparation of the textures. We must create these new textures for three reasons:

To give the clouds texture a proper alpha channel; this will save us trouble when working with it in Blender.
To control the spec map properly in the regions where there are clouds, as the clouds must not have specular reflection.
To create a single, unified bump map for the whole planet. This will save us lots of trouble when controlling the Normal channel of the material in Blender.

Notice that we are using the term "bump map" to refer to a texture that will be used to control the "normal" channel of the material. The reason for not calling it a "normal map" is that a normal map is a special kind of texture that isn't coded in grayscale, like our current texture is.

Building A Movie API with Express

Packt
18 Feb 2016
22 min read
We will build a movie API that allows you to add actor and movie information to a database and connect actors with movies, and vice versa. This will give you a hands-on feel for what Express.js offers. We will cover the following topics in this article:

Folder structure and organization
Responding to CRUD operations
Object modeling with Mongoose
Generating unique IDs
Testing

Folder structure and organization

Folder structure is a very controversial topic. Though there are many clean ways to structure your project, we will use the following layout for the remainder of our article:

article
+-- app.js
+-- package.json
+-- node_modules
|   +-- npm package folders
+-- src
|   +-- lib
|   +-- models
|   +-- routes
+-- test

Let's take a look at this in detail:

app.js: It is conventional to have the main app.js file in the root directory. The app.js file is the entry point of our application and will be used to launch the server.
package.json: As with any Node.js app, we have package.json in the root folder, specifying our application name and version as well as all of our npm dependencies.
node_modules: The node_modules folder and its contents are generated via npm installation and should usually be ignored in your version control of choice, because they depend on the platform the app runs on. Having said that, according to the npm FAQ, it is probably better to commit the node_modules folder as well: check node_modules into git for things you deploy, such as websites and apps; do not check node_modules into git for libraries and modules intended to be reused. Refer to the following article to read more about the rationale behind this: http://www.futurealoof.com/posts/nodemodules-in-git.html
src: The src folder contains all the logic of the application.
lib: Within the src folder, we have the lib folder, which contains the core of the application. This includes the middleware, the routes, and creating the database connection.
models: The models folder contains our Mongoose models, which define the structure and logic of the models we want to manipulate and save.
routes: The routes folder contains the code for all the endpoints the API is able to serve.
test: The test folder will contain our functional tests written with Mocha, as well as two other node modules, should and supertest, to make it easier to aim for 100 percent coverage.

Responding to CRUD operations

The term CRUD refers to the four basic operations one can perform on data: create, read, update, and delete. Express gives us an easy way to handle those operations by supporting the basic methods GET, POST, PUT, and DELETE:

GET: This method is used to retrieve existing data from the database. It can be used to read single or multiple rows (for SQL) or documents (for MongoDB) from the database.
POST: This method is used to write new data into the database, and it is common to include a JSON payload that fits the data model.
PUT: This method is used to update existing data in the database, and a JSON payload that fits the data model is often included for this method as well.
DELETE: This method is used to remove an existing row or document from the database.

Express 4 has changed dramatically from version 3. A lot of the core modules have been removed in order to make it even more lightweight and less dependent, so we have to explicitly require modules when needed. One helpful module is body-parser. It allows us to get a nicely formatted body when a POST or PUT HTTP request is received.
We have to add this middleware before our business logic in order to use its result later. We write the following in src/lib/parser.js:

var bodyParser = require('body-parser');

module.exports = function(app) {
  app.use(bodyParser.json());
  app.use(bodyParser.urlencoded({ extended: false }));
};

The preceding code is then used in src/lib/app.js as follows:

var express = require('express');
var app = express();
require('./parser')(app);
module.exports = app;

The following example allows you to respond to a GET request on http://host/path. Once a request hits our API, Express will run it through the necessary middleware as well as the following function:

app.get('/path/:id', function(req, res, next) {
  res.status(200).json({ hello: 'world' });
});

The first parameter is the path we want to handle with a GET function. The path can contain parameters prefixed with :. Those path parameters will then be parsed into the request object. The second parameter is the callback that will be executed when the server receives the request. This function gets populated with three parameters: req, res, and next.

The req parameter represents the HTTP request object that has been customized by Express and the middleware we added in our application. Using the path http://host/path/:id, suppose a GET request is sent to http://host/path/1?a=1&b=2. The req object would contain the following:

{
  params: { id: 1 },
  query: { a: 1, b: 2 }
}

The params object is a representation of the path parameters. The query object holds the query string, that is, the values stated after ? in the URL. In a POST request, there will often be a body in our request object as well, which includes the data we wish to place in our database.

The res parameter represents the response object for that request. Some methods, such as status() or json(), are provided in order to tell Express how to respond to the client. Finally, the next() function will execute the next middleware defined in our application.

Retrieving an actor with GET

Retrieving a movie or actor from the database consists of submitting a GET request to the route /movies/:id or /actors/:id. We will need a unique ID that refers to a unique movie or actor:

app.get('/actors/:id', function(req, res, next) {
  //Find the actor object with this :id
  //Respond to the client
});

Here, the URL parameter :id will be placed in our request object. Since we called the first variable in our callback function req as before, we can access the URL parameter by calling req.params.id. Since an actor may be in many movies and a movie may have many actors, we need a nested endpoint to reflect this as well:

app.get('/actors/:id/movies', function(req, res, next) {
  //Find all movies the actor with this :id is in
  //Respond to the client
});

If a bad GET request is submitted or no actor with the specified ID is found, then the appropriate status code, bad request 400 or not found 404, will be returned. If the actor is found, then success status 200 will be sent back along with the actor information. On a success, the response JSON will look like this:

{
  "_id": "551322589911fefa1f656cc5",
  "id": 1,
  "name": "AxiomZen",
  "birth_year": 2012,
  "__v": 0,
  "movies": []
}

Creating a new actor with POST

In our API, creating a new movie in the database involves submitting a POST request to /movies, or to /actors for a new actor:

app.post('/actors', function(req, res, next) {
  //Save new actor
  //Respond to the client
});

In this example, the user accessing our API sends a POST request with data that will be placed into request.body.
Here, we call the first variable in our callback function req. Thus, to access the body of the request, we call req.body. The request body is sent as a JSON string; if an error occurs, a 400 (bad request) status will be sent back. Otherwise, a 201 (created) status is sent to the response object. On a successful request, the response will look like the following:

{
  "__v": 0,
  "id": 1,
  "name": "AxiomZen",
  "birth_year": 2012,
  "_id": "551322589911fefa1f656cc5",
  "movies": []
}

Updating an actor with PUT

To update a movie or actor entry, we first create a new route and submit a PUT request to /movies/:id or /actors/:id, where the id parameter is unique to an existing movie/actor. There are two steps to an update. We first find the movie or actor by using the unique id and then we update that entry with the body of the request object, as shown in the following code:

app.put('/actors/:id', function(req, res) {
  //Find and update the actor with this :id
  //Respond to the client
});

In the request, we need request.body to be a JSON object that reflects the actor fields to be updated. The request.params.id is still a unique identifier that refers to an existing actor in the database, as before. On a successful update, the response JSON looks like this:

{
  "_id": "551322589911fefa1f656cc5",
  "id": 1,
  "name": "Axiomzen",
  "birth_year": 99,
  "__v": 0,
  "movies": []
}

Here, the response reflects the changes we made to the data.

Removing an actor with DELETE

Deleting a movie is as simple as submitting a DELETE request to the same routes that were used earlier (specifying the ID). The actor with the appropriate id is found and then deleted:

app.delete('/actors/:id', function(req, res) {
  //Remove the actor with this :id
  //Respond to the client
});

If the actor with the unique id is found, it is deleted and a response code of 204 is returned. If the actor cannot be found, a response code of 400 is returned. There is no response body for a DELETE() method; it will simply return the status code of 204 on a successful deletion. Our final endpoints for this simple app will be as follows:

//Actor endpoints
app.get('/actors', actors.getAll);
app.post('/actors', actors.createOne);
app.get('/actors/:id', actors.getOne);
app.put('/actors/:id', actors.updateOne);
app.delete('/actors/:id', actors.deleteOne);
app.post('/actors/:id/movies', actors.addMovie);
app.delete('/actors/:id/movies/:mid', actors.deleteMovie);

//Movie endpoints
app.get('/movies', movies.getAll);
app.post('/movies', movies.createOne);
app.get('/movies/:id', movies.getOne);
app.put('/movies/:id', movies.updateOne);
app.delete('/movies/:id', movies.deleteOne);
app.post('/movies/:id/actors', movies.addActor);
app.delete('/movies/:id/actors/:aid', movies.deleteActor);

In Express 4, there is an alternative way to describe your routes. Routes that share a common URL but use a different HTTP verb can be grouped together as follows:

app.route('/actors')
  .get(actors.getAll)
  .post(actors.createOne);

app.route('/actors/:id')
  .get(actors.getOne)
  .put(actors.updateOne)
  .delete(actors.deleteOne);

app.post('/actors/:id/movies', actors.addMovie);
app.delete('/actors/:id/movies/:mid', actors.deleteMovie);

app.route('/movies')
  .get(movies.getAll)
  .post(movies.createOne);

app.route('/movies/:id')
  .get(movies.getOne)
  .put(movies.updateOne)
  .delete(movies.deleteOne);

app.post('/movies/:id/actors', movies.addActor);
app.delete('/movies/:id/actors/:aid', movies.deleteActor);

Whether you prefer it this way or not is up to you. At least now you have a choice!
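The root app.js entry point mentioned in the folder-structure section is not shown in this excerpt. A minimal sketch of what it might look like, assuming src/lib/app.js exports the configured Express app as above (the port number and log message are illustrative):

// app.js - launches the server using the app exported from src/lib/app.js
var app = require('./src/lib/app');

app.listen(3000, function() {
  console.log('Movie API listening on port 3000');
});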
We have not discussed the logic of the function being run for each endpoint. We will get to that shortly. Express allows us to easily CRUD our database objects, but how do we model our objects?

Object modeling with Mongoose

Mongoose is an object data modeling (ODM) library that allows you to define schemas for your data collections. You can find out more about Mongoose on the project website: http://mongoosejs.com/. To connect to a MongoDB instance using the mongoose variable, we first need to install Mongoose with npm and save it as a dependency. The save flag automatically adds the module to your package.json with the latest version; thus, it is always recommended to install your modules with the save flag. For modules that you only need locally (for example, Mocha), you can use the save-dev flag.

For this project, we create a new file db.js under /src/lib/db.js, which requires Mongoose. The local connection to the mongodb database is made in mongoose.connect as follows:

var mongoose = require('mongoose');

module.exports = function(app) {
  mongoose.connect('mongodb://localhost/movies', {
    mongoose: { safe: true }
  }, function(err) {
    if (err) {
      return console.log('Mongoose - connection error:', err);
    }
  });
  return mongoose;
};

In our movies database, we need separate schemas for actors and movies. As an example, we will go through object modeling in our actor database /src/models/actor.js by creating an actor schema as follows:

// /src/models/actor.js
var mongoose = require('mongoose');
var generateId = require('./plugins/generateId');

var actorSchema = new mongoose.Schema({
  id: {
    type: Number,
    required: true,
    index: { unique: true }
  },
  name: {
    type: String,
    required: true
  },
  birth_year: {
    type: Number,
    required: true
  },
  movies: [{
    type: mongoose.Schema.ObjectId,
    ref: 'Movie'
  }]
});

actorSchema.plugin(generateId());

module.exports = mongoose.model('Actor', actorSchema);

Each actor has a unique id, a name, and a birth year. The entries also contain validators such as the type and boolean value that are required. The model is exported upon definition (module.exports) so that we can reuse it directly in the app. Alternatively, you could fetch each model through Mongoose using mongoose.model('Actor', actorSchema), but this would feel less explicitly coupled compared to our approach of directly requiring it. Similarly, we need a movie schema as well. We define the movie schema as follows:

// /src/models/movies.js
var movieSchema = new mongoose.Schema({
  id: {
    type: Number,
    required: true,
    index: { unique: true }
  },
  title: {
    type: String,
    required: true
  },
  year: {
    type: Number,
    required: true
  },
  actors: [{
    type: mongoose.Schema.ObjectId,
    ref: 'Actor'
  }]
});

movieSchema.plugin(generateId());

module.exports = mongoose.model('Movie', movieSchema);

Generating unique IDs

In both our movie and actor schemas, we used a plugin called generateId(). While MongoDB automatically generates an ObjectID for each document using the _id field, we want to generate our own IDs that are more human readable and hence friendlier. We also would like to give the user the opportunity to select an id of their own choice. However, being able to choose an id can cause conflicts: if you were to choose an id that already exists, your POST request would be rejected. We should autogenerate an ID if the user does not pass one explicitly. Without this plugin, if either an actor or a movie is created without an explicit ID passed along by the user, the server would complain, since the ID is required.
We can create middleware for Mongoose that assigns an id before we persist the object, as follows:

// /src/models/plugins/generateId.js
module.exports = function() {
  return function generateId(schema) {
    schema.pre('validate', function(next, done) {
      var instance = this;
      var model = instance.model(instance.constructor.modelName);
      if (instance.id == null) {
        model.findOne().sort("-id").exec(function(err, maxInstance) {
          if (err) {
            return done(err);
          } else {
            var maxId = maxInstance.id || 0;
            instance.id = maxId + 1;
            done();
          }
        })
      } else {
        done();
      }
    })
  }
};

There are a few important notes about this code. See what we did to get the var model? This makes the plugin generic so that it can be applied to multiple Mongoose schemas. Notice that there are two callbacks available: next and done. The next variable passes the code to the next pre-validation middleware. That's something you would usually put at the bottom of the function, right after you make your asynchronous call. This is generally a good thing, since one of the advantages of asynchronous calls is that you can have many things running at the same time. However, in this case, we cannot call the next variable because it would conflict with our model definition of id being required. Thus, we just stick to using the done variable when the logic is complete.

Another concern arises from the fact that MongoDB doesn't support transactions, which means you may have to account for this function failing in some edge cases. For example, if two calls to POST /actors happen at the same time, they will both have their IDs auto-incremented to the same value. Now that we have the code for our generateId() plugin, we require it in our actor and movie schemas as follows:

var generateId = require('./plugins/generateId');
actorSchema.plugin(generateId());

Validating your database

Each key in the Mongoose schema defines a property that is associated with a SchemaType. For example, in our actors.js schema, the actor's name key is associated with a string SchemaType. String, number, date, buffer, boolean, mixed, objectId, and array are all valid schema types. In addition to schema types, numbers have min and max validators, and strings have enum and match validators. Validation occurs when a document is being saved (.save()) and will return an error object, containing type, path, and value properties, if the validation has failed.

Extracting functions to reusable middleware

We can use our anonymous or named functions as middleware. To do so, we export our functions by calling module.exports in routes/actors.js and routes/movies.js. Let's take a look at our routes/actors.js file. At the top of this file, we require the Mongoose schemas we defined before:

var Actor = require('../models/actor');

This allows our variable Actor to access our MongoDB using mongo functions such as find(), create(), and update(), and it will follow the schema defined in the file /models/actor. Since actors are in movies, we also need to require the Movie schema to show this relationship:

var Movie = require('../models/movie');

Now that we have our schemas, we can begin defining the logic for the functions we described in the endpoints. For example, the endpoint GET /actors/:id will retrieve the actor with the corresponding ID from our database. Let's call this function getOne().
It is defined as follows:

getOne: function(req, res, next) {
  Actor.findOne({ id: req.params.id })
    .populate('movies')
    .exec(function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      res.status(200).json(actor);
    });
},

Here, we use the mongo findOne() method to retrieve the actor with id: req.params.id. There are no joins in MongoDB, so we use the .populate() method to retrieve the movies the actor is in. The .populate() method will retrieve documents from a separate collection based on its ObjectId. This function will return a status 400 if something went wrong with our Mongoose driver, a status 404 if the actor with :id is not found, and finally, it will return a status 200 along with the JSON of the actor object if an actor is found. We define all the functions required for the actor endpoints in this file. The result is as follows:

// /src/routes/actors.js
var Actor = require('../models/actor');
var Movie = require('../models/movie');

module.exports = {
  getAll: function(req, res, next) {
    Actor.find(function(err, actors) {
      if (err) return res.status(400).json(err);
      res.status(200).json(actors);
    });
  },

  createOne: function(req, res, next) {
    Actor.create(req.body, function(err, actor) {
      if (err) return res.status(400).json(err);
      res.status(201).json(actor);
    });
  },

  getOne: function(req, res, next) {
    Actor.findOne({ id: req.params.id })
      .populate('movies')
      .exec(function(err, actor) {
        if (err) return res.status(400).json(err);
        if (!actor) return res.status(404).json();
        res.status(200).json(actor);
      });
  },

  updateOne: function(req, res, next) {
    Actor.findOneAndUpdate({ id: req.params.id }, req.body, function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      res.status(200).json(actor);
    });
  },

  deleteOne: function(req, res, next) {
    Actor.findOneAndRemove({ id: req.params.id }, function(err) {
      if (err) return res.status(400).json(err);
      res.status(204).json();
    });
  },

  addMovie: function(req, res, next) {
    Actor.findOne({ id: req.params.id }, function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      Movie.findOne({ id: req.body.id }, function(err, movie) {
        if (err) return res.status(400).json(err);
        if (!movie) return res.status(404).json();
        actor.movies.push(movie);
        actor.save(function(err) {
          if (err) return res.status(500).json(err);
          res.status(201).json(actor);
        });
      })
    });
  },

  deleteMovie: function(req, res, next) {
    Actor.findOne({ id: req.params.id }, function(err, actor) {
      if (err) return res.status(400).json(err);
      if (!actor) return res.status(404).json();
      actor.movies = [];
      actor.save(function(err) {
        if (err) return res.status(400).json(err);
        res.status(204).json(actor);
      })
    });
  }
};

For all of our movie endpoints, we need the same functions but applied to the movie collection. After exporting these two files, we require them in app.js (/src/lib/app.js) by simply adding:

require('../routes/movies');
require('../routes/actors');

By exporting our functions as reusable middleware, we keep our code clean and can refer to the functions in our CRUD calls in the /routes folder.

Testing

Mocha is used as the test framework, along with should.js and supertest. Supertest lets you test your HTTP assertions and API endpoints. The tests are placed in the root folder /test.
Tests are completely separate from any of the source code and are written to be readable in plain English, that is, you should be able to follow along with what is being tested just by reading through them. Well-written tests with good coverage can serve as a readme for its API, since it clearly describes the behavior of the entire app. The initial setup to test our movies API is the same for both /test/actors.js and /test/movies.js: var should = require('should'); var assert = require('assert'); var request = require('supertest'); var app = require('../src/lib/app'); In src/test/actors.js, we test the basic CRUD operations: creating a new actor object, retrieving, editing, and deleting the actor object. An example test for the creation of a new actor is shown as follows: describe('Actors', function() { describe('POST actor', function(){ it('should create an actor', function(done){ var actor = { 'id': '1', 'name': 'AxiomZen', 'birth_year': '2012', }; request(app) .post('/actors') .send(actor) .expect(201, done) }); We can see that the tests are readable in plain English. We create a new POST request for a new actor to the database with the id of 1, name of AxiomZen, and birth_year of 2012. Then, we send the request with the .send() function. Similar tests are present for GET and DELETE requests as given in the following code: describe('GET actor', function() { it('should retrieve actor from db', function(done){ request(app) .get('/actors/1') .expect(200, done); }); describe('DELETE actor', function() { it('should remove a actor', function(done) { request(app) .delete('/actors/1') .expect(204, done); }); }); To test our PUT request, we will edit the name and birth_year of our first actor as follows: describe('PUT actor', function() { it('should edit an actor', function(done) { var actor = { 'name': 'ZenAxiom', 'birth_year': '2011' }; request(app) .put('/actors/1') .send(actor) .expect(200, done); }); it('should have been edited', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.name.should.eql('ZenAxiom'); res.body.birth_year.should.eql(2011); done(); }); }); }); The first part of the test modifies the actor name and birth_year keys, sends a PUT request for /actors/1 (1 is the actors id), and then saves the new information to the database. The second part of the test checks whether the database entry for the actor with id 1 has been changed. The name and birth_year values are checked against their expected values using .should.eql(). In addition to performing CRUD actions on the actor object, we can also perform these actions to the movies we add to each actor (associated by the actor's ID). The following snippet shows a test to add a new movie to our first actor (with the id of 1): describe('POST /actors/:id/movies', function() { it('should successfully add a movie to the actor',function(done) { var movie = { 'id': '1', 'title': 'Hello World', 'year': '2013' } request(app) .post('/actors/1/movies') .send(movie) .expect(201, done) }); }); it('actor should have array of movies now', function(done){ request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.movies.should.eql(['1']); done(); }); }); }); The first part of the test creates a new movie object with id, title, and year keys, and sends a POST request to add the movies as an array to the actor with id of 1. The second part of the test sends a GET request to retrieve the actor with id of 1, which should now include an array with the new movie input. 
We can similarly delete the movie entries as illustrated in the actors.js test file: describe('DELETE /actors/:id/movies/:movie_id', function() { it('should successfully remove a movie from actor', function(done){ request(app) .delete('/actors/1/movies/1') .expect(200, done); }); it('actor should no longer have that movie id', function(done){ request(app) .get('/actors/1') .expect(201) .end(function(err, res) { res.body.movies.should.eql([]); done(); }); }); }); Again, this code snippet should look familiar to you. The first part tests that sending a DELETE request specifying the actor ID and movie ID will delete that movie entry. In the second part, we make sure that the entry no longer exists by submitting a GET request to view the actor's details where no movies should be listed. In addition to ensuring that the basic CRUD operations work, we also test our schema validations. The following code tests to make sure two actors with the same ID do not exist (IDs are specified as unique): it('should not allow you to create duplicate actors', function(done) { var actor = { 'id': '1', 'name': 'AxiomZen', 'birth_year': '2012', }; request(app) .post('/actors') .send(actor) .expect(400, done); }); We should expect code 400 (bad request) if we try to create an actor who already exists in the database. A similar set of tests is present for tests/movies.js. The function and outcome of each test should be evident now. Summary In this article, we created a basic API that connects to MongoDB and supports CRUD methods. You should now be able to set up an API complete with tests, for any data, not just movies and actors! We hope you found that this article has laid a good foundation for the Express and API setup. To learn more about Express.js, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mastering Web Application Development with Express(https://www.packtpub.com/web-development/mastering-web-application-development-express) Advanced Express Web Application Development(https://www.packtpub.com/web-development/advanced-express-web-application-development) Resources for Article: Further resources on this subject: Metal API: Get closer to the bare metal with Metal API [Article] Building a Basic Express Site [Article] Introducing Sails.js [Article]

Stack Overflow confirms production systems hacked

Vincy Davis
17 May 2019
2 min read
Almost a week after the attack, Stack Overflow admitted in an official security update yesterday that its production systems have been hacked. “Over the weekend, there was an attack on Stack Overflow. We have confirmed that some level of production access was gained on May 11”, said Mary Ferguson, VP of Engineering at Stack Overflow. In this short update, the company mentioned that it is investigating the extent of the access and addressing all the known vulnerabilities. Although the investigation is not complete, the company has so far identified no breach of customer or user data.

https://twitter.com/gcluley/status/1129260135778607104

Some users acknowledged that the firm has at least come forward and admitted the security violation. A user on Reddit said, “Wow. I'm glad they're letting us know early, but this sucks.” Other users feel that security breaches of this kind are very common nowadays. A user on Hacker News commented, “I think we've reached a point where it's safe to say that if you're using a service -any service - assume your data is breached (or willingly given) and accessible to some unknown third party. That third party can be the government, it can be some random marketer or it can be a malicious hacker. Just hope that you have nothing anywhere that may be of interest or value to anyone, anywhere. Good luck.”

A few days ago, there were reports that Stack Overflow directly links to Facebook profile pictures. This linking unintentionally allows Facebook to track user activity throughout Stack Exchange, as well as the topics that users are interested in.

Read More: Facebook again, caught tracking Stack Overflow user activity and data

Stack Overflow has also assured users that more information will be provided once the company concludes its investigation.

Stack Overflow survey data further confirms Python’s popularity as it moves above Java in the most used programming language list
2019 Stack Overflow survey: A quick overview
Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman

Testing and Quality Control

Packt
04 Jan 2017
19 min read
In this article by Pablo Solar Vilariño and Carlos Pérez Sánchez, the author of the book, PHP Microservices, we will see the following topics: (For more resources related to this topic, see here.) Test-driven development Behavior-driven development Acceptance test-driven development Tools Test-driven development Test-Driven Development (TDD) is part of Agile philosophy, and it appears to solve the common developer's problem that shows when an application is evolving and growing, and the code is getting sick, so the developers fix the problems to make it run but every single line that we add can be a new bug or it can even break other functions. Test-driven development is a learning technique that helps the developer to learn about the domain problem of the application they are going to build, doing it in an iterative, incremental, and constructivist way: Iterative because the technique always repeats the same process to get the value Incremental because for each iteration, we have more unit tests to be used Constructivist because it is possible to test all we are developing during the process straight away, so we can get immediate feedback Also, when we finish developing each unit test or iteration, we can forget it because it will be kept from now on throughout the entire development process, helping us to remember the domain problem through the unit test; this is a good approach for forgetful developers. It is very important to understand that TDD includes four things: analysis, design, development, and testing; in other words, doing TDD is understanding the domain problem and correctly analyzing the problem, designing the application well, developing well, and testing it. It needs to be clear; TDD is not just about implementing unit tests, it is the whole process of software development. TDD perfectly matches projects based on microservices because using microservices in a large project is dividing it into little microservices or functionalities, and it is like an aggrupation of little projects connected by a communication channel. The project size is independent of using TDD because in this technique, you divide each functionality into little examples, and to do this, it does not matter if the project is big or small, and even less when our project is divided by microservices. Also, microservices are still better than a monolithic project because the functionalities for the unit tests are organized in microservices, and it will help the developers to know where they can begin using TDD. How to do TDD? Doing TDD is not difficult; we just need to follow some steps and repeat them by improving our code and checking that we did not break anything. TDD involves the following steps: Write the unit test: It needs to be the simplest and clearest test possible, and once it is done, it has to fail; this is mandatory. If it does not fail, there is something that we are not doing properly. Run the tests: If it has errors (it fails), this is the moment to develop the minimum code to pass the test, just what is necessary, do not code additional things. Once you develop the minimum code to pass the test, run the test again (step two); if it passes, go to the next step, if not then fix it and run the test again. Improve the test: If you think it is possible to improve the code you wrote, do it and run the tests again (step two). If you think it is perfect then write a new unit test (step one). 
To do TDD, it is necessary to write the tests before implementing the function; if the tests are written after the implementation has started, it is not TDD; it is just testing. If we start implementing the application without testing and it is finished, or if we start creating unit tests during the process, we are doing the classic testing and we are not approaching the TDD benefits. Developing the functions without prior testing, the abstract idea of the domain problem in your mind can be wrong or may even be clear at the start but during the development process it can change or the concepts can be mixed. Writing the tests after that, we are checking if all the ideas in our main were correct after we finished the implementation, so probably we have to change some methods or even whole functionalities after spend time coding. Obviously, testing is always better than not testing, but doing TDD is still better than just classic testing. Why should I use TDD? TDD is the answer to questions such as: Where shall I begin? How can I do it? How can I write code that can be modified without breaking anything? How can I know what I have to implement? The goal is not to write many unit tests without sense but to design it properly following the requirements. In TDD, we do not to think about implementing functions, but we think about good examples of functions related with the domain problem in order to remove the ambiguity created by the domain problem. In other words, by doing TDD, we should reproduce a specific function or case of use in X examples until we get the necessary examples to describe the function or task without ambiguity or misinterpretations. TDD can be the best way to document your application. Using other methodologies of software development, we start thinking about how the architecture is going to be, what pattern is going to be used, how the communication between microservices is going to be, and so on, but what happens if once we have all this planned, we realize that this is not necessary? How much time is going to pass until we realize that? How much effort and money are we going to spend? TDD defines the architecture of our application by creating little examples in many iterations until we realize what the architecture is; the examples will slowly show us the steps to follow in order to define what the best structures, patterns, or tools to use are, avoiding expenditure of resources during the firsts stages of our application. This does not mean that we are working without an architecture; obviously, we have to know if our application is going to be a website or a mobile app and use a proper framework. What is going to be the interoperability in the application? In our case it will be an application based on microservices, so it will give us support to start creating the first unit tests. The architectures that we remove are the architectures on top of the architecture, in other words, the guidelines to develop an application as always. TDD will produce an architecture without ambiguity from unit testing. TDD is not cure-all: In other words, it does not give the same results to a senior developer as to a junior developer, but it is useful for the entire team. 
Let's look at some advantages of using TDD: Code reuse: Creates every functionality with only the necessary code to pass the tests in the second stage (Green) and allows you to see if there are more functions using the same code structure or parts of a specific function, so it helps you to reuse the previous code you wrote. Teamwork is easier: It allows you to be confident with your team colleagues. Some architects or senior developers do not trust developers with poor experience, and they need to check their code before committing the changes, creating a bottleneck at that point, so TDD helps to trust developers with less experience. Increases communication between team colleagues: The communication is more fluent, so the team share their knowledge about the project reflected on the unit tests. Avoid overdesigning application in the first stages: As we said before, doing TDD allows you to have an overview of the application little by little, avoiding the creation of useless structures or patterns in your project, which, maybe, you will trash in the future stages. Unit tests are the best documentation: The best way to give a good point of view of a specific functionality is reading its unit test. It will help to understand how it works instead of human words. Allows discovering more use cases in the design stage: In every test you have to create, you will understand how the functionality should work better and all the possible stages that a functionality can have. Increases the feeling of a job well done: In every commit of your code, you will have the feeling that it was done properly because the rest of the unit tests passes without errors, so you will not be worried about other broken functionalities. Increases the software quality: During the step of refactoring, we spend our efforts on making the code more efficient and maintainable, checking that the whole project still works properly after the changes. TDD algorithm The technical concepts and steps to follow the TDD algorithm are easy and clear, and the proper way to make it happen improves by practicing it. There are only three steps, called red, green, and refactor: Red – Writing the unit tests It is possible to write a test even when the code is not written yet; you just need to think about whether it is possible to write a specification before implementing it. So, in this first step you should consider that the unit test you start writing is not like a unit test, but it is like an example or specification of the functionality. In TDD, this first example or specification is not immovable; in other words, the unit test can be modified in the future. Before starting to write the first unit test, it is necessary to think about how the Software Under Test (SUT) is going to be. We need to think about how the SUT code is going to be and how we would check that it works they way we want it to. The way that TDD works drives us to firstly design what is more comfortable and clear if it fits the requirements. Green – Make the code work Once the example is written, we have to code the minimum to make it pass the test; in other words, set the unit test to green. It does not matter if the code is ugly and not optimized, it will be our task in the next step and iterations. In this step, the important thing is only to write the necessary code for the requirements without unnecessary things. It does not mean writing without thinking about the functionality, but thinking about it to be efficient. 
It looks easy but you will realize that you will write extra code the first time. If you concentrate on this step, new questions will appear about the SUT behavior with different entries, but you should be strong and avoid writing extra code about other functionalities related to the current one. Instead of coding them, take notes to convert them into functionalities in the next iterations. Refactor – Eliminate redundancy Refactoring is not the same as rewriting code. You should be able to change the design without changing the behavior. In this step, you should remove the duplicity in your code and check if the code matches the principles of good practices, thinking about the efficiency, clarity, and future maintainability of the code. This part depends on the experience of each developer. The key to good refactoring is making it in small steps To refactor a functionality, the best way is to change a little part and then execute all the available tests; if they pass, continue with another little part, until you are happy with the obtained result. Behavior-driven development Behavior-Driven Development (BDD) is a process that broadens the TDD technique and mixes it with other design ideas and business analyses provided to the developers, in order to improve the software development. In BDD, we test the scenarios and classes’ behavior in order to meet the scenarios, which can be composed by many classes. It is very useful to use a DSL in order to have a common language to be used by the customer, project owner, business analyst, or developers. The goal is to have a ubiquitous language. What is BDD? As we said before, BDD is an AGILE technique based on TDD and ATDD, promoting the collaboration between the entire team of a project. The goal of BDD is that the entire team understands what the customer wants, and the customer knows what the rest of the team understood from their specifications. Most of the times, when a project starts, the developers don't have the same point of view as the customer, and during the development process the customer realizes that, maybe, they did not explain it or the developer did not understand it properly, so it adds more time to changing the code to meet the customer's needs. So, BDD is writing test cases in human language, using rules, or in a ubiquitous language, so the customer and developers can understand it. It also defines a DSL for the tests. How does it work? It is necessary to define the features as user stories (we will explain what this is in the ATDD section of this article) and their acceptance criteria. Once the user story is defined, we have to focus on the possible scenarios, which describe the project behavior for a concrete user or a situation using DSL. The steps are: Given [context], When [event occurs], Then [Outcome]. To sum up, the defined scenario for a user story gives the acceptance criteria to check if the feature is done. Acceptance Test-Driven Development Perhaps, the most important methodology in a project is the Acceptance Test-Driven Development (ATDD) or Story Test-Driven Development (STDD); it is TDD but on a different level. The acceptance (or customer) tests are the written criteria for a project meeting the business requirements that the customer demands. They are examples (like the examples in TDD) written by the project owner. It is the start of development for each iteration, the bridge between Scrum and agile development. 
In ATDD, we start the implementation of our project in a way different from the traditional methodologies. The business requirements written in human language are replaced by executables agreed upon by some team members and also the customer. It is not about replacing the whole documentation, but only a part of the requirements. The advantages of using ATDD are the following: Real examples and a common language for the entire team to understand the domain It allows identifying the domain rules properly It is possible to know if a user story is finished in each iteration The workflow works from the first steps The development does not start until the tests are defined and accepted by the team ATDD algorithm The algorithm of ATDD is like that of TDD but reaches more people than only the developers; in other words, doing ATDD, the tests of each story are written in a meeting that includes the project owners, developers, and QA technicians because the entire team must understand what is necessary to do and why it is necessary, so they can see if it is what the code should do. The ATDD cycle is depicted in the following diagram: Discuss The starting point of the ATDD algorithm is the discussion. In this first step, the business has a meeting with the customer to clarify how the application should work, and the analyst should create the user stories from that conversation. Also, they should be able to explain the conditions of satisfaction of every user story in order to be translated into examples. By the end of the meeting, the examples should be clear and concise, so we can get a list of examples of user stories in order to cover all the needs of the customer, reviewed and understood for him. Also, the entire team will have a project overview in order to understand the business value of the user story, and in case the user story is too big, it could be divided into little user stories, getting the first one for the first iteration of this process. Distill High-level acceptance tests are written by the customer and the development team. In this step, the writing of the test cases that we got from the examples in the discussion step begins, and the entire team can take part in the discussion and help clarify the information or specify the real needs of that. The tests should cover all the examples that were discovered in the discussion step, and extra tests could be added during this process bit by bit till we understand the functionality better. At the end of this step, we will obtain the necessary tests written in human language, so the entire team (including the customer) can understand what they are going to do in the next step. These tests can be used like a documentation. Develop In this step, the development of acceptance test cases is begun by the development team and the project owner. The methodology to follow in this step is the same as TDD, the developers should create a test and watch it fail (Red) and then develop the minimum amount of lines to pass (Green). Once the acceptance tests are green, this should be verified and tested to be ready to be delivered. During this process, the developers may find new scenarios that need to be added into the tests or even if it needs a large amount of work, it could be pushed to the user story. At the end of this step, we will have software that passes the acceptance tests and maybe more comprehensive tests. 
Demo The created functionality is shown by running the acceptance test cases and manually exploring the features of the new functionality. After the demonstration, the team discusses whether the user story was done properly and it meets the product owner's needs and decides if it can continue with the next story. Tools After knowing more about TDD and BDD, it is time to explain a few tools you can use in your development workflow. There are a lot of tools available, but we will only explain the most used ones. Composer Composer is a PHP tool used to manage software dependencies. You only need to declare the libraries needed by your project and the composer will manage them, installing and updating when necessary. This tool has only a few requirements: if you have PHP 5.3.2+, you are ready to go. In the case of a missing requirement, the composer will warn you. You could install this dependency manager on your development machine, but since we are using Docker, we are going to install it directly on our PHP-FPM containers. The installation of composer in Docker is very easy; you only need to add the following rule to the Dockerfile: RUN curl -sS https://getcomposer.org/installer | php -- --install-"dir=/usr/bin/ --filename=composer PHPUnit Another tool we need for our project is PHPUnit, a unit test framework. As before, we will be adding this tool to our PHP-FPM containers to keep our development machine clean. If you are wondering why we are not installing anything on our development machine except for Docker, the response is clear. Having everything in the containers will help you avoid any conflict with other projects and gives you the flexibility of changing versions without being too worried. Add the following RUN command to your PHP-FPM Dockerfile, and you will have the latest PHPUnit version installed and ready to use: RUN curl -sSL https://phar.phpunit.de/phpunit.phar -o "/usr/bin/phpunit && chmod +x /usr/bin/phpunit Now that we have all our requirements too, it is time to install our PHP framework and start doing some TDD stuff. Later, we will continue updating our Docker environment with new tools. We choose Lumen for our example. Please feel free to adapt all the examples to your favorite framework. Our source code will be living inside our containers, but at this point of development, we do not want immutable containers. We want every change we make to our code to be available instantaneously in our containers, so we will be using a container as a storage volume. To create a container with our source and use it as a storage volume, we only need to edit our docker-compose.yml and create one source container per each microservice, as follows: source_battle: image: nginx:stable volumes: - ../source/battle:/var/www/html command: "true" The above piece of code creates a container image named source_battle, and it stores our battle source (located at ../source/battle from the docker-compose.yml current path). Once we have our source container available, we can edit each one of our services and assign a volume. For instance, we can add the following line in our microservice_battle_fpm and microservice_battle_nginx container descriptions: volumes_from: - source_battle Our battle source will be available in our source container in the path, /var/www/html, and the remaining step to install Lumen is to do a simple composer execution. 
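Putting the pieces from this section together, the relevant fragment of docker-compose.yml might end up looking roughly like the following. This is only a sketch: the service names and the source_battle volume container are taken from the article, while the build paths shown for the battle services are placeholders for whatever the project actually uses:

source_battle:
  image: nginx:stable
  volumes:
    - ../source/battle:/var/www/html
  command: "true"

microservice_battle_fpm:
  build: ./microservices/battle/php-fpm/    # placeholder path
  volumes_from:
    - source_battle

microservice_battle_nginx:
  build: ./microservices/battle/nginx/      # placeholder path
  volumes_from:
    - source_battle

With this in place, any change made to ../source/battle on the development machine is immediately visible inside both battle containers at /var/www/html.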
First, you need to be sure that your infrastructure is up with a simple command, as follows: $ docker-compose up The preceding command spins up our containers and outputs the log to the standard IO. Now that we are sure that everything is up and running, we need to enter in our PHP-FPM containers and install Lumen. If you need to know the names assigned to each one of your containers, you can do a $ docker ps and copy the container name. As an example, we are going to enter the battle PHP-FPM container with the following command: $ docker exec -it docker_microservice_battle_fpm_1 /bin/bash The preceding command opens an interactive shell in your container, so you can do anything you want; let's install Lumen with a single command: # cd /var/www/html && composer create-project --prefer-dist "laravel/lumen . Repeat the preceding commands for each one of your microservices. Now, you have everything ready to start doing Unit tests and coding your application. Summary In this article, you learned about test-driven development, behavior-driven development, acceptance test-driven development, and PHPUnit. Resources for Article: Further resources on this subject: Running Simpletest and PHPUnit [Article] Understanding PHP basics [Article] The Multi-Table Query Generator using phpMyAdmin and MySQL [Article]

How to Create a Flappy Bird Clone with MelonJS

Ellison Leao
26 Sep 2014
18 min read
How to create a Flappy Bird clone using MelonJS Web game frameworks such as MelonJS are becoming more popular every day. In this post I will show you how easy it is to create a Flappy Bird clone game using the MelonJS bag of tricks. I will assume that you have some experience with JavaScript and that you have visited the melonJS official page. All of the code shown in this post is available on this GitHub repository. Step 1 - Organization A MelonJS game can be divided into three basic objects: Scene objects: Define all of the game scenes (Play, Menus, Game Over, High Score, and so on) Game entities: Add all of the stuff that interacts on the game (Players, enemies, collectables, and so on) Hud entities: All of the HUD objects to be inserted on the scenes (Life, Score, Pause buttons, and so on) For our Flappy Bird game, first create a directory, flappybirdgame, on your machine. Then create the following structure: flabbybirdgame | |--js |--|--entities |--|--screens |--|--game.js |--|--resources.js |--data |--|--img |--|--bgm |--|--sfx |--lib |--index.html Just a quick explanation about the folders: The js contains all of the game source. The entities folder will handle the HUD and the Game entities. In the screen folder, we will create all of the scene files. The game.js is the main game file. It will initialize all of the game resources, which is created in the resources.js file, the input, and the loading of the first scene. The data folder is where all of the assets, sounds, and game themes are inserted. I divided the folders into img for images (backgrounds, player atlas, menus, and so on), bgm for background music files (we need to provide a .ogg and .mp3 file for each sound if we want full compatibility with all browsers) and sfx for sound effects. In the lib folder we will add the current 1.0.2 version of MelonJS. Lastly, an index.html file is used to build the canvas. Step 2 - Implementation First we will build the game.js file: var game = { data: { score : 0, steps: 0, start: false, newHiScore: false, muted: false }, "onload": function() { if (!me.video.init("screen", 900, 600, true, 'auto')) { alert("Your browser does not support HTML5 canvas."); return; } me.audio.init("mp3,ogg"); me.loader.onload = this.loaded.bind(this); me.loader.preload(game.resources); me.state.change(me.state.LOADING); }, "loaded": function() { me.state.set(me.state.MENU, new game.TitleScreen()); me.state.set(me.state.PLAY, new game.PlayScreen()); me.state.set(me.state.GAME_OVER, new game.GameOverScreen()); me.input.bindKey(me.input.KEY.SPACE, "fly", true); me.input.bindKey(me.input.KEY.M, "mute", true); me.input.bindPointer(me.input.KEY.SPACE); me.pool.register("clumsy", BirdEntity); me.pool.register("pipe", PipeEntity, true); me.pool.register("hit", HitEntity, true); // in melonJS 1.0.0, viewport size is set to Infinity by default me.game.viewport.setBounds(0, 0, 900, 600); me.state.change(me.state.MENU); } }; The game.js is divided into: data object: This global object will handle all of the global variables that will be used on the game. For our game we will use score to record the player score, and steps to record how far the bird goes. The other variables are flags that we are using to control some game states. onload method: This method preloads the resources and initializes the canvas screen and then calls the loaded method when it's done. loaded method: This method first creates and puts into the state stack the screens that we will use on the game. 
We will use the implementation for these screens later on. It enables all of the input keys to handle the game. For our game we will be using the space and left mouse keys to control the bird and the M key to mute sound. It also adds the game entities BirdEntity, PipeEntity and the HitEntity in the game poll. I will explain the entities later. Then you need to create the resource.js file: game.resources = [ {name: "bg", type:"image", src: "data/img/bg.png"}, {name: "clumsy", type:"image", src: "data/img/clumsy.png"}, {name: "pipe", type:"image", src: "data/img/pipe.png"}, {name: "logo", type:"image", src: "data/img/logo.png"}, {name: "ground", type:"image", src: "data/img/ground.png"}, {name: "gameover", type:"image", src: "data/img/gameover.png"}, {name: "gameoverbg", type:"image", src: "data/img/gameoverbg.png"}, {name: "hit", type:"image", src: "data/img/hit.png"}, {name: "getready", type:"image", src: "data/img/getready.png"}, {name: "new", type:"image", src: "data/img/new.png"}, {name: "share", type:"image", src: "data/img/share.png"}, {name: "tweet", type:"image", src: "data/img/tweet.png"}, {name: "leader", type:"image", src: "data/img/leader.png"}, {name: "theme", type: "audio", src: "data/bgm/"}, {name: "hit", type: "audio", src: "data/sfx/"}, {name: "lose", type: "audio", src: "data/sfx/"}, {name: "wing", type: "audio", src: "data/sfx/"}, ]; Now let's create the game entities. First the HUD elements: create a HUD.js file in the entities folder. In this file you will create: A score entity A background layer entity The share buttons entities (Facebook, Twitter, and so on) game.HUD = game.HUD || {}; game.HUD.Container = me.ObjectContainer.extend({ init: function() { // call the constructor this.parent(); // persistent across level change this.isPersistent = true; // non collidable this.collidable = false; // make sure our object is always draw first this.z = Infinity; // give a name this.name = "HUD"; // add our child score object at the top left corner this.addChild(new game.HUD.ScoreItem(5, 5)); } }); game.HUD.ScoreItem = me.Renderable.extend({ init: function(x, y) { // call the parent constructor // (size does not matter here) this.parent(new me.Vector2d(x, y), 10, 10); // local copy of the global score this.stepsFont = new me.Font('gamefont', 80, '#000', 'center'); // make sure we use screen coordinates this.floating = true; }, update: function() { return true; }, draw: function (context) { if (game.data.start && me.state.isCurrent(me.state.PLAY)) this.stepsFont.draw(context, game.data.steps, me.video.getWidth()/2, 10); } }); var BackgroundLayer = me.ImageLayer.extend({ init: function(image, z, speed) { name = image; width = 900; height = 600; ratio = 1; // call parent constructor this.parent(name, width, height, image, z, ratio); }, update: function() { if (me.input.isKeyPressed('mute')) { game.data.muted = !game.data.muted; if (game.data.muted){ me.audio.disable(); }else{ me.audio.enable(); } } return true; } }); var Share = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "share"; settings.spritewidth = 150; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? 
Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; FB.ui( { method: 'feed', name: 'My Clumsy Bird Score!', caption: "Share to your friends", description: ( shareText ), link: url, picture: 'http://ellisonleao.github.io/clumsy-bird/data/img/clumsy.png' } ); return false; } }); var Tweet = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "tweet"; settings.spritewidth = 152; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; var hashtags = 'clumsybird,melonjs' window.open('https://twitter.com/intent/tweet?text=' + shareText + '&hashtags=' + hashtags + '&count=' + url + '&url=' + url, 'Tweet!', 'height=300,width=400') return false; } }); You should notice that there are different me classes for different types of entities. The ScoreItem is a Renderable object that is created under an ObjectContainer and it will render the game steps on the play screen that we will create later. The share and Tweet buttons are created with the GUI_Object class. This class implements the onClick event that handles click events used to create the share events. The BackgroundLayer is a particular object created using the ImageLayer class. This class controls some generic image layers that can be used in the game. In our particular case we are just using a single fixed image, with fixed ratio and no scrolling. Now to the game entities. For this game we will need: BirdEntity: The bird and its behavior PipeEntity: The pipe object HitEntity: A invisible entity just to get the steps counting PipeGenerator: Will handle the PipeEntity creation Ground: A entity for the ground TheGround: The animated ground Container Add an entities.js file into the entities folder: var BirdEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('clumsy'); settings.width = 85; settings.height = 60; settings.spritewidth = 85; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0.2; this.gravityForce = 0.01; this.maxAngleRotation = Number.prototype.degToRad(30); this.maxAngleRotationDown = Number.prototype.degToRad(90); this.renderable.addAnimation("flying", [0, 1, 2]); this.renderable.addAnimation("idle", [0]); this.renderable.setCurrentAnimation("flying"); this.animationController = 0; // manually add a rectangular collision shape this.addShape(new me.Rect(new me.Vector2d(5, 5), 70, 50)); // a tween object for the flying physic effect this.flyTween = new me.Tween(this.pos); this.flyTween.easing(me.Tween.Easing.Exponential.InOut); }, update: function(dt) { // mechanics if (game.data.start) { if (me.input.isKeyPressed('fly')) { me.audio.play('wing'); this.gravityForce = 0.01; var currentPos = this.pos.y; // stop the previous one this.flyTween.stop() this.flyTween.to({y: currentPos - 72}, 100); this.flyTween.start(); this.renderable.angle = -this.maxAngleRotation; } else { this.gravityForce += 0.2; this.pos.y += me.timer.tick * this.gravityForce; this.renderable.angle += Number.prototype.degToRad(3) * me.timer.tick; if (this.renderable.angle > this.maxAngleRotationDown) this.renderable.angle = this.maxAngleRotationDown; } } var res = me.game.world.collide(this); if (res) { if (res.obj.type != 'hit') { me.device.vibrate(500); me.state.change(me.state.GAME_OVER); return false; } // remove the 
hit box me.game.world.removeChildNow(res.obj); // the give dt parameter to the update function // give the time in ms since last frame // use it instead ? game.data.steps++; me.audio.play('hit'); } else { var hitGround = me.game.viewport.height - (96 + 60); var hitSky = -80; // bird height + 20px if (this.pos.y >= hitGround || this.pos.y <= hitSky) { me.state.change(me.state.GAME_OVER); return false; } } return this.parent(dt); }, }); var PipeEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('pipe'); settings.width = 148; settings.height= 1664; settings.spritewidth = 148; settings.spriteheight= 1664; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; }, update: function(dt) { // mechanics this.pos.add(new me.Vector2d(-this.gravity * me.timer.tick, 0)); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var PipeGenerator = me.Renderable.extend({ init: function() { this.parent(new me.Vector2d(), me.game.viewport.width, me.game.viewport.height); this.alwaysUpdate = true; this.generate = 0; this.pipeFrequency = 92; this.pipeHoleSize = 1240; this.posX = me.game.viewport.width; }, update: function(dt) { if (this.generate++ % this.pipeFrequency == 0) { var posY = Number.prototype.random( me.video.getHeight() - 100, 200 ); var posY2 = posY - me.video.getHeight() - this.pipeHoleSize; var pipe1 = new me.pool.pull("pipe", this.posX, posY); var pipe2 = new me.pool.pull("pipe", this.posX, posY2); var hitPos = posY - 100; var hit = new me.pool.pull("hit", this.posX, hitPos); pipe1.renderable.flipY(); me.game.world.addChild(pipe1, 10); me.game.world.addChild(pipe2, 10); me.game.world.addChild(hit, 11); } return true; }, }); var HitEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('hit'); settings.width = 148; settings.height= 60; settings.spritewidth = 148; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; this.type = 'hit'; this.renderable.alpha = 0; this.ac = new me.Vector2d(-this.gravity, 0); }, update: function() { // mechanics this.pos.add(this.ac); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var Ground = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('ground'); settings.width = 900; settings.height= 96; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0; this.updateTime = false; this.accel = new me.Vector2d(-4, 0); }, update: function() { // mechanics this.pos.add(this.accel); if (this.pos.x < -this.renderable.width) { this.pos.x = me.video.getWidth() - 10; } return true; }, }); var TheGround = Object.extend({ init: function() { this.ground1 = new Ground(0, me.video.getHeight() - 96); this.ground2 = new Ground(me.video.getWidth(), me.video.getHeight() - 96); me.game.world.addChild(this.ground1, 11); me.game.world.addChild(this.ground2, 11); }, update: function () { return true; } }) Note that every game entity inherits from the me.ObjectEntity class. We need to pass the settings of the entity on the init method, telling it which image we will use from the resources along with the image measure. We also implement the update method for each Entity, telling it how it will behave during game time. Now we need to create our scenes. 
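Before moving on to the scenes, here is a compact sketch of that entity pattern in isolation. The CoinEntity below is not part of the Flappy Bird clone — it is a hypothetical collectable, written against the same melonJS 1.x calls already used above (me.ObjectEntity.extend, me.loader.getImage, me.pool), and it assumes a 'coin' image would have been added to game.resources:

var CoinEntity = me.ObjectEntity.extend({
    init: function(x, y) {
        var settings = {};
        // 'coin' is a hypothetical resource, not in the resources.js shown earlier
        settings.image = me.loader.getImage('coin');
        settings.width = 32;
        settings.height = 32;
        settings.spritewidth = 32;
        settings.spriteheight = 32;
        this.parent(x, y, settings);
        this.alwaysUpdate = true;
        this.type = 'coin';
    },
    update: function(dt) {
        // scroll left like the pipes do and despawn once off-screen
        this.pos.add(new me.Vector2d(-5 * me.timer.tick, 0));
        if (this.pos.x < -32) {
            me.game.world.removeChild(this);
        }
        return true;
    }
});

It would also need to be registered in game.js with me.pool.register("coin", CoinEntity, true) before being pulled into the world, exactly like the pipe and hit entities.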
The game is divided into: TitleScreen PlayScreen GameOverScreen We will separate the scenes into js files. First create a title.js file in the screens folder: game.TitleScreen = me.ScreenObject.extend({ init: function(){ this.font = null; }, onResetEvent: function() { me.audio.stop("theme"); game.data.newHiScore = false; me.game.world.addChild(new BackgroundLayer('bg', 1)); me.input.bindKey(me.input.KEY.ENTER, "enter", true); me.input.bindKey(me.input.KEY.SPACE, "enter", true); me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER); this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) { if (action === "enter") { me.state.change(me.state.PLAY); } }); //logo var logoImg = me.loader.getImage('logo'); var logo = new me.SpriteObject ( me.game.viewport.width/2 - 170, -logoImg, logoImg ); me.game.world.addChild(logo, 10); var logoTween = new me.Tween(logo.pos).to({y: me.game.viewport.height/2 - 100}, 1000).easing(me.Tween.Easing.Exponential.InOut).start(); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); me.game.world.addChild(new (me.Renderable.extend ({ // constructor init: function() { // size does not matter, it's just to avoid having a zero size // renderable this.parent(new me.Vector2d(), 100, 100); //this.font = new me.Font('Arial Black', 20, 'black', 'left'); this.text = me.device.touch ? 'Tap to start' : 'PRESS SPACE OR CLICK LEFT MOUSE BUTTON TO START ntttttttttttPRESS "M" TO MUTE SOUND'; this.font = new me.Font('gamefont', 20, '#000'); }, update: function () { return true; }, draw: function (context) { var measure = this.font.measureText(context, this.text); this.font.draw(context, this.text, me.game.viewport.width/2 - measure.width/2, me.game.viewport.height/2 + 50); } })), 12); }, onDestroyEvent: function() { // unregister the event me.event.unsubscribe(this.handler); me.input.unbindKey(me.input.KEY.ENTER); me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); me.game.world.removeChild(this.ground); } }); Then, create a play.js file on the same folder: game.PlayScreen = me.ScreenObject.extend({ init: function() { me.audio.play("theme", true); // lower audio volume on firefox browser var vol = me.device.ua.contains("Firefox") ? 
0.3 : 0.5; me.audio.setVolume(vol); this.parent(this); }, onResetEvent: function() { me.audio.stop("theme"); if (!game.data.muted){ me.audio.play("theme", true); } me.input.bindKey(me.input.KEY.SPACE, "fly", true); game.data.score = 0; game.data.steps = 0; game.data.start = false; game.data.newHiscore = false; me.game.world.addChild(new BackgroundLayer('bg', 1)); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); this.HUD = new game.HUD.Container(); me.game.world.addChild(this.HUD); this.bird = me.pool.pull("clumsy", 60, me.game.viewport.height/2 - 100); me.game.world.addChild(this.bird, 10); //inputs me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.SPACE); this.getReady = new me.SpriteObject( me.video.getWidth()/2 - 200, me.video.getHeight()/2 - 100, me.loader.getImage('getready') ); me.game.world.addChild(this.getReady, 11); var fadeOut = new me.Tween(this.getReady).to({alpha: 0}, 2000) .easing(me.Tween.Easing.Linear.None) .onComplete(function() { game.data.start = true; me.game.world.addChild(new PipeGenerator(), 0); }).start(); }, onDestroyEvent: function() { me.audio.stopTrack('theme'); // free the stored instance this.HUD = null; this.bird = null; me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); } }); Finally, the gameover.js screen: game.GameOverScreen = me.ScreenObject.extend({ init: function() { this.savedData = null; this.handler = null; }, onResetEvent: function() { me.audio.play("lose"); //save section this.savedData = { score: game.data.score, steps: game.data.steps }; me.save.add(this.savedData); // clay.io if (game.data.score > 0) { me.plugin.clay.leaderboard('clumsy'); } if (!me.save.topSteps) me.save.add({topSteps: game.data.steps}); if (game.data.steps > me.save.topSteps) { me.save.topSteps = game.data.steps; game.data.newHiScore = true; } me.input.bindKey(me.input.KEY.ENTER, "enter", true); me.input.bindKey(me.input.KEY.SPACE, "enter", false) me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER); this.handler = me.event.subscribe(me.event.KEYDOWN, function (action, keyCode, edge) { if (action === "enter") { me.state.change(me.state.MENU); } }); var gImage = me.loader.getImage('gameover'); me.game.world.addChild(new me.SpriteObject( me.video.getWidth()/2 - gImage.width/2, me.video.getHeight()/2 - gImage.height/2 - 100, gImage ), 12); var gImageBoard = me.loader.getImage('gameoverbg'); me.game.world.addChild(new me.SpriteObject( me.video.getWidth()/2 - gImageBoard.width/2, me.video.getHeight()/2 - gImageBoard.height/2, gImageBoard ), 10); me.game.world.addChild(new BackgroundLayer('bg', 1)); this.ground = new TheGround(); me.game.world.addChild(this.ground, 11); // share button var buttonsHeight = me.video.getHeight() / 2 + 200; this.share = new Share(me.video.getWidth()/3 - 100, buttonsHeight); me.game.world.addChild(this.share, 12); //tweet button this.tweet = new Tweet(this.share.pos.x + 170, buttonsHeight); me.game.world.addChild(this.tweet, 12); //leaderboard button this.leader = new Leader(this.tweet.pos.x + 170, buttonsHeight); me.game.world.addChild(this.leader, 12); // add the dialog witht he game information if (game.data.newHiScore) { var newRect = new me.SpriteObject( 235, 355, me.loader.getImage('new') ); me.game.world.addChild(newRect, 12); } this.dialog = new (me.Renderable.extend({ // constructor init: function() { // size does not matter, it's just to avoid having a zero size // renderable this.parent(new me.Vector2d(), 100, 100); this.font = new me.Font('gamefont', 40, 'black', 
'left'); this.steps = 'Steps: ' + game.data.steps.toString(); this.topSteps= 'Higher Step: ' + me.save.topSteps.toString(); }, update: function () { return true; }, draw: function (context) { var stepsText = this.font.measureText(context, this.steps); var topStepsText = this.font.measureText(context, this.topSteps); var scoreText = this.font.measureText(context, this.score); //steps this.font.draw( context, this.steps, me.game.viewport.width/2 - stepsText.width/2 - 60, me.game.viewport.height/2 ); //top score this.font.draw( context, this.topSteps, me.game.viewport.width/2 - stepsText.width/2 - 60, me.game.viewport.height/2 + 50 ); } })); me.game.world.addChild(this.dialog, 12); }, onDestroyEvent: function() { // unregister the event me.event.unsubscribe(this.handler); me.input.unbindKey(me.input.KEY.ENTER); me.input.unbindKey(me.input.KEY.SPACE); me.input.unbindPointer(me.input.mouse.LEFT); me.game.world.removeChild(this.ground); this.font = null; me.audio.stop("theme"); } });  Here is how the ScreenObjects works: First it calls the init constructor method for any variable initialization. onResetEvent is called next. This method will be called every time the scene is called. In our case the onResetEvent will add some objects to the game world stack. The onDestroyEvent acts like a garbage collector and unregisters bind events and removes some elements on the draw calls. Now, let's put it all together in the index.html file: <!DOCTYPE HTML> <html lang="en"> <head> <title>Clumsy Bird</title> </head> <body> <!-- the facebook init for the share button --> <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId : '213642148840283', status : true, xfbml : true }); }; (function(d, s, id){ var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) {return;} js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/pt_BR/all.js"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk')); </script> <!-- Canvas placeholder --> <div id="screen"></div> <!-- melonJS Library --> <script type="text/javascript" src="lib/melonJS-1.0.2.js" ></script> <script type="text/javascript" src="js/entities/HUD.js" ></script> <script type="text/javascript" src="js/entities/entities.js" ></script> <script type="text/javascript" src="js/screens/title.js" ></script> <script type="text/javascript" src="js/screens/play.js" ></script> <script type="text/javascript" src="js/screens/gameover.js" ></script> </body> </html> Step 3 - Flying! To run our game we will need a web server of your choice. If you have Python installed, you can simply type the following in your shell: $python -m SimpleHTTPServer Then you can open your browser at http://localhost:8000. If all went well, you will see the title screen after it loads, like in the following image: I hope you enjoyed this post!  About this author Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and is a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.

Wireshark: Working with Packet Streams

Packt
11 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Working with Packet Streams

While working on a network capture, there can be multiple network activities going on at the same time. Consider a small example where you are simultaneously browsing multiple websites through your browser. Several TCP data packets will be flowing across your network for all of these websites, so it becomes tedious to track the data packets belonging to a particular stream or session. This is where Follow TCP Stream comes into action.

When you are visiting multiple websites, each site maintains its own stream of data packets. By using the Follow TCP Stream option, we can apply a filter that locates only the packets specific to a particular stream. To view the complete stream, select your preferred TCP packet (for example, a GET or POST request); right-clicking on it will bring up the Follow TCP Stream option. Once you click on Follow TCP Stream, you will notice that a new filter rule is applied to Wireshark and the main capture window shows only the data packets that belong to that stream. This can be helpful in figuring out what requests and responses were generated during a particular session of network interaction.

If you take a closer look at the filter rule applied once you follow a stream, you will see a rule similar to tcp.stream eq <Number>, where Number is the stream number that has to be followed to get the various data packets.

An additional operation that can be carried out here is to save the data packets belonging to a particular stream. Once you have followed a particular stream, go to File | Save As and select Displayed to save only the packets belonging to the viewed stream.

Similar to following a TCP stream, we also have the option to follow UDP and SSL streams. The two options can be reached by selecting the particular protocol type (UDP or SSL) and right-clicking on it; the corresponding follow option will be highlighted according to the selected protocol.

The Wireshark menu icons also provide some quick navigation options to move through the captured packets. These icons include:

Go back in packet history (1): This option traces you back to the last analyzed/selected packet. Clicking on it multiple times keeps pushing you back through your selection history.
Go forward in packet history (2): This option pushes you forward in the series of packet analysis.
Go to a specific packet (3): This option is useful for jumping directly to a specific packet number.
Go to the first packet (4): This option takes you to the first packet in the current display of the capture window.
Go to the last packet (5): This option jumps your selection to the last packet in your capture window.

Summary

In this article, we learned how to work with packet streams.

Resources for Article:

Further resources on this subject:
BackTrack 5: Advanced WLAN Attacks [Article]
BackTrack 5: Attacking the Client [Article]
Debugging REST Web Services [Article]

Supporting hypervisors by OpenNebula

Packt
25 May 2012
7 min read
(For more resources on Open Source, see here.)

A host is a server that has the ability to run virtual machines using a special software component called a hypervisor, and it is managed by the OpenNebula frontend. The hosts do not all need to have a homogeneous configuration; it is possible to use different hypervisors on different GNU/Linux distributions in a single OpenNebula cluster. Using different hypervisors in your infrastructure is not just a technical exercise but assures you greater flexibility and reliability. A few examples where having multiple hypervisors would prove to be beneficial are as follows:

A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (let's say, for example, Windows 2000 Service Pack 4), but you can run it with hypervisor B without any problem.
You have a production infrastructure that is running a closed-source, free-to-use hypervisor, and during the next year the software house developing that hypervisor will request a license payment or declare bankruptcy due to an economic crisis.

The current version of OpenNebula will give you great flexibility regarding hypervisor usage since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future it will probably support both VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

Create a dedicated oneadmin UNIX account (which should have sudo privileges for executing particular tasks, for example, iptables/ebtables and the network hooks that we have configured).
The frontend and the hosts' hostnames should be resolved by a local DNS or a shared /etc/hosts file.
The oneadmin account on the frontend should be able to connect remotely through SSH to the oneadmin account on the hosts without a password.
Configure the shared network bridge that will be used by the VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands. If you did not create it during the operating system install, create a oneadmin user on the host by using the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now if you connect from the oneadmin account on the frontend to the oneadmin account on the host, you should get the shell prompt without entering any password, using the following command:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. However, keep in mind that if we are going to set up shared storage, we need to make sure that the UID number of the oneadmin user is the same on the frontend and on every other host. In other words, check with the id command that the oneadmin UID matches on the frontend and on the hosts.

Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server and ask for your permission to continue with the following message:

The authenticity of host host01 (192.168.254.2) can't be established.
Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client shows you the fingerprint of the remote server and asks for your permission to continue with the following message:

The authenticity of host host01 (192.168.254.2) can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (stored in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks. For this reason, you need to connect once from the oneadmin user on the frontend to every host, in order to save the fingerprints of the remote hosts in oneadmin's known_hosts. Not doing this will prevent OpenNebula from connecting to the remote hosts.

In large environments, this requirement may slow down the configuration of new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote SSH key, by adding the following to ~/.ssh/config:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set it up), you can manually manage the /etc/hosts file on every host, using entries such as the following:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Now you should be able to connect from one node to another by hostname using the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing a plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally providing DHCP and TFTP as well) that serves a small-scale network well. The OpenNebula frontend may be a good place to install it. For an Ubuntu/Debian installation, use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that /etc/resolv.conf looks similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts file will look similar to the following:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Configure any other hostnames in the hosts file on the frontend running dnsmasq. Configure /etc/resolv.conf on the other hosts as follows:

# IP where dnsmasq is installed
nameserver 192.168.0.2

Now you should be able to connect from one node to another using the plain hostname:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically be resolvable from every other host, thanks to dnsmasq.

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration:

# /etc/sudoers
Defaults env_reset
root ALL=(ALL) ALL
%sudo ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without having to enter the user password before each command. Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people. However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges can save you some headaches. This is a suggested configuration, but it is not required to run OpenNebula.
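If granting blanket root access worries you, a narrower rule is possible. The following /etc/sudoers.d/oneadmin entry is only a hypothetical sketch; limit it to the commands your drivers and network hooks actually call, and adjust the command paths to your distribution:

# /etc/sudoers.d/oneadmin - narrower alternative to full sudo access
oneadmin ALL=(root) NOPASSWD: /sbin/iptables, /sbin/ebtables, /sbin/ip, /sbin/brctl

Whichever approach you choose, keep the rule consistent across all hosts so the frontend can rely on the same behavior everywhere.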
Configuring network bridges

Every host should have its bridges configured with the same name. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By eliminating the bridge_ports parameter you get a purely virtual network for your VMs, but remember that without a physical network, VMs on different hosts cannot communicate with each other; a sketch of such a bridge follows.
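As an illustration only (the bridge name and addressing are hypothetical, and depending on your bridge-utils version an explicit bridge_ports none keeps the stanza recognized as a bridge), a host-local virtual bridge could be declared in /etc/network/interfaces like this:

# A bridge with no physical ports: traffic stays on this host
auto virt0
iface virt0 inet static
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    address 192.168.100.1
    netmask 255.255.255.0

VMs attached to virt0 can talk to each other and to the host, but not to VMs running on other hosts.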

SQL Query Basics in SAP Business One

Packt
18 May 2011
7 min read
Mastering SQL Queries for SAP Business One: utilize the power of SQL queries to bring Business Intelligence to your small to medium-sized business.

Who can benefit from using SQL queries in SAP Business One?

There are many different groups of SAP Business One users who may need this tool. To my knowledge, there is no standard organization chart for small and midsized enterprises; most of them are different, and you may often find one person handling more than one role. Check the following list to see if anything applies to you:

Do you need to check specific sales results over certain time periods, for certain areas, or for certain customers?
Do you want to know who the top vendors from certain locations for certain materials are?
Do you have a dynamically updated view of your sales force's performance in real time?
Do you often check whether approval procedures exactly match your expectations?
Have you tried to start building your SQL query but could not get it done properly?
Have you written SQL queries whose results are not always correct or up to your expectations?

Consultant

If you are an SAP Business One consultant, you have probably mastered SQL queries already. If that is not the case, however, this will greatly extend your consulting power. Being able to use SQL queries will probably become a mandatory skill for any SAP Business One consultant in the future.

Developer

If you are an SAP Business One add-on developer, these skills will be a good addition to your capabilities. You may find them useful even in other development work, since you often need to embed SQL queries in your code to complete a Software Development Kit (SDK) project.

SAP Business One end user

If you are simply a normal SAP Business One end user, you may need this even more, because SQL queries are best applied in companies with live SAP Business One data. You, as the end user, know better than anyone else what you are looking for to make Business Intelligence a daily routine job. It is very important for you to be able to create a query report so that you can map your requirements to a query in a timely manner.

SQL query and related terms

Before going into the details of SQL queries, I would like to briefly introduce some basic database concepts, because SQL is a database language for managing data in Relational Database Management Systems (RDBMS).

RDBMS

An RDBMS is a Database Management System based on the relational model. Relational is the key word here: data is stored in the form of tables, and the relationships among the data are also stored in the form of tables.

Table

A table is a key component within a database. One table or a group of tables represents one kind of data. For example, the table OSLP within SAP Business One holds all Sales Employee data. Tables are two-dimensional data storage placeholders. You need to be familiar with their usage and their relationships with each other. If you are familiar with Microsoft Excel, a worksheet in Excel is a kind of two-dimensional table. The relationships between tables may be more important than the tables themselves, because without relations nothing would be of any value. One important feature of SAP Business One is that it allows User Defined Tables (UDTs). All UDTs start with "@".

Field

A field is the lowest unit holding data within a table. A table can have many fields. A field is also called a column.
Field and column are interchangeable. A table is composed of records, and all records have the same structure with specific fields. One important concept in SAP Business One is the User Defined Field (UDF). All UDFs start with U_.

SQL

SQL is often referred to as Structured Query Language. It is pronounced as S-Q-L or as the word "Sequel". There are many different revisions and extensions of SQL. The current revision is SQL:2008, and the first major revision was SQL-92. Most SQL extensions are built on top of SQL-92.

T-SQL

Since SAP Business One is built on the Microsoft SQL Server database, SQL here means Transact-SQL, or T-SQL in brief. It is Microsoft's and Sybase's extension of standard SQL.

Subsets of SQL

There are three main subsets of the SQL language:

Data Control Language (DCL)
Data Definition Language (DDL)
Data Manipulation Language (DML)

Each subset of the SQL language has a special purpose: DCL is used to control access to data in a database, such as granting or revoking specified users' rights to perform specified tasks. DDL is used to define data structures, such as creating, altering, or dropping tables. DML is used to retrieve and manipulate data in tables, such as inserting, deleting, and updating data. SELECT, however, is a special statement belonging to this subset, even though it is a read-only command that does not manipulate data at all.

Query

Query is the most common operation in SQL, and it can refer to all three SQL subsets. You have to understand the risks of running any add, delete, or update queries that could potentially alter system tables, even if they only touch User Defined Fields. Only SELECT queries are legitimate against SAP Business One system tables.

Data dictionary

In order to create working SQL queries, you not only need to know how to write them, but also need a clear view of the relationships between tables and where to find the information required. As you know, SAP Business One is built on Microsoft SQL Server, and a data dictionary is a great tool for creating SQL queries. Before we start, a good data dictionary is essential for the database. Fortunately, there is a very good reference called SAP Business One Database Tables Reference, readily available through the SAP Business One SDK Help Centre. You can find the details in the following section.

SAP Business One—Database tables reference

The database tables reference file named REFDB.CHM is the one we are looking for. The SDK is usually installed on the same server as the SAP Business One database server. Normally, the file path is X:\Program Files\SAP\SAP Business One SDK\Help, where "X" is the drive where your SAP Business One SDK is installed. In this help file, we find the same categories as in the SAP Business One menu, with all 11 modules. The tables related to each module are listed one by one. There are tree structures in the help file where header tables have row tables. Each table provides a list of all the fields in the table along with their description, type, size, related tables, default value, and constraints.

Naming convention of tables for SAP Business One

To help you understand the previously mentioned data dictionary quickly, we will go through the naming conventions for tables in SAP Business One.

Three-letter abbreviations

Most tables in SAP Business One have four letters. The only exceptions are number-ending tables where the number is greater than nine; those tables have five letters.
To understand table names easily, SAP Business One uses three-letter abbreviations. Some of the commonly used abbreviations are listed as follows (a sample query using them appears after the list):

ADM: Administration
ATC: Attachments
CPR: Contact Persons
CRD: Business Partners
DLN: Delivery Notes
HEM: Employees
INV: Sales Invoices
ITM: Items
ITT: Product Trees (Bill of Materials)
OPR: Sales Opportunities
PCH: Purchase Invoices
PDN: Goods Receipt PO
POR: Purchase Orders
QUT: Sales Quotations
RDR: Sales Orders
RIN: Sales Credit Notes
RPC: Purchase Credit Notes
SLP: Sales Employees
USR: Users
WOR: Production Orders
WTR: Stock Transfers
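As a simple illustration of how these pieces fit together (this query is not from the book; it assumes the standard ORDR sales order header table and the OSLP sales employees table, so verify the field names against the Database Tables Reference before using it), the following SELECT sums order totals per sales employee for the first quarter of 2011:

SELECT T1.SlpName AS [Sales Employee],
       SUM(T0.DocTotal) AS [Total Orders]
FROM ORDR T0
INNER JOIN OSLP T1 ON T0.SlpCode = T1.SlpCode
WHERE T0.DocDate >= '2011-01-01'
  AND T0.DocDate < '2011-04-01'
GROUP BY T1.SlpName
ORDER BY [Total Orders] DESC

Because it is a plain SELECT, it reads the system tables without modifying them, in line with the advice given earlier.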

The Art of Android Development Using Android Studio

Packt
28 Oct 2015
5 min read
In this article by Mike van Drongelen, the author of the book Android Studio Cookbook, you will see why Android Studio is the number one IDE for developing Android apps. It is available for free to anyone who wants to develop professional Android apps. Android Studio is not just a stable and fast IDE (based on JetBrains IntelliJ IDEA); it also comes with cool stuff such as Gradle, better refactoring methods, and a much better layout editor, to name just a few. If you have been using Eclipse before, then you're going to love this IDE.

Android Studio tip
Want to refactor your code? Use the shortcut Ctrl + T (for Windows: Ctrl + Alt + Shift + T) to see what options you have. You can, for example, rename a class or method, or extract code from a method.

Any type of Android app can be developed using Android Studio. Think of apps for phones, phablets, tablets, TVs, cars, glasses, and other wearables such as watches. Or consider an app that uses a cloud-based backend such as Parse or App Engine, a watch face app, or even a complete media center solution for TV.

So, what is in the book? The sky is the limit, and the book will help you make the right choices while developing your apps. For example, on smaller screens, provide smart navigation and use fragments to make apps look great on a tablet too. Or see how content providers can help you manage and persist data, and how to share data among applications. The observer pattern that comes with content providers will save you a lot of time.

Android Studio tip
Do you often need to return to a particular place in your code? Create a bookmark with F3 (for Windows: F11). To display a list of bookmarks to choose from, use the shortcut Cmd + F3 (for Windows: Shift + F11).

Material design

The book will also elaborate on material design. Create cool apps using the CardView and RecyclerView widgets. Find out how to create special effects and how to perform great transitions. A chapter is dedicated to the Camera2 API and how to capture and preview photos. In addition, you will learn how to apply filters and how to share the results on Facebook. The following image is an example of one of the results:

Android Studio tip
Are you looking for something? Press Shift twice and start typing what you're searching for. Or, to display all recent files, use the Cmd + E shortcut (for Windows: Ctrl + E).

Quality and performance

You will learn about patterns and how support annotations can help you improve the quality of your code. Testing your app is just as important as developing one, and it will take your app to the next level. Aim for a five-star rating in the Google Play Store later. The book shows you how to do unit testing based on JUnit or Robolectric and how to use code analysis tools such as Android Lint. You will learn about memory optimization using the Android Device Monitor, detect issues, and learn how to fix them, as shown in the following screenshot:

Android Studio tip
You can easily extract code from a method that has become too large. Just mark the code that you want to move and use the shortcut Cmd + Alt + M (for Windows: Ctrl + Alt + M).

Having a physical Android device to test your apps is strongly recommended, but with thousands of Android devices available, testing on real devices could be pretty expensive. Genymotion is a real, fast, and easy-to-use emulator that comes with many real-world device configurations. Did all your unit tests succeed? Are there no more OutOfMemoryExceptions?
No memory leaks found? Then it is about time to distribute your app to your beta testers. The final chapters explain how to configure your app for a beta release by creating the build types and build flavours that you need. Finally, distribute your app to your beta testers using Google Play to learn from their feedback.

Did you know? Android Marshmallow (Android 6.0) introduces runtime permissions, which will change the way users grant permissions to an app.

The book The Art of Android Development Using Android Studio contains around 30 real-world recipes, clarifying all the topics being discussed. It is a great start for programmers who have been using Eclipse for Android development before, but it is also suitable for new Android developers who already know Java syntax.

Summary

The book nicely explains all the things you need to know to find your way around Android Studio and how to create high-quality and great-looking apps.

Resources for Article:

Further resources on this subject:

Introducing an Android platform [article]
Testing with the Android SDK [article]
Android Virtual Device Manager [article]

Defining REST and its various architectural styles

Sugandha Lahoti
11 Jul 2019
9 min read
RESTful web services are services built according to REST principles. The idea is to have them designed to work well on the web. But what is REST? Let's start from the beginning by defining REST.

This article is taken from the book Hands-On RESTful Web Services with TypeScript 3 by Biharck Muniz Araújo. This book is a step-by-step guide that will help you design, develop, scale, and deploy RESTful APIs with TypeScript 3 and Node.js. In this article we will learn what REST is and talk about the various REST architectural styles.

What is REST?

The REST (Representational State Transfer) style is a set of software engineering practices that contains constraints that should be used in order to create web services in distributed hypermedia systems. REST is not a tool, and neither is it a language; in fact, REST is agnostic of protocols, components, and languages. It is important to say that REST is an architectural style and not a toolkit. REST provides a set of design rules in order to create stateless services that are exposed as resources and, in some cases, sources of specific information such as data and functionality. Each resource is identified by a unique Uniform Resource Identifier (URI). REST describes simple interfaces that transmit data over a standardized interface such as HTTP or HTTPS, without any additional messaging layer such as the Simple Object Access Protocol (SOAP). The consumer accesses REST resources via a URI using HTTP methods (this will be explained in more detail later). After the request, a representation of the requested resource is expected to be returned. The representation of any resource is, in general, a document that reflects the current or intended state of the requested resource.

REST architectural styles

The REST architectural style describes six constraints. These constraints were originally described by Roy Fielding in his Ph.D. thesis. They include the following:

Uniform interface
Stateless
Cacheable
Client-server architecture
A layered system
Code on demand (optional)

We will discuss them all in detail in the following subsections.

Uniform interface

Uniform interface is a constraint that describes a contract between clients and servers. One of the reasons to create an interface between them is to allow each part to evolve independently of the other. Once there is a contract agreed between the client and server parts, they can start their work independently because, at the end of the day, the way they communicate is firmly based on the interface.

The uniform interface is divided into four main groups, called principles:

Resource-based
The manipulation of resources using representations
Self-descriptive messages
Hypermedia as the Engine of Application State (HATEOAS)

Let's talk more about them.

Resource-based

One of the key things when a resource is being modeled is the URI definition. The URI is what defines a resource as unique. This representation is what will be returned to clients. If you decide to perform a GET on the order URI, the resource that returns should be a representation of an order containing the order ID, creation date, and so on. The representation should be in JSON or XML. Here is a JSON example:

{
  id : 1234,
  creation-date : "1937-01-01T12:00:27.87+00:20",
  any-other-json-fields...
}
Here is an XML example:

<order>
  <id>1234</id>
  <creation-date>1937-01-01T12:00:27.87+00:20</creation-date>
  any-other-xml-fields
</order>

The manipulation of resources using representations

Following the happy path, when the client makes a request to the server, the server responds with a representation of the current state of the resource. This resource can be manipulated by the client. The client can request the kind of representation it desires, such as JSON, XML, or plain text. When the client needs to specify the representation, the HTTP Accept header is used. Here you can see an example in plain text:

GET https://<HOST>/orders/12345
Accept: text/plain

The next one is in JSON format:

GET https://<HOST>/orders/12345
Accept: application/json

Self-descriptive messages

In general, the information provided by the RESTful service contains all the information about the resource that the client should be aware of. There is also a possibility of including more information than the resource itself; this information can be included as a link. In HTTP, the content-type header is used, and the agreement needs to be bilateral: the requestor needs to state the media type it is expecting, and the receiver must agree about what the media type refers to. Some examples of media types are listed in the following table:

Extension | Document Type | MIME type
.aac | AAC audio file | audio/aac
.arc | Archive document | application/octet-stream
.avi | Audio Video Interleave (AVI) | video/x-msvideo
.css | Cascading Style Sheets (CSS) | text/css
.csv | Comma-separated values (CSV) | text/csv
.doc | Microsoft Word | application/msword
.epub | Electronic publication (EPUB) | application/epub+zip
.gif | Graphics Interchange Format (GIF) | image/gif
.html | HyperText Markup Language (HTML) | text/html
.ico | Icon format | image/x-icon
.ics | iCalendar format | text/calendar
.jar | Java Archive (JAR) | application/java-archive
.jpeg | JPEG images | image/jpeg
.js | JavaScript (ECMAScript) | application/javascript
.json | JSON format | application/json
.mpeg | MPEG video | video/mpeg
.mpkg | Apple Installer Package | application/vnd.apple.installer+xml
.odt | OpenDocument text document | application/vnd.oasis.opendocument.text
.oga | OGG audio | audio/ogg
.ogv | OGG video | video/ogg
.ogx | OGG | application/ogg
.otf | OpenType font | font/otf
.png | Portable Network Graphics | image/png
.pdf | Adobe Portable Document Format (PDF) | application/pdf
.ppt | Microsoft PowerPoint | application/vnd.ms-powerpoint
.rar | RAR archive | application/x-rar-compressed
.rtf | Rich Text Format (RTF) | application/rtf
.sh | Bourne shell script | application/x-sh
.svg | Scalable Vector Graphics (SVG) | image/svg+xml
.tar | Tape Archive (TAR) | application/x-tar
.ts | TypeScript file | application/typescript
.ttf | TrueType Font | font/ttf
.vsd | Microsoft Visio | application/vnd.visio
.wav | Waveform Audio Format | audio/x-wav
.zip | ZIP archive | application/zip
.7z | 7-zip archive | application/x-7z-compressed

There is also a possibility of creating custom media types. A complete list can be found here.

HATEOAS

HATEOAS is a way for the client to interact with the response by navigating within it through the hierarchy in order to get complementary information.
For example, here the client makes a GET call to the order URI:

GET https://<HOST>/orders/1234

The response comes with a navigation link to the items within the 1234 order, as in the following code block:

{
  id : 1234,
  any-other-json-fields...,
  "links": [
    {
      "href": "1234/items",
      "rel": "items",
      "type" : "GET"
    }
  ]
}

What happens here is that the links field allows the client to navigate to 1234/items in order to see all the items that belong to the 1234 order.

Stateless

Essentially, stateless means that the state necessary to handle the request is contained within the request itself, and it is not persisted anywhere in a way that could be recovered later. Basically, the URI is the unique identifier of the destination, and the body contains the state, or changeable state, of the resource. In other words, after the server handles the request, the state could change, and the server sends it back to the requestor with the appropriate HTTP status code.

In contrast to the default session scope found in a lot of existing systems, the REST client must be the one responsible for providing all necessary information to the server, considering that the server should be idempotent. Statelessness allows high scalability, since the server does not maintain sessions. Another interesting point to note is that the load balancer does not care about sessions at all in stateless systems. In other words, the client always needs to pass the whole request in order to get the resource, because the server is not allowed to hold any previous request state.

Cacheable

The aim of caching is to never have to generate the same response more than once. The key benefits of using this strategy are an increase in speed and a reduction in server processing. Essentially, the request flows through a cache or a series of caches, such as local caching, proxy caching, or reverse proxy caching, in front of the service hosting the resource. If any of them matches any criteria during the request (for example, the timestamp or client ID), the data is returned from the cache layer, and if the caches cannot satisfy the request, the request goes to the server.

Client-server architecture

The REST style separates clients from the server. In short, whenever it is necessary to replace either the server or the client side, things should flow naturally, since there is no coupling between them. The client side should not care about data storage, and the server side should not care about the interface at all.

A layered system

Each layer must work independently and interact only with the layers directly connected to it. This strategy allows a request to be passed along without bypassing other layers. For instance, when scaling a service is desired, you might use a proxy working as a load balancer; that way, the incoming requests are delivered to the appropriate server instance. In that case, the client side does not need to understand how the server does its work; it just makes requests to the same URI. The cache is another example of a layer, and the client does not need to understand how it works either.

Code on demand

In summary, this optional pattern allows the client to download and execute code from the server on the client side. The constraint says that this strategy improves scalability, since the code can execute independently of the server on the client side.

In this post, we discussed the various REST architectural styles, based on six constraints.
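To make these constraints a little more concrete, here is a minimal sketch in TypeScript using Express (the book targets TypeScript and Node.js, but this particular route, the in-memory data, and the field names are invented for illustration). It returns a JSON representation of an order, adds a HATEOAS-style links array, and marks the response as cacheable:

import express, { Request, Response } from "express";

const app = express();

// In-memory stand-in for a real data source (illustrative only)
const orders: Record<string, { id: number; creationDate: string }> = {
  "1234": { id: 1234, creationDate: "1937-01-01T12:00:27.87+00:20" },
};

app.get("/orders/:id", (req: Request, res: Response) => {
  const order = orders[req.params.id];
  if (!order) {
    // Self-descriptive message: the status code and body explain the outcome
    res.status(404).json({ error: "order not found" });
    return;
  }
  // Cacheable: intermediaries and clients may reuse this response for 60 seconds
  res.set("Cache-Control", "public, max-age=60");
  // Uniform interface + HATEOAS: the representation carries its own links
  res.json({
    ...order,
    links: [{ href: `${order.id}/items`, rel: "items", type: "GET" }],
  });
});

app.listen(3000);

Because the route holds no session state, any instance behind a load balancer can serve the next request, which is exactly the stateless and layered-system behaviour described above.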
To know more about best practices for RESTful design, such as API endpoint organization, different ways to expose an API service, and how to handle large datasets, check out the book Hands-On RESTful Web Services with TypeScript 3.

7 reasons to choose GraphQL APIs over REST for building your APIs
Which Python framework is best for building RESTful APIs? Django or Flask?
Understanding advanced patterns in RESTful API [Tutorial]

Łukasz Langa at PyLondinium19: “If Python stays synonymous with CPython for too long, we’ll be in big trouble”

Sugandha Lahoti
13 Aug 2019
7 min read
PyLondinium, the conference for Python developers, was held in London from the 14th to the 16th of June, 2019. At the Sunday keynote, Łukasz Langa, the creator of Black (the Python code formatter) and the Python 3.8 release manager, spoke on where Python could be in 2020 and how Python developers should try new browser- and mobile-friendly versions of Python.

Python is an extremely expressive language, says Łukasz. “When I first started I was amazed how much you can accomplish with just a few lines of code especially compared to Java. But there are still languages that are even more expressive and enables even more compact notation.” So what makes Python special? Python is runnable pseudocode; it reads like English; it is very elegant. “Our responsibility as developers,” Łukasz mentions, “is to make Python’s runnable pseudocode convenient to use for new programmers.” Python has gotten much bigger, more stable, and more complex in the last decade. However, the lowest-hanging fruit, Łukasz says, has already been picked, and what's left is the maintenance of an increasingly fossilizing interpreter and a stunted library. This maintenance is both tedious and tricky, especially for a dynamic interpreted language like Python.

Python being a community-run project is both a blessing and a curse

Łukasz talks about how Python is the biggest community-run programming language on the planet. Other programming languages with similar or larger market penetration are either run by single corporations or multiple committees. Being a community project is both a blessing and a curse for Python, says Łukasz. It's a blessing because it's truly free from shareholder pressure and market swings. It's a curse because almost the entire core developer team is volunteering their time and effort for free, and while the Python Software Foundation is graciously funding infrastructure and events, it does not currently employ any core developers. Since there is both Python and software right in the name of the foundation, Łukasz says he wants that to change. “If you don't pay people, you have no influence over what they work on. Core developers often choose problems to tackle based on what inspires them personally. So we never had an explicit roadmap on where Python should go and what problems or developers should focus on,” he adds. Python is no longer governed by a BDFL, says Łukasz: “My personal hope is that the steering council will be providing visionary guidance from now on and will present us with an explicit roadmap on where we should go.”

Interesting and dead projects in Python

Łukasz talked about mypyc and invited people to work on and contribute to this project, as well as organizations to sponsor it. Mypyc is a compiler that compiles mypy-annotated, statically typed Python modules into CPython C extensions. This restricts the Python language to enable compilation; mypyc supports a subset of Python. He also mentioned MicroPython, a Kickstarter-funded subset of Python optimized to run on microcontrollers and other constrained environments. It is a compatible runtime for microcontrollers with very little memory (16 kilobytes of RAM and 256 kilobytes of code memory) and minimal computing power. He also talks about micro:bit. He also mentions many dead, dying, or defunct projects for alternative Python interpreters, including Unladen Swallow, Pyston, and IronPython. He talked about PyPy, the JIT Python compiler written in Python.
Łukasz mentions that since PyPy is written in Python 2, it is one of the most complex Python 2 applications in the industry. “This is at risk at the moment,” says Łukasz, “since it's a large Python 2 codebase that needs updating to Python 3. Without a tremendous investment, it is very unlikely to ever migrate to Python 3.” Also, trying to replicate CPython quirks and bugs requires a lot of effort.

Python should be aligned with where developer trends are shifting

Łukasz believes that a stronger division between the language and the reference implementation is important in the case of Python. He declared, “If Python stays synonymous with CPython for too long, we'll be in big trouble.” This is because CPython is not available where developer trends are shifting. For the web, the lingua franca is now JavaScript. For the two biggest operating systems on mobile, there are Swift, the modern take on Objective-C, and Kotlin, the modern take on Java. For VR, AR, and 3D games, there is C#, provided by Unity. While Python is growing fast, it's not winning ground in two big areas: the browser and mobile. Python is also slowly losing ground in the field of systems orchestration, where Go is gaining traction. He adds, “if there were not the rise of machine learning and artificial intelligence, Python would have not survived the transition between Python 2 and Python 3.”

Łukasz mentions that providing a clear, supported, and official option for the client-side web is what Python needs in order to satisfy the legion of people who want to use it. He says, “for Python the programming language to reach new heights, we need a new kind of Python. One that caters to where developer trends are shifting - mobile, web, VR, AR, and 3D games. There should be more projects experimenting with Python for these platforms. This especially means trying restricted versions of the language because they are easier to optimize.”

We need a Python compiler for the web and Python on mobile

Łukasz talked about the need to shift to where developer trends are shifting. He says we need a Python compiler for the web - something that compiles your Python code to the web platform directly. He also adds that, to be viable for professional production use, Python on the web must not be orders of magnitude slower than the default option (JavaScript), which is already better supported and has better documentation and training. Similarly, for mobile he wants small Python applications, so that apps run fast and offer quick user interactions. He gives the example of the Go programming language, stating how “one of Go's claims to fame is the fact that they shipped static binaries so you only have one file. You can choose to still use containers but it's not necessary; you don't have virtualenvs, you don't have pip installs, and you don't have environments that you have to orchestrate.” Łukasz further adds that the areas of modern focus where Python currently has no penetration don't require full compatibility with CPython. Starting out with a familiar subset of Python that looks like Python to the user would simplify the development of a new runtime or compiler a lot, and would potentially even fit the target platform better.

What if I want to work on CPython?

Łukasz says that developers can still work on CPython if they want to. “I'm not saying that CPython is a dead end; it will forever be an important runtime for Python. New people are still both welcome and needed in fact.
However, working on CPython today is different from working on it ten years ago; the runtime is mission-critical in many industries, which is why developers must be extremely careful.” Łukasz sums up his talk by declaring, “I strongly believe that enabling Python on new platforms is an important job. I'm not saying Python as the entire programming language should just abandon what it is now. I would prefer for us to be able to keep Python exactly as it is and just move it to all new platforms. Albeit, it is not possible without multi-million dollar investments over many years.”

The talk was well appreciated by Twitter users, with people lauding it as ‘fantastic’ and ‘enlightening’.

https://twitter.com/WillingCarol/status/1156411772472971264
https://twitter.com/freakboy3742/status/1156365742435995648
https://twitter.com/jezdez/status/1156584209366081536

You can watch the full keynote on YouTube.

NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

Configuring Apache and Nginx

Packt
19 Jul 2010
8 min read
(For more resources on Nginx, see here.)

There are basically two main parts involved in the configuration: one relating to Apache and one relating to Nginx. Note that while we have chosen to describe the process for Apache in particular, this method can be applied to any other HTTP server. The only point that differs is the exact configuration sections and directives that you will have to edit. Otherwise, the principle of the reverse proxy can be applied, regardless of the server software you are using.

Reconfiguring Apache

There are two main aspects of your Apache configuration that will need to be edited in order to allow both Apache and Nginx to work together at the same time. But let us first clarify where we are coming from, and what we are going towards.

Configuration overview

At this point, you probably have the following architecture set up on your server:

A web server application running on port 80, such as Apache
A dynamic server-side script processing application such as PHP, communicating with your web server via CGI or FastCGI, or running as a server module

The new configuration that we are going towards will resemble the following:

Nginx running on port 80
Apache or another web server running on a different port, accepting requests coming from local sockets only
The script processing application configuration remaining unchanged

As you can tell, only two main configuration changes will be applied to Apache, as well as to any other web server that you are running. Firstly, change the port number in order to avoid conflicts with Nginx, which will then be running as the frontend server. Secondly (although this is optional), you may want to disallow requests coming from the outside and only allow requests forwarded by Nginx. Both configuration steps are detailed in the next sections.

Resetting the port number

Depending on how your web server was set up (manual build, or automatic configuration from server panel managers such as cPanel, Plesk, and so on), you may find yourself with a lot of configuration files to edit. The main configuration file is often found in /etc/httpd/conf/ or /etc/apache2/, and there might be more depending on how your configuration is structured. Some server panel managers create extra configuration files for each virtual host.

There are three main elements you need to replace in your Apache configuration:

The Listen directive is set to listen on port 80 by default. You will have to replace that port with another one, such as 8080. This directive is usually found in the main configuration file.
You must make sure that the following configuration directive is present in the main configuration file: NameVirtualHost A.B.C.D:8080, where A.B.C.D is the IP address of the main network interface through which server communications go.
The port you just selected needs to be reflected in all your virtual host configuration sections, as described below.

The virtual host sections must be transformed from the following template:

<VirtualHost A.B.C.D:80>
    ServerName example.com
    ServerAlias www.example.com
    [...]
</VirtualHost>

to the following:

<VirtualHost A.B.C.D:8080>
    ServerName example.com:8080
    ServerAlias www.example.com
    [...]
</VirtualHost>

In this example, A.B.C.D is the IP address of the virtual host and example.com is the virtual host's name. The port must be edited on the first two lines.

Accepting local requests only

There are many ways you can restrict Apache to accept only local requests, denying access to the outside world. But first, why would you want to do that?
As an extra layer positioned between the client and Apache, Nginx provides a certain comfort in terms of security. Visitors no longer have direct access to Apache, which decreases the potential risk regarding any security issues the web server may have. Globally, it's not necessarily a bad idea to only allow access to your frontend server.

The first method consists of changing the listening network interface in the main configuration file. The Listen directive of Apache lets you specify a port, but also an IP address, although, by default, no IP address is selected, resulting in communications coming in from all interfaces. All you have to do is replace the Listen 8080 directive with Listen 127.0.0.1:8080; Apache should then only listen on the local IP address. If you do not host Apache on the same server, you will need to specify the IP address of the network interface that can communicate with the server hosting Nginx.

The second alternative is to establish per-virtual-host restrictions:

<VirtualHost A.B.C.D:8080>
    ServerName example.com:8080
    ServerAlias www.example.com
    [...]
    Order deny,allow
    allow from 127.0.0.1
    allow from 192.168.0.1
    deny from all
</VirtualHost>

Using the allow and deny Apache directives, you are able to restrict the IP addresses allowed to access your virtual hosts. This allows for a finer configuration, which can be useful in case some of your websites cannot be fully served by Nginx. Once all your changes are done, don't forget to reload the server to make sure the new configuration is applied, for example with service httpd reload or /etc/init.d/httpd reload.

Configuring Nginx

There are only a couple of simple steps to establish a working configuration of Nginx, although it can be tweaked more accurately, as seen in the next section.

Enabling proxy options

The first step is to enable proxying of requests in your location blocks. Since the proxy_pass directive cannot be placed at the http or server level, you need to include it in every single place that you want to be forwarded. Usually, a location / { } fallback block suffices, since it encompasses all requests except those that match location blocks containing a break statement. Here is a simple example using a single static backend hosted on the same server:

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

In the following example, we make use of an upstream block, allowing us to specify multiple servers:

upstream apache {
    server 192.168.0.1:80;
    server 192.168.0.2:80;
    server 192.168.0.3:80 weight=2;
    server 192.168.0.4:80 backup;
}

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        proxy_pass http://apache;
    }
}

So far, with such a configuration, all requests are proxied to the backend server; we are now going to separate the content into two categories:

Dynamic files: Files that require processing before being sent to the client, such as PHP, Perl, and Ruby scripts, will be served by Apache
Static files: All other content that does not require additional processing, such as images, CSS files, static HTML files, and media, will be served directly by Nginx

We thus have to separate the content somehow, to be provided by either server.

Separating content

In order to establish this separation, we can simply use two different location blocks: one that will match the dynamic file extensions and another one encompassing all the other files.
This example passes requests for .php files to the proxy:

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location ~* \.php.$ {
        # Proxy all requests with a URI ending with .php*
        # (includes PHP, PHP3, PHP4, PHP5...)
        proxy_pass http://127.0.0.1:8080;
    }
    location / {
        # Your other options here for static content
        # for example cache control, alias...
        expires 30d;
    }
}

This method, although simple, will cause trouble with websites using URL rewriting. Most Web 2.0 websites now use links that hide file extensions, such as http://example.com/articles/us-economy-strengthens/; some even replace file extensions with links resembling the following: http://example.com/us-economy-strengthens.html. When building a reverse-proxy configuration, you have two options:

Port your Apache rewrite rules to Nginx (they are usually found in the .htaccess file at the root of the website), so that Nginx knows the actual file extension of the request and proxies it to Apache correctly.
If you do not wish to port your Apache rewrite rules, the default behavior shown by Nginx is to return 404 errors for such requests. However, you can alter this behavior in multiple ways, for example, by handling 404 requests with the error_page directive or by testing the existence of files before serving them. Both solutions are detailed below.

Here is an implementation of this mechanism, using the error_page directive:

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        # Your static files are served here
        expires 30d;
        [...]
        # For 404 errors, submit the query to the @proxy
        # named location block
        error_page 404 @proxy;
    }
    location @proxy {
        proxy_pass http://127.0.0.1:8080;
    }
}

Alternatively, you can make use of the if directive from the Rewrite module:

server {
    server_name .example.com;
    root /home/example.com/www;
    [...]
    location / {
        # If the requested file extension ends with .php,
        # forward the query to Apache
        if ($request_filename ~* \.php.$) {
            break; # prevents further rewrites
            proxy_pass http://127.0.0.1:8080;
        }
        # If the requested file does not exist,
        # forward the query to Apache
        if (!-f $request_filename) {
            break; # prevents further rewrites
            proxy_pass http://127.0.0.1:8080;
        }
        # Your static files are served here
        expires 30d;
    }
}

There is no real performance difference between the two solutions, as they will transfer the same amount of requests to the backend server. You should work on porting your Apache rewrite rules to Nginx if you are looking for optimal performance.
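As a rough sketch of what porting a rewrite rule can look like (this example is not taken from the article; the front-controller rule and file layout are hypothetical), consider a typical .htaccess fragment that sends every request for a non-existent file to index.php:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

In Nginx, the same intent can be expressed with try_files, falling back to the proxied backend only when no static file matches:

location / {
    # Serve static content directly and let clients cache it
    expires 30d;
    # $uri: exact file, $uri/: directory index, @proxy: dynamic fallback
    try_files $uri $uri/ @proxy;
}

location @proxy {
    proxy_pass http://127.0.0.1:8080;
}

This keeps static files entirely inside Nginx, while Apache only ever sees the requests that genuinely need PHP.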