
Unleashing the powers of Lumion

Packt
12 Jun 2014
22 min read
Lumion supports a direct import of SketchUp files, which means we don't need to convert our 3D model to any special format. If you are working with other modeling packages, such as 3ds Max, Maya, or Blender, you need a different approach: export a COLLADA or FBX file, as these two are the best formats to work with in Lumion. In particular situations, we may need to use our own animations; you may be aware that we can import basic animations into Lumion from 3D modeling packages such as 3ds Max.

Lumion uses the flexibility of shortcuts to improve the control we have in the 3D world. Once we import a 3D model, or even if we use a model from the Lumion library, we need to adjust its position, orientation, and scale. But keep in mind the importance of organizing your 3D world using layers as well. They are free, and they become very useful when we need to hide some objects to focus our attention on a specific detail, or when we use the Hide layer and Show layer effects.

At a certain point in our project, we will need to go back and undo a mistake or something that doesn't look as expected. Lumion offers only a very limited undo option. When working with Lumion, and in particular when organizing our 3D world and arranging and adjusting the 3D models, we might find it useful to lock a 3D model's position; this helps us avoid selecting and unintentionally moving other, already placed 3D models.

From the beginning of a Lumion project, it is very important to organize and categorize our 3D world. Sometimes we don't do this straightaway, and only after importing some 3D models and adding content from the Lumion library do we realize the need to organize the project in a better way. We can use layers and assign the existing 3D models to a new layer.
Over the course of a project, it is very common for certain 3D models to be updated, and we need to bring those updates into our Lumion project. This can be a rather daunting task, given that most of the time these updates happen after we have already assigned materials to the imported 3D model. Sometimes we may even face a radical change in a 3D model we have already imported. The worst-case scenario is reassigning all the materials to the new 3D model and perhaps relocating it to the correct place. However, Lumion has an option that helps us avoid reassigning all the materials, or at least most of them, and the 3D models will stay in exactly the same place.

After importing a 3D model created in our favorite 3D modeling package, it is likely that we will want to enhance the look and the environment of the project with additional 3D models. A lack of detail and content will definitely create a lifeless and dull image or video. Lumion will not only help you learn how to place content from the Lumion library, but also how to discover what you need.

Something indispensable for a smooth workflow in every project is the pair of copy and paste tools. Imagine having to go to the import library, place the 3D model again, and assign a material every single time we need a copy. Lumion doesn't have the standard copy and paste tool we find in most software, but there is a way to emulate this feature: Lumion lets you copy a 3D model already present in your scene, avoiding the trouble of going back to the Lumion library.

Removing or deleting a 3D model is part of the process of any project. This can be particularly tricky when our 3D world is crowded with 3D models and there is a possibility of selecting and deleting the wrong one.
Nevertheless, Lumion does a great job in this area because it protects you from deleting something by mistake. All projects are different, and this typically brings unique challenges. Sometimes a building or the environment is really intricate, and this can cause difficulties when we are placing 3D models from the Lumion library. Lumion recognizes surfaces and will avoid intersecting them with any 3D model you want to place in your world. However, there are times when this feature gets in our way and makes placing a 3D model harder.

A project needs life, but placing dozens of models one by one is a massive task. Lumion helps us populate our 3D world by providing the option to place more than one copy at a time; by means of a shortcut, we can place 10 copies of a 3D model.

The world we live in is bursting with diversity and variety; consequently, our eyes are incredibly good at picking up repetition. Sometimes, even if we cannot explain why, we know something is wrong with a picture because it doesn't look natural. When we are working on a big project, such repetitions stand out almost immediately. We can use a feature in Lumion that randomizes the size of 3D models while placing them.

With more than 2,000 models, we can say that Lumion has everything we need for our projects. Although Lumion has predefined models, that doesn't mean we can't modify some basic options. We can modify simple settings such as color and texture, but keep in mind that this isn't possible for every single 3D model.

In almost every project, we have some autonomy in placing the 3D models and organizing the 3D world. Nevertheless, there are times when we need more accuracy than one can get with the mouse; Lumion's coordinate system can assist us with this task. Typically, we focus our attention on selecting separate 3D models so that we can make exact and accurate adjustments.
Eventually, we will need to make alterations and modifications to multiple 3D objects, and Lumion shows you how to do this, along with a practical application. While working with selections in Lumion, every time we make a selection and transform a 3D model, we need to choose the correct category. There are occasions when we need to select and manipulate 3D models that belong to different categories, and Lumion can select 3D models from different categories in one go. Initially, we may find Lumion restrictive in the way it works with the content placed in our project, because we need to select the correct category every time we want to work with a 3D model. However, we can bypass these restrictions by using the option to select and move any 3D model in our world without selecting a category.

As mentioned earlier, the world we live in is full of diversity and randomness; however, on almost every project there are situations when we need to place content in an orderly way. A quick example is when we need to place garden lamps along a path, spaced equally. This can be done in Lumion.

Lumion is a unique application not only because of the incredible quality of what we can do with it, but also because it has features that may not seem beneficial at all until we work on a project with a practical application for them. One example is aligning the orientations of different 3D models; Lumion shows not only how to use this feature, but also how to apply it in a practical situation.

While populating and arranging our project, there are times when a snapping tool comes in handy. We can always use the move and height tools to place a 3D model on top of or next to another 3D model, but Lumion allows us to snap multiple 3D models to the same position in an easy way. While placing content in our project, we are usually concerned about the location of the 3D model.
However, later we realize that our project is too uniform, and this is easily spotted with plants, trees, flowers, and other objects. Instead of selecting an individual 3D model and manually rotating, relocating, and rescaling it to bring some variety to our project, Lumion helps us out with a fantastic feature to randomize the orientation, position, and scale of the 3D models.

Even in the most perfect project, we can find variations in the terrain and in the building; it's natural that we find inclined surfaces. Rotate on model is a feature in Lumion that allows us to snap to the surface of other 3D models when we are moving a 3D model. We can use this to adjust a car on a slope or a book on a chair. While changing a 3D model's rotation, we can see that as the 3D model gets close to a 90-degree angle, it snaps automatically. This is fine in most cases, but there are times when we need to make precise adjustments and this snapping gets in our way; for those occasions, Lumion lets us deactivate this feature temporarily.

An initial tactic to sculpt and shape the terrain is to use the terrain brushes available in Lumion. Lumion is not an application like ZBrush, but it does well with the brushes provided to sculpt the terrain, and they are not difficult to master. Lumion explains how we can use them, with some practical applications in real projects. Once we know how to use the five brushes to sculpt the terrain and the different results of each one, we are not limited to the standard values used in each brush: Lumion allows us to change two settings to help us sculpt the terrain. This control is useful when we need to add details at a small or large scale.

Some projects don't require any specific terrain from us, but at the same time, we don't want a flat terrain. In the Terrain menu, we can find tools that help us quickly create mountains and modify other characteristics of our project.
When we start a new project in Lumion, we can choose from nine different presets. They work as a shortcut to help us get the appearance we want for our project. Most of the time, we may use the Grass preset, but that doesn't mean we are stuck with the landscape presented. We know how to sculpt the terrain, but we can do more than that in Lumion and completely change the aspect of the landscape. Although we have 20 presets to entirely change the look of the landscape, this doesn't mean we cannot change any settings and actually paint the landscape; the Paint submenu shows how we can use Lumion's textures to paint and change the landscape completely.

Perhaps you don't want the trouble of sculpting the terrain using the tools offered in Lumion. Truth be told, in some situations it is easier and more productive to model the terrain outside Lumion and import that terrain along with the building. Lumion has a fantastic material that blends the terrain we imported with the landscape. Another solution for creating accurate terrains is a heightmap. A heightmap is a texture with stored values that Lumion can translate from 2D information into a 3D terrain. Lumion will help you see how to import a heightmap and how to save the terrain you created in Lumion as a heightmap file.

It is remarkable how little things, such as defining the Sun's direction, can have a considerable impact on a project. Throughout the production process, we may need to adjust the Sun's direction to have a clear view of the project; however, in due course, we will get to a point where we need to define the final orientation that we are going to use to produce a still image or a movie. Setting up the Sun's direction and height is one of the simplest tasks in Lumion; however, using only the Weather menu, we can start to feel that there is a lack of control over these settings. Fear not though!
Lumion offers an effect that helps control the Sun in a way that can make all the difference when producing a video. Lumion will help you understand not only how to modify the settings for the Sun, but also provide a practical example of how to apply this feature in any project.

A shadow can be defined as an area that is not, or is only partially, illuminated because an object is obstructing its source of illumination. We don't usually think of shadows as important and essential elements of a good-looking scene, but without them our project will be dull. Lumion is going to help us use the Shadow effect to tweak and correct the shadows to meet our requirements and the finished look we want to accomplish. An additional aspect connected to shadows in Lumion, as in the real world, is the influence of the sky over them. Looking at the shadows in any project, you can easily see how a sunset or a midday scene transforms the color of a shadow. Lumion will show how to control and change the influence that skylight has on shadows.

We have been working entirely with hard shadows. The Sun in our 3D world produces these hard shadows, so called because they have strong, well-defined edges with little transition between illumination and shadow. The Sun can produce soft shadows in certain circumstances, and the sky, likewise, can create these diffused shadows with soft edges. We can apply soft shadows to our project to enrich the final look.

So far, we have been looking at how we can use the Weather menu to create an enjoyable environment for our exterior scenes. Lumion is also capable of producing beautiful interior scenes, but we need to work a little with the interior illumination before we can produce something presentable and eye-catching.
For interior scenes, we can use the Global Illumination effect to improve the illumination provided either by the Sun or by some artificial source of light.

One element that can bring an extra touch to the final movie is the clouds. This is a component that does not always cross our mind when trying to attain a good-looking and realistic movie. Lumion provides a lot of freedom not only to change the appearance of the clouds, but also to animate them, and even to create volume clouds that take this fine-tuning to a different dimension.

Fog is a natural phenomenon that can be added to our project, and it can change a scene dramatically. With it, we can make a scene more mysterious, or reproduce the haze where dust, smoke, and other dry particles obscure the clarity of the sky. With the Fog effect, we can achieve this and much more. The fact that we can add rain and snow as easily as reading this sentence really proves that Lumion is a powerful and versatile application; adding rain or snow is something we can achieve by adding two effects in the Photo or Movie mode.

Lumion uses wind by default in any project you start. Once you add the first trees, you can see how they slowly move, showing the effect of the wind. We can control the wind using, yes, you guessed it, the Foliage Wind effect.

Another option to control the Sun is a new feature introduced in Lumion Version 4. This option, which we can find under the Effects label, is called Sun study. It is an amazing feature that allows us to select any point on the planet and mimic the Sun in that location; however, there is much more we can do with this effect.

Lumion has more than 500 materials on hand, and generally this is more than adequate. Still, Lumion is a very flexible application, and for this reason you are not limited to just these materials.
We have the opportunity to use our own textures to create other materials. Rather than cover all the settings you can use to tweak a material, we are going to focus on how to bring in and adjust an outside texture.

Making a 3D model invisible, or hiding surfaces of our 3D model, is something we don't expect to do in every project. However, there are times when the Invisible material can be very handy, and Lumion demonstrates not only how to apply it, but also some specific situations where we might use this material.

Glass is essential in any project we work on. From the glass in a complex window to a simple cup, this material has a deep impact on a still image, and even more on a movie, helping us capture the light and reflections of the environment. Lumion has a Glass material that can be used to create some of these materials, and it can be manipulated to get a more realistic look.

During production, it is expected that we save the materials applied to the 3D models. This should be done as a precaution, in case something goes wrong, or when we need to go back and forth between materials without losing any settings. Lumion has an option to save the materials you applied to a 3D model and load them again later, if necessary. There is another feature that gives you the chance to save a single material instead of a material set.

There are different ways in which we can add water to our project. We can create an ocean as easily as reading this sentence, and the same is true when we need to add a body of water or create a swimming pool. But what if we want to create a fountain? Let's go even further: can we create a river? Lumion will teach you how easy it is to create a streaming water effect.

Another beautiful and eye-catching effect is when we have glowing materials in our project.
From light bulbs to TV screens, we can add an extra touch to our scene by using the Standard material to create this glow effect. Lumion will teach you not only how to add this glow, but also how to use textures to produce interesting effects.

After adding trees, bushes, and other plants, the next thing you should add to the project is some grass. Prior to Lumion Version 4, you had to be satisfied with the terrain's texture. We could also import some grass, though the project would become very heavy, or add some grass from the Lumion library and adjust it as best we could. Now, Lumion provides an option to use realistic grass, bringing that wow factor.

Each material has a certain amount of reflection, and this is a setting we can adjust in almost every material in Lumion. Given that Lumion is a real-time application, it is natural that in some cases the reflections don't meet our requirements in terms of accuracy. Lumion has an effect that we can apply to surfaces in our 3D model to improve these reflections. While applying and tweaking materials, you may come across a section of your 3D model where you can easily see some flickering. Although this should be avoided, there is a built-in setting in every Lumion material to correct this problem.

Fire is a special effect available in Lumion, and it is one of those elements that can bring an ordinary scene to life. We may have a living room that is excellent in itself, with all the materials and light, but when we add fire to the fireplace, it completely transforms the room into a warm, comfortable, and welcoming space. Alternatively, consider how the same living room can be changed into a romantic scene when illuminated with candles and a fireplace. Lumion aims to help you apply fire and control it using the Edit properties menu.

In addition to the solid and liquid elements in Lumion, we can find elements that we could label as non-solid.
Lumion has a special section for these elements, opening with smoke, passing through dust, and finishing with fog and water vapor. How to place these elements in our project, and a realistic application of some of them, is what Lumion will teach you.

Fountains have their distinctive place in Lumion, and there is a tab dedicated to different categories of fountains. We can separate this tab into two parts: standard fountains and fountains produced by a waterspray emitter. Lumion aims to show you where to find these fountains, how to place them in the scene, and some useful applications.

Falling leaves are an enjoyable extra touch that can improve our still image or movie. Nevertheless, like the preceding special effects, this one needs to be used in the correct amount: too much can ruin the scene, making the viewer focus more on leaves passing across the screen than on the 3D model.

We can add text to a movie or a still image using the Titles effect, but in some circumstances we need a text element with more flexibility. Sometimes, when working on a presentation of a project, we are required to show some additional information; this can be easily achieved using a fantastic feature available in Lumion: we can add text to our project in the Build mode.

A clip plane is an object that can be added to a scene and later used in the Movie mode, where it can be animated to produce a kind of revealing effect. Initially it may seem a little confusing, but you will understand how to apply it to your scene and animate this plane.

It is time to move on from special effects, such as fire, smoke, and water, that we can add to the scene, towards effects that we can apply in both the Movie and Photo modes. Lumion gives you an overview of how general effects work, how you can stack them in either the Movie or Photo mode, and how you can control them.
All the effects available in Lumion are applied using either the Movie or Photo mode. The reason for this is that if all those effects were applied in the Build mode, they would have a massive impact on the performance of the viewport, slowing down our workflow. However, Lumion likes to give you the freedom needed to produce the best result possible, and in some situations it can be useful to check the effects in the Build mode.

Bloom is the halo effect caused principally by bright lights in the scene. In the real world, the camera lenses we use can never focus perfectly, but this is not a problem under normal conditions. However, when there is an intensely bright light in the scene, these imperfections become perceptible, and as a consequence, in the photo we shoot, the bright light appears to bleed beyond its usual borders.

Purple fringing, distortion, and blurred edges are a combination of errors called chromatic aberration. A simple explanation is that chromatic aberration happens when the lens fails to focus, or to bring all the wavelengths of color to the same focal plane: as light travels through the lens, the different colors travel at different speeds and land on different places on the camera's sensor. With 3D cameras this doesn't happen, but we can add chromatic aberration to our image or video for an extra touch of realism.

The expression "color correction" has a fair number of different meanings. Generally speaking, however, it is a means to repair problems with the color, which we do by changing the color of a pixel to another color or by tweaking other settings. In Lumion, this means we can use color correction either to achieve a certain look or to enhance the overall aspect and mood of an image or a movie. We can use this effect in Lumion and apply a few tips that help us not only to correct the color, but also to perform some color grading.


The anatomy of a report processor

Packt
11 Jun 2014
6 min read
At its most basic, a Puppet report processor is a piece of Ruby code that is triggered every time a Puppet agent passes a report to the Puppet master. This code is passed a Ruby object that contains both the client report and metrics. Although the data is sent in a wire format, such as YAML or PSON, by the time a report processor is triggered, Puppet has turned this data into an object.

This code can simply produce reports, but we're not limited to that. With a little imagination, we can use Puppet report processors for everything from alerts through to the orchestration of events. For instance, using a report processor and a suitable SMS provider would make it easy for Puppet to send you an SMS alert every time a run fails; alternatively, you could analyze the data to reveal trends in your changes and update a change management console. The best way to think of a report processor is as a means to trigger actions on the event of a change, rather than strictly a reporting tool.

Puppet reports are written in plain old Ruby, so you have access to the multitude of libraries available via the RubyGems repositories. This can make developing your plugins relatively simple, as half the time you will find that the heavy lifting has been done for you by some enterprising fellow who has already solved your problem and published the code in a gem. Good examples of this can be found if you need to interoperate with another product such as MySQL, Oracle, Salesforce, and so on. A brief search on the Internet will bring up three or four examples of libraries that offer this functionality within a few lines of code. Not having to produce the plumbing of a solution will both save time and generally produce fewer bugs.

Creating a basic report processor

Let's take a look at an incredibly simple report processor example.
In the event that a Puppet agent fails to run, the following code will take the incoming data and create a little text file with a short message detailing which host had the problem:

```ruby
require 'puppet'

Puppet::Reports::register_report(:myfirstreport) do
  desc "My very first report!"

  def process
    if self.status == 'failed'
      msg = "failed puppet run for #{self.host} #{self.status}"
      File.open('/tmp/puppetpanic.txt', 'w') { |f| f.write(msg) }
    end
  end
end
```

Although this code is basic, it contains all of the components required for a report processor. The first line requires the only mandatory library: the Puppet library. This gives us access to several important methods that allow us to register and describe our report processor, and finally a method that allows us to process our data.

Registering your report processor

The first method that every report processor must call is the Puppet::Reports::register_report method. This method takes only one argument: the name of the report processor. This name should be passed as a symbol, and it must be an alphanumeric name that starts with a letter (:report3 would be fine, but :3reports would not be). Try to avoid using any other characters; although you can potentially use underscores, the documentation is discouragingly vague on how valid this is, and they could well cause issues.

Describing your report processor

After we've called the Puppet::Reports::register_report method, we need to call the desc method. The desc method provides some brief documentation for what the report processor does, and the string may use Markdown formatting.

Processing your report

The last method that every report processor must include is the process method. The process method is where we actually take our Puppet data and process it, and to make working with the report data easier, you have access to the self object within the process method.
The self object is a Puppet::Transaction::Report object and gives you access to the Puppet report data. For example, to extract the hostname of the reporting host, we can use self.host. You can find the full details of what is contained in the Puppet::Transaction::Report object by visiting http://docs.puppetlabs.com/puppet/latest/reference/format_report.html.

Let's go through our small example in detail and look at what it's doing. First of all, we require the Puppet library to ensure that we have access to the required methods. We then register our report by calling the Puppet::Reports::register_report(:myfirstreport) method, passing it the name myfirstreport. Next, we add our desc method to tell users what this report is for. Finally, we have the process method, which is where we place our code to process the report. For this example, we're going to keep it simple and merely check whether the Puppet agent reported a successful run, which we do by checking the Puppet status:

```ruby
if self.status == 'failed'
  msg = "failed puppet run for #{self.host} #{self.status}"
```

The transaction can produce one of three states: failed, changed, or unchanged. This is straightforward: a failed client run is any run that contains a resource with a status of failed; a changed state is triggered when the client run contains a resource that has been given a status of changed; and the unchanged state occurs when a resource carries an out_of_sync value without being changed, which generally happens if you run the Puppet client in noop (simulation) mode. Finally, we actually do something with the data. In the case of this very simple application, we're going to place the warning into a plain text file in the /tmp directory.
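The three states can be sketched in plain Ruby, outside the Puppet runtime. Note that alert_message is a hypothetical helper invented for illustration, not part of Puppet's API:

```ruby
# Hypothetical helper mirroring the status check above; not part of Puppet's API.
# Only a 'failed' run produces an alert message; the other two states are quiet.
def alert_message(host, status)
  case status
  when 'failed'
    "failed puppet run for #{host}"
  when 'changed', 'unchanged'
    nil
  else
    raise ArgumentError, "unknown status: #{status}"
  end
end
```

In the actual processor, the host and status come from the self object rather than being passed in, and the failed-run message is then written to disk.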
This is described in the following code snippet:

```ruby
msg = "failed puppet run for #{self.host}"
File.open('/tmp/puppetpanic.txt', 'w') { |f| f.write(msg) }
```

As you can see, we're using basic string interpolation to take some of our report data and place it into the message, which is then written to a simple plain text file in the /tmp directory.

Summary

In this article, we have seen the anatomy of a report processor, along with some basic Ruby code that sets up a simple one.
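As a closing practical note, a report processor only runs if the master is told to load it. The details vary across Puppet versions and installations, so treat the following as a sketch: the file above would be saved as puppet/reports/myfirstreport.rb somewhere on the master's Ruby load path (for example, inside a module's lib directory), and puppet.conf would name it in the reports setting:

```ini
# Illustrative /etc/puppet/puppet.conf fragment; paths and defaults vary by setup.
[master]
# Comma-separated list of report processors to run for each incoming report.
reports = store,myfirstreport

[agent]
# Agents must be configured to send reports to the master.
report = true
```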


Building a Web Application with PHP and MariaDB - Introduction to caching

Packt
11 Jun 2014
4 min read
Let's begin with database caching. All the data for our application is stored in MariaDB. When a request is made to retrieve the list of available students, we run a query on our course_registry database. Running a single query at a time is simple, but as the application gets popular, we will have more concurrent users, and as the number of concurrent connections to the database increases, we will have to make sure that our database server is optimized to handle that load. In this section, we will look at the different types of caching that can be performed in the database.

Let's start with query caching. Query caching is available by default in MariaDB; to verify that the installation has a query cache, we use the have_query_cache global variable with the SHOW VARIABLES command. Now that we know we have a query cache, let's verify that it is active; to do this, we use the query_cache_type global variable. From this query, we can verify that the query cache is turned on. Next, let's look at the memory allocated for the query cache by querying query_cache_size. The query cache size is currently set to 64 MB; let's raise it to 128 MB using the SET GLOBAL syntax, and verify the change by reloading the value of query_cache_size.

Now that we have the query cache turned on and working, let's look at a few statistics that give us an idea of how often queries are being cached. To retrieve this information, we query the Qcache status variables, which provide a lot of statistics about the query cache.
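The screenshots from the original article did not survive extraction; the checks they illustrated can be run with statements along these lines (the values in the comments are illustrative, and exact variable support depends on your MariaDB version):

```sql
-- Is a query cache compiled into this MariaDB build?
SHOW VARIABLES LIKE 'have_query_cache';   -- YES

-- Is it switched on?
SHOW VARIABLES LIKE 'query_cache_type';   -- ON

-- How much memory is allocated to it?
SHOW VARIABLES LIKE 'query_cache_size';   -- 67108864 (64 MB)

-- Raise it to 128 MB, then confirm the new value
SET GLOBAL query_cache_size = 128 * 1024 * 1024;
SHOW VARIABLES LIKE 'query_cache_size';   -- 134217728

-- Cache statistics (hits, inserts, prunes, and so on)
SHOW STATUS LIKE 'Qcache%';
```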
One thing to notice is the Qcache_not_cached variable, which is high for our database. This is due to the use of prepared statements. Prepared statements are not cached by MariaDB. Another important variable to keep an eye on is the Qcache_lowmem_prunes variable, which will give us an idea of the number of queries that were deleted due to low memory. A high value here indicates that the query cache size has to be increased. From these stats, we understand that for as long as we use prepared statements, our queries will not be cached on the database server. So, we should use a combination of prepared statements and raw SQL statements, depending on our use cases. Now that we understand a good bit about query caches, let's look at the other caches that MariaDB provides, such as the table open cache, the join cache, and the memory storage cache. The table open cache allows us to define the number of tables that can be left open by the server to allow faster look-ups. This is very helpful when there is a huge number of requests for a table, as the table need not be reopened for every request. The join buffer cache is commonly used for queries that perform a full join, wherein there are no indexes to be used for finding rows for the next table. Normally, indexes help us avoid these problems. The memory storage cache, previously known as the heap cache, is commonly used for read-only caches of data from other tables or for temporary work areas. Let's look at the variables that are available in MariaDB, as shown in the following screenshot: Database caching is a very important step towards making our application scalable. However, it is important to understand when to cache, the correct caching techniques, and the size for each cache. Allocation of memory for caching has to be done very carefully as the application can run out of memory if too much space is allocated.
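The statistics and cache-related variables discussed above can be pulled with a couple of statements (a sketch of the checks described; the variable names are the standard MariaDB/MySQL ones):

```sql
-- Query cache hit/miss statistics, including the Qcache_not_cached
-- and Qcache_lowmem_prunes counters discussed above
SHOW STATUS LIKE 'Qcache%';

-- Sizes of the other caches: the table open cache and the join buffer
SHOW VARIABLES LIKE 'table_open_cache';
SHOW VARIABLES LIKE 'join_buffer_size';
```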
A good method to allocate memory for caching is by running benchmarks to see how the queries perform, and have a list of popular queries that will run often so that we can begin by caching and optimizing the database for those queries. Now that we have a good understanding of database caching, let's proceed to application-level caching. Resources for Article: Introduction to Kohana PHP Framework Creating and Consuming Web Services in CakePHP 1.3 Installing MariaDB on Windows and Mac OS X


Selecting and initializing the database

Packt
10 Jun 2014
7 min read
(For more resources related to this topic, see here.) In other words, a NoSQL database is simpler than a SQL database, and very often stores information as key-value pairs. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models. Using NoSQL with MongoDB Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows: "dependencies": { "mongodb": "1.3.20" } We will stick to the Model-View-Controller architecture and place the database-related operations in a model called Articles. We can see this as follows: var crypto = require("crypto"), type = "mongodb", client = require('mongodb').MongoClient, mongodb_host = "127.0.0.1", mongodb_port = "27017", collection; module.exports = function() { if(type == "mongodb") { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } else { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ...
}, remove: function(id, callback) { ... } } } } It starts with defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server. After that, we set the host and port for the connection and at the end a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows: connection = 'mongodb://'; connection += mongodb_host + ':' + mongodb_port; connection += '/blog-application'; client.connect(connection, function(err, database) { if(err) { throw new Error("Can't connect"); } else { console.log("Connection to MongoDB server successful."); collection = database.collection('articles'); } }); We pass the host and the port, and the driver is doing everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. 
The rest of the module contains methods to add, edit, retrieve, and delete records: return { add: function(data, callback) { var date = new Date(); data.id = crypto.randomBytes(20).toString('hex'); data.date = date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate(); collection.insert(data, {}, callback || function() {}); }, update: function(data, callback) { collection.update( {ID: data.id}, data, {}, callback || function(){ } ); }, get: function(callback) { collection.find({}).toArray(callback); }, remove: function(id, callback) { collection.findAndModify( {ID: id}, [], {}, {remove: true}, callback ); } } The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code: { title: "Blog post title", text: "Article's text here ..." } The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic. Using MySQL We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other, by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows: "dependencies": { "mongodb": "1.3.20", "mysql": "2.0.0" } Similar to the MongoDB solution, we first need to connect to the server.
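One subtle issue in the add method above is worth flagging: JavaScript's Date.getMonth() is zero-based, so June is reported as 5, and single-digit days come out unpadded. A small helper (a sketch, not part of the original module) produces a consistent YYYY-MM-DD string instead:

```javascript
// Sketch of a safer date formatter for the add() method.
// Date.getMonth() is zero-based, so we add 1; padStart keeps two digits.
function formatDate(d) {
  var month = String(d.getMonth() + 1).padStart(2, '0');
  var day = String(d.getDate()).padStart(2, '0');
  return d.getFullYear() + '-' + month + '-' + day;
}

console.log(formatDate(new Date(2014, 5, 10))); // → 2014-06-10
```

Storing dates in this fixed-width form also makes them sort correctly as plain strings.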
To do so, we need to know the values of the host, username, and password fields and, because the data is organized in databases, the name of the database. In MySQL, we put our data into different databases. So, the following code defines the needed variables: var mysql = require('mysql'), mysql_host = "127.0.0.1", mysql_user = "root", mysql_password = "", mysql_database = "blog_application", connection; The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data. So, consider the following code: CREATE TABLE IF NOT EXISTS `articles` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` longtext NOT NULL, `text` longtext NOT NULL, `date` varchar(100) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ; Once we have a database and its table set, we can continue with the database connection, as follows: connection = mysql.createConnection({ host: mysql_host, user: mysql_user, password: mysql_password }); connection.connect(function(err) { if(err) { throw new Error("Can't connect to MySQL."); } else { connection.query("USE " + mysql_database, function(err, rows, fields) { if(err) { throw new Error("Missing database."); } else { console.log("Successfully selected database."); } }) } }); The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is OK, you should see Successfully selected database as an output in your console. Half of the job is done. What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because when we switch to using MySQL, the code using the class will not work. And by replicating them we mean that they should have the same names and should accept the same arguments.
If we do everything correctly, at the end our application will support two types of databases. And all we have to do is change the value of the type variable: return { add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + date.getMonth() + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); }, update: function(data, callback) { var query = "UPDATE articles SET "; query += "title=" + connection.escape(data.title) + ", "; query += "text=" + connection.escape(data.text) + " "; query += "WHERE id='" + data.id + "'"; connection.query(query, callback); }, get: function(callback) { var query = "SELECT * FROM articles ORDER BY id DESC"; connection.query(query, function(err, rows, fields) { if(err) { throw new Error("Error getting."); } else { callback(rows); } }); }, remove: function(id, callback) { var query = "DELETE FROM articles WHERE id='" + id + "'"; connection.query(query, callback); } } The code is a little longer than the one generated in the first MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information, which comes to the module. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data. Summary In this article, we saw how to select and initialize database using NoSQL with MongoDB and using MySQL required for writing a blog application with Node.js and AngularJS. Resources for Article: Further resources on this subject: So, what is Node.js? [Article] Understanding and Developing Node Modules [Article] An Overview of the Node Package Manager [Article]
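To see why connection.escape() matters, here is a small, self-contained sketch of what escaping guards against. The escapeValue helper below is a deliberately simplified stand-in for the driver's real escaping (it only handles backslashes and single quotes) — in real code, always use the mysql module's own connection.escape() or ? placeholders:

```javascript
// Simplified illustration of why values must be escaped before being
// concatenated into SQL; loosely mimics what connection.escape() does.
function escapeValue(value) {
  return "'" + String(value).replace(/\\/g, '\\\\').replace(/'/g, "\\'") + "'";
}

function buildInsert(data) {
  return 'INSERT INTO articles (title, text) VALUES (' +
    escapeValue(data.title) + ', ' + escapeValue(data.text) + ')';
}

console.log(buildInsert({ title: "O'Reilly", text: 'hi' }));
// → INSERT INTO articles (title, text) VALUES ('O\'Reilly', 'hi')
```

Without the escaping, the stray quote in "O'Reilly" would terminate the string early and break the statement — or worse, allow SQL injection.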


Writing Tag Content

Packt
10 Jun 2014
9 min read
(For more resources related to this topic, see here.) The NDEF Message is composed of one or more NDEF records, and each record contains the payload and a header in which the data length, type, and identifier are stored. In this article, we will create some examples that will allow us to work with the NDEF standard and start writing NFC applications. Working with the NDEF record Android provides an easy way to read and write data when it is formatted as per the NDEF standard. This format is the easiest way for us to work with tag data because it saves us from performing lots of operations to read and write raw bytes. So, unless we need to get our hands dirty and write our own protocol, this is the way to go (you can still build it on top of NDEF and achieve a custom, yet standard-based protocol). Getting ready Make sure you have a working Android development environment. If you don't, ADT Bundle is a good environment to start with (you can access it by navigating to http://developer.android.com/sdk/index.html). Make sure you have an NFC-enabled Android device or a virtual test environment. It will be assumed that Eclipse is the development IDE. How to do it... We are going to create an application that writes any NDEF record to a tag by performing the following steps: Open Eclipse and create a new Android application project named NfcBookCh3Example1 with the package name nfcbook.ch3.example1. Make sure the AndroidManifest.xml file is correctly configured.
Open the MainActivity.java file located under com.nfcbook.ch3.example1 and add the following class member: private NfcAdapter nfcAdapter; Implement the enableForegroundDispatch method, filtering tags by using Ndef and NdefFormatable, and invoke it in the onResume method: private void enableForegroundDispatch() { Intent intent = new Intent(this, MainActivity.class).addFlags(Intent.FLAG_RECEIVER_REPLACE_PENDING); PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, intent, 0); IntentFilter[] intentFilter = new IntentFilter[] {}; String[][] techList = new String[][] { { android.nfc.tech.Ndef.class.getName() }, { android.nfc.tech.NdefFormatable.class.getName() } }; if ( Build.DEVICE.matches(".*generic.*") ) { //clean up the tech filter when in emulator since it doesn't work properly. techList = null; } nfcAdapter.enableForegroundDispatch(this, pendingIntent, intentFilter, techList); } Instantiate the nfcAdapter class field in the onCreate method: protected void onCreate(Bundle savedInstanceState) { ...
nfcAdapter = NfcAdapter.getDefaultAdapter(this); } Implement the formatTag method: private boolean formatTag(Tag tag, NdefMessage ndefMessage) { try { NdefFormatable ndefFormat = NdefFormatable.get(tag); if (ndefFormat != null) { ndefFormat.connect(); ndefFormat.format(ndefMessage); ndefFormat.close(); return true; } } catch (Exception e) { Log.e("formatTag", e.getMessage()); } return false; } Implement the writeNdefMessage method: private boolean writeNdefMessage(Tag tag, NdefMessage ndefMessage) { try { if (tag != null) { Ndef ndef = Ndef.get(tag); if (ndef == null) { return formatTag(tag, ndefMessage); } else { ndef.connect(); if (ndef.isWritable()) { ndef.writeNdefMessage(ndefMessage); ndef.close(); return true; } ndef.close(); } } } catch (Exception e) { Log.e("formatTag", e.getMessage()); } return false; } Implement the isNfcIntent method: boolean isNfcIntent(Intent intent) { return intent.hasExtra(NfcAdapter.EXTRA_TAG); } Override the onNewIntent method using the following code: @Override protected void onNewIntent(Intent intent) { try { if (isNfcIntent(intent)) { NdefRecord ndefEmptyRecord = new NdefRecord(NdefRecord.TNF_EMPTY, new byte[]{}, new byte[]{}, new byte[]{}); NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { ndefEmptyRecord }); Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG); if (writeNdefMessage(tag, ndefMessage)) { Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(this, "Failed to write tag", Toast.LENGTH_SHORT).show(); } } } catch (Exception e) { Log.e("onNewIntent", e.getMessage()); } } Override the onPause method and insert the following code: @Override protected void onPause() { super.onPause(); nfcAdapter.disableForegroundDispatch(this); } Open the NFC Simulator tool and simulate a few tags. 
The tag should be marked as modified, as shown in the following screenshot: In the NFC Simulator, a tag name that ends with _LOCKED is not writable, so we won't be able to write any content to the tag and, therefore, a Failed to write tag toast will appear. How it works... NFC intents carry an extra value with them that is a virtual representation of the tag and can be obtained using the NfcAdapter.EXTRA_TAG key. We can get information about the tag, such as the tag ID and its content type, through this object. In the onNewIntent method, we retrieve the tag instance and then use other classes provided by Android to easily read, write, and retrieve even more information about the tag. These classes are as follows: android.nfc.tech.Ndef: This class provides methods to retrieve and modify the NdefMessage object on a tag android.nfc.tech.NdefFormatable: This class provides methods to format tags that are capable of being formatted as NDEF The first thing we need to do while writing a tag is to call the get(Tag tag) method from the Ndef class, which will return an instance of the same class. Then, we need to open a connection with the tag by calling the connect() method. With an open connection, we can now write a NDEF message to the tag by calling the writeNdefMessage(NdefMessage msg) method. Checking whether the tag is writable or not is always a good practice to prevent unwanted exceptions. We can do this by calling the isWritable() method. Note that this method may not account for physical write protection. When everything is done, we call the close() method to release the previously opened connection. If the get(Tag tag) method returns null, it means that the tag is not formatted as per the NDEF format, and we should try to format it correctly. For formatting a tag with the NDEF format, we use the NdefFormatable class in the same way as we did with the Ndef class. However, in this case, we want to format the tag and write a message. 
This is achieved by calling the format(NdefMessage firstMessage) method. So, we should call the get(Tag tag) method, then open a connection by calling connect(), format the tag, and write the message by calling the format(NdefMessage firstMessage) method. Finally, close the connection with the close() method. If the get(Tag tag) method returns null here, it means that the Android NFC API cannot automatically format the tag to NDEF. An NDEF message is composed of several NDEF records. Each of these records is composed of four key properties:
Type Name Format (TNF): This property defines how the type field should be interpreted
Record Type Definition (RTD): This property is used together with the TNF to help Android create the correct NDEF message and trigger the corresponding intent
Id: This property lets you define a custom identifier for the record
Payload: This property contains the content that will be transported in the record
Using the combinations between the TNF and the RTD, we can create several different NDEF records to hold our data and even create our custom types. In this recipe, we created an empty record. The main TNF property values of a record are as follows:
- TNF_ABSOLUTE_URI: This is a URI-type field
- TNF_WELL_KNOWN: This is an NFC Forum-defined URN
- TNF_EXTERNAL_TYPE: This is a URN-type field
- TNF_MIME_MEDIA: This is a MIME type based on the type specified
The main RTD property values of a record are:
- RTD_URI: This is the URI based on the payload
- RTD_TEXT: This is the NFC Forum-defined record type
Writing a URI-formatted record URI is probably the most common content written to NFC tags. It allows you to share a website, an online service, or a link to online content. This can be used, for example, in advertising and marketing. How to do it... We are going to create an application that writes URI records to a tag by performing the following steps. The URI will be hardcoded and will point to the Packt Publishing website.
Open Eclipse and create a new Android application project named NfcBookCh3Example2. Make sure the AndroidManifest.xml file is configured correctly. Set the minimum SDK version to 14: <uses-sdk android:minSdkVersion="14" /> Implement the enableForegroundDispatch, isNfcIntent, formatTag, and writeNdefMessage methods from the previous recipe—steps 2, 4, 6, and 7. Add the following class member and instantiate it in the onCreate method: private NfcAdapter nfcAdapter; protected void onCreate(Bundle savedInstanceState) { ... nfcAdapter = NfcAdapter.getDefaultAdapter(this); } Override the onNewIntent method and place the following code: @Override protected void onNewIntent(Intent intent) { try { if (isNfcIntent(intent)) { NdefRecord uriRecord = NdefRecord.createUri("http://www.packtpub.com"); NdefMessage ndefMessage = new NdefMessage(new NdefRecord[] { uriRecord }); Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG); boolean writeResult = writeNdefMessage(tag, ndefMessage); if (writeResult) { Toast.makeText(this, "Tag written!", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(this, "Tag write failed!", Toast.LENGTH_SHORT).show(); } } } catch (Exception e) { Log.e("onNewIntent", e.getMessage()); } super.onNewIntent(intent); } Run the application. Tap a tag on your phone or simulate a tap in the NFC Simulator. A Tag written! toast should appear. How it works... URIs are perfect content for NFC tags because with a relatively small amount of content, we can send users to more rich and complete resources. These types of records are the easiest to create, and this is done by calling the NdefRecord.createUri method and passing the URI as the first parameter. URIs are not necessarily URLs for a website. We can use other URIs that are quite well-known in Android such as the following:
- tel:+000 000 000 000
- sms:+000 000 000 000
If we use the tel: URI syntax, the user will be prompted to initiate a phone call.
We can always create a URI record the hard way without using the createUri method: public NdefRecord createUriRecord(String uri) { NdefRecord rtdUriRecord = null; try { byte[] uriField; uriField = uri.getBytes("UTF-8"); byte[] payload = new byte[uriField.length + 1]; //+1 for the URI prefix payload[0] = 0x00; //prefixes the URI System.arraycopy(uriField, 0, payload, 1, uriField.length); rtdUriRecord = new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_URI, new byte[0], payload); } catch (UnsupportedEncodingException e) { Log.e("createUriRecord", e.getMessage()); } return rtdUriRecord; } The first byte in the payload indicates which prefix should be used with the URI. This way, we don't need to write the whole URI in the tag, which saves some tag space. The following list describes the recognized prefixes:
0x00 No prepending is done
0x01 http://www.
0x02 https://www.
0x03 http://
0x04 https://
0x05 tel:
0x06 mailto:
0x07 ftp://anonymous:anonymous@
0x08 ftp://ftp.
0x09 ftps://
0x0A sftp://
0x0B smb://
0x0C nfs://
0x0D ftp://
0x0E dav://
0x0F news:
0x10 telnet://
0x11 imap:
0x12 rtsp://
0x13 urn:
0x14 pop:
0x15 sip:
0x16 sips:
0x17 tftp:
0x18 btspp://
0x19 btl2cap://
0x1A btgoep://
0x1B tcpobex://
0x1C irdaobex://
0x1D file://
0x1E urn:epc:id:
0x1F urn:epc:tag:
0x20 urn:epc:pat:
0x21 urn:epc:raw:
0x22 urn:epc:
0x23 urn:nfc:
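To make the prefix table concrete, here is a small sketch in plain Java (independent of the Android classes, so it can run anywhere) that rebuilds the full URI from an NDEF URI record payload. The PREFIXES array below covers only the first few codes from the table for brevity:

```java
import java.nio.charset.StandardCharsets;

// Sketch: rebuild a full URI from an NDEF URI record payload.
// Only the first few prefix codes from the table are included here.
public class UriPrefixDecoder {
    static final String[] PREFIXES = {
        "", "http://www.", "https://www.", "http://", "https://",
        "tel:", "mailto:"
    };

    static String decode(byte[] payload) {
        int code = payload[0] & 0xFF;                 // first byte selects the prefix
        String prefix = code < PREFIXES.length ? PREFIXES[code] : "";
        // the remaining bytes are the URI body, UTF-8 encoded
        return prefix + new String(payload, 1, payload.length - 1, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] payload = { 0x01, 'p', 'a', 'c', 'k', 't', 'p', 'u', 'b', '.', 'c', 'o', 'm' };
        System.out.println(decode(payload)); // http://www.packtpub.com
    }
}
```

This is the same compaction that createUriRecord performs in reverse: writing code 0x01 plus "packtpub.com" costs 13 bytes instead of 23 for the full URL.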


Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
(For more resources related to this topic, see here.) Getting ready To run this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command: sudo su - jenkins Then print the home directory to the console using the following command: echo $HOME We will need to have a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows will use the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our article uses version 1.552. The combination of PhantomJS and YSlow is in no way unique to Jenkins CI. The example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include: Environment Injector Plugin JUnit Attachments Plugin TAP Plugin xUnit Plugin To run that demo site, we must have Node.js installed. 
In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command: node app.js How to do it… To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows: Select the New Item link in Jenkins CI. Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK. To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5. In the Build block, click on Add build step and then click on Execute shell. In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example: phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http://localhost:3000/css-demo > yslow.xml In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report. In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml. Lastly, click on Save to persist the changes to this job. Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now. After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots: The landing page for a performance analysis project in Jenkins CI Note the Test Result Trend graph with the successes and failures.
The Test Result report page for a specific build Note that the failed tests in the overall analysis are called out and that we can expand specific items to view their details. The All Tests view of the Test Result report page for a specific build Note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details. How it works… The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold. The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job. Next, we configured when our job will run, adding Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project—chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control. 
With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following: phantomjs: This is for the PhantomJS executable binary ${HOME}/tmp/yslow.js: This is to refer to the copy of the YSlow library accessible to the Jenkins CI user -i grade: This is to indicate that we want the "Grade" level of report detail -threshold "B": This is to indicate that we want to fail builds with an overall grade of "B" or below -f junit: This is to indicate that we want the results output in the JUnit format http://localhost:3000/css-demo: This is typed in as our target URL > yslow.xml: This is to redirect the JUnit-formatted output to that file on the disk What if PhantomJS isn't on the PATH for the Jenkins CI user? A relatively common problem that we may experience is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; in the Properties Content text area, apply the PhantomJS binary's path to the PATH variable, as we would in any other script as follows: PATH=/path/to/phantomjs/bin:${PATH} After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. 
As our shell command is redirecting the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here. Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses are run, Jenkins CI will accumulate the results on the filesystem, keeping them until they are either manually removed or until a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them.

There's more…

The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example:

- Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most will perform at about this level"), or the page that is most likely to be the worst-performing page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen.
- Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page ("it is critical to optimize performance for first-time visitors"), or a product demo page ("this is where conversions happen, so this is where performance needs to be best"). Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one.
- Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs—one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need only look for the telltale red ball of a failed build in Jenkins CI.

The second point worth considering is—where do we point PhantomJS and YSlow for the performance analysis? And how does the target URL's environment affect our interpretation of the results? If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss—we are assessing exactly what needs to be assessed. But if we are analyzing performance in production, then it's already too late—the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of the customers. However, these environments are likely to be different from production despite our best efforts. For example, though we may be "doing everything else right", perhaps our staging server causes all traffic to come back from a single hostname, and thus, we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this—but don't be disheartened—better to have results that are slightly off than to have no results at all!
Using TAP format

If JUnit-formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job:

- In the Command text area of our Execute shell block, we would enter the following command:

    phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap

- In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field.

Everything else about using TAP instead of JUnit-formatted results here is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin adds an additional link in the job for us, TAP Extended Test Results, as shown in the following screenshot:

One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute shell block (separating them with the && operator) and then set our Test results target to be *.tap. This will conveniently combine the results of all our performance analyses into one.

Summary

In this article, we saw the setting up of an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library.

Resources for Article:

Further resources on this subject:
- Getting Started [article]
- Introducing a feature of IntroJs [article]
- So, what is Node.js? [article]
Packt
03 Jun 2014
10 min read

Metasploit Custom Modules and Meterpreter Scripting

(For more resources related to this topic, see here.)

Writing out a custom FTP scanner module

Let's try and build a simple module. We will write a simple FTP fingerprinting module and see how things work. Let's examine the code for the FTP module:

    require 'msf/core'

    class Metasploit3 < Msf::Auxiliary
      include Msf::Exploit::Remote::Ftp
      include Msf::Auxiliary::Scanner

      def initialize
        super(
          'Name'        => 'Apex FTP Detector',
          'Description' => '1.0',
          'Author'      => 'Nipun Jaswal',
          'License'     => MSF_LICENSE
        )
        register_options(
          [
            Opt::RPORT(21),
          ], self.class)
      end

We start our code by defining the required libraries to refer to. We define the statement require 'msf/core' to include the path to the core libraries at the very first step. Then, we define what kind of module we are creating; in this case, we are writing an auxiliary module, exactly the way we did for the previous module. Next, we define the library files we need to include from the core library set. Here, the include Msf::Exploit::Remote::Ftp statement refers to the /lib/msf/core/exploit/ftp.rb file, and include Msf::Auxiliary::Scanner refers to the /lib/msf/core/auxiliary/scanner.rb file. We have already discussed the scanner.rb file in detail in the previous example. However, the ftp.rb file contains all the necessary methods related to FTP, such as methods for setting up a connection, logging in to the FTP service, sending an FTP command, and so on. Next, we define the information of the module we are writing and attributes such as name, description, author name, and license in the initialize method. We also define what options are required for the module to work. For example, here we assign RPORT to port 21 by default.
Let's continue with the remaining part of the module:

      def run_host(target_host)
        connect(true, false)
        if(banner)
          print_status("#{rhost} is running #{banner}")
        end
        disconnect
      end
    end

We define the run_host method, which will initiate the process of connecting to the target by overriding the run_host method from the /lib/msf/core/auxiliary/scanner.rb file. Similarly, we use the connect function from the /lib/msf/core/exploit/ftp.rb file, which is responsible for initializing a connection to the host. We supply two parameters to the connect function: true and false. The true parameter enables the use of global parameters, whereas false turns off the verbose capabilities of the module. The beauty of the connect function lies in its operation of connecting to the target and automatically recording the banner of the FTP service in the parameter named banner, as shown in the following screenshot:

Now we know that the result is stored in the banner attribute. Therefore, we simply print out the banner at the end and disconnect from the target. This was an easy module, and I recommend that you try building simple scanners and other modules like these. Nevertheless, before we run this module, let's check whether the module we just built is syntactically correct. We can do this by passing the module through an in-built Metasploit tool named msftidy, as shown in the following screenshot:

We will get a warning message indicating that there are a few extra spaces at the end of line number 19. Therefore, when we remove the extra spaces and rerun msftidy, we will see that no error is generated. This marks the syntax of the module as correct. Now, let's run this module and see what we gather:

We can see that the module ran successfully, and it has the banner of the service running on port 21, which is Baby FTP Server.
For further reading on the acceptance of modules in the Metasploit project, refer to https://github.com/rapid7/metasploit-framework/wiki/Guidelines-for-Accepting-Modules-and-Enhancements.

Writing out a custom HTTP server scanner

Now, let's take a step further into development and fabricate something a bit trickier. We will create a simple fingerprinter for HTTP services, but with a slightly more complex approach. We will name this file http_myscan.rb, as shown in the following code snippet:

    require 'rex/proto/http'
    require 'msf/core'

    class Metasploit3 < Msf::Auxiliary
      include Msf::Exploit::Remote::HttpClient
      include Msf::Auxiliary::Scanner

      def initialize
        super(
          'Name'        => 'Server Service Detector',
          'Description' => 'Detects Service On Web Server, Uses GET to Pull Out Information',
          'Author'      => 'Nipun_Jaswal',
          'License'     => MSF_LICENSE
        )
      end

We include all the necessary library files as we did for the previous modules. We also assign general information about the module in the initialize method, as shown in the following code snippet:

      def os_fingerprint(response)
        if not response.headers.has_key?('Server')
          return "Unknown OS (No Server Header)"
        end
        case response.headers['Server']
        when /Win32/, /\(Windows/, /IIS/
          os = "Windows"
        when /Apache\//
          os = "*Nix"
        else
          os = "Unknown Server Header Reporting: " + response.headers['Server']
        end
        return os
      end

      def pb_fingerprint(response)
        if not response.headers.has_key?('X-Powered-By')
          resp = "No-Response"
        else
          resp = response.headers['X-Powered-By']
        end
        return resp
      end

      def run_host(ip)
        connect
        res = send_request_raw({'uri' => '/', 'method' => 'GET' })
        return if not res
        os_info = os_fingerprint(res)
        pb = pb_fingerprint(res)
        fp = http_fingerprint(res)
        print_status("#{ip}:#{rport} is running #{fp} version And Is Powered By: #{pb} Running On #{os_info}")
      end
    end

The preceding module is similar to the one we discussed in the very first example. We have the run_host method here with ip as a parameter, which will open a connection to the host.
Next, we have send_request_raw, which will fetch the response from the website or web server at / with a GET request. The result fetched will be stored in the variable named res. We pass the value of the response in res to the os_fingerprint method. This method will check whether the response has the Server key in its header; if the Server key is not present, we will be presented with a message saying Unknown OS. However, if the response header has the Server key, we match it against a variety of values using regex expressions. If a match is made, the corresponding value of os is sent back to the calling definition, which is the os_info parameter. Now, we will check which technology is running on the server. We will create a similar function, pb_fingerprint, but will look for the X-Powered-By key rather than Server. Similarly, we will check whether this key is present in the response or not. If the key is not present, the method will return No-Response; if it is present, the value of X-Powered-By is returned to the calling method and gets stored in a variable, pb. Next, we use the http_fingerprint method that we used in the previous examples as well, and store its result in a variable, fp. We simply print out the values returned from os_fingerprint, pb_fingerprint, and http_fingerprint using their corresponding variables. Let's see what output we'll get after running this module:

    Msf auxiliary(http_myscan) > run

    [*] 192.168.75.130:80 is running Microsoft-IIS/7.5 version And Is Powered By: ASP.NET Running On Windows
    [*] Scanned 1 of 1 hosts (100% complete)
    [*] Auxiliary module execution completed

Writing out post-exploitation modules

Now, as we have seen the basics of module building, we can take a step further and try to build a post-exploitation module. A point to remember here is that we can only run a post-exploitation module after a target has been successfully compromised.
So, let's begin with a simple drive disabler module which will disable C: at the target system:

    require 'msf/core'
    require 'rex'
    require 'msf/core/post/windows/registry'

    class Metasploit3 < Msf::Post
      include Msf::Post::Windows::Registry

      def initialize
        super(
          'Name'        => 'Drive Disabler Module',
          'Description' => 'C Drive Disabler Module',
          'License'     => MSF_LICENSE,
          'Author'      => 'Nipun Jaswal'
        )
      end

We started in the same way as we did in the previous modules. We have added the path to all the required libraries we need in this post-exploitation module. However, we have added include Msf::Post::Windows::Registry on the 5th line of the preceding code, which refers to the /core/post/windows/registry.rb file. This will give us the power to use registry manipulation functions with ease using Ruby mixins. Next, we define the type of module and the intended version of Metasploit. In this case, it is Post for post-exploitation, and Metasploit3 is the intended version. We include the same file again because this is a single file and not a separate directory. Next, we define the necessary information about the module in the initialize method, just as we did for the previous modules. Let's see the remaining part of the module:

      def run
        key1 = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\"
        print_line("Disabling C Drive")
        meterpreter_registry_setvaldata(key1, 'NoDrives', '4', 'REG_DWORD')
        print_line("Setting No Drives For C")
        meterpreter_registry_setvaldata(key1, 'NoViewOnDrives', '4', 'REG_DWORD')
        print_line("Removing View On The Drive")
        print_line("Disabled C Drive")
      end
    end # class

We created a variable called key1, and we stored in it the path of the registry where we need to create values to disable the drives (note that the backslashes in the path are escaped, as required in a Ruby double-quoted string). As we are in a meterpreter shell after the exploitation has taken place, we will use the meterpreter_registry_setvaldata function from the /core/post/windows/registry.rb file to create a registry value at the path defined by key1.
This operation will create a new registry value named NoDrives of the REG_DWORD type at the path defined by key1. However, you might be wondering why we have supplied 4 as the bitmask. To calculate the bitmask for a particular drive, we have a little formula: 2^([drive character serial number] - 1). Suppose we need to disable the C drive. We know that C is the third letter of the alphabet. Therefore, we can calculate the exact bitmask value for disabling the C drive as follows:

    2^(3-1) = 2^2 = 4

Therefore, the bitmask is 4 for disabling C:. We also created another value, NoViewOnDrives, to disable the view of these drives, with the exact same parameters. Now, when we run this module, it gives the following output:

So, let's see whether we have successfully disabled C: or not:

Bingo! No C:. We successfully disabled C: from the user's view. Therefore, we can create as many post-exploitation modules as we want according to our needs. I recommend you put some extra time toward the libraries of Metasploit. Make sure you have user-level access rather than SYSTEM for the preceding script to work, as SYSTEM privileges will not create the registry values under HKCU. In addition, we have used HKCU instead of writing HKEY_CURRENT_USER because of the inbuilt normalization that automatically creates the full form of the key. I recommend you check the registry.rb file to see the various available methods.
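The bitmask formula is easy to check mechanically. The following is a small shell sketch (drive_bitmask is our own illustrative helper, not part of Metasploit) that computes 2^(position - 1) for any drive letter:

```shell
# Hypothetical helper: compute the NoDrives bitmask for a drive letter
# using the formula from the text, 2^(position - 1), where A=1, B=2, C=3, ...
drive_bitmask() {
  letter=$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')
  # ASCII code of the letter minus 64 gives its position in the alphabet
  pos=$(( $(printf '%d' "'$letter") - 64 ))
  echo $(( 1 << (pos - 1) ))
}

drive_bitmask C   # 4 -> the value used for NoDrives above
drive_bitmask A   # 1
drive_bitmask D   # 8
```

To hide several drives at once, the individual masks are added together; for example, hiding both C: and D: would use 4 + 8 = 12.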

Packt
23 May 2014
10 min read

3D Websites

(For more resources related to this topic, see here.)

Creating engaging scenes

There is no adopted style for a 3D website. No single metaphor can best describe the process of designing the 3D web. Perhaps what we know the most is what does not work. Often, our initial concept is to model the real world. An early design that was used years ago involved a university that wanted to use its campus map to navigate through its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would've been a bookshelf where everything was in front of you. To view the chemistry department, just grab the chemistry book and click on the virtual pages to view the faculty, curriculum, and other department information. Also, if you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book. Each attempt adds to our knowledge and gets us closer to something better. What we know is what most other applications of computer graphics learned—that reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential. Following this starting point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world. Such items make the rendering slower by just existing. Once we break these bounds, the creative process takes over—perhaps a whimsical version, a parody, something dark and scary, or a world-emphasizing story. Characters in video games and animated movies take on stylized features; the characters are purposely unrealistic or exaggerated. Some of the best animations to exhibit this are Chris Landreth's The Spine and Ryan (Academy Award for best animated short film in 2004), along with his earlier work in psychologically driven animation, where the characters are torn apart by the ravages of personal failure (https://www.nfb.ca/film/ryan).
This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and texture maps from previous demonstrations, but with techniques that are more complex.

Engage thrusters

This scene has two lampposts and three brick walls, yet we only read in the texture map and 3D mesh for one of each and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, thus saving download time and using less memory. A new function, copyObject(), was created; it currently sits inside the main WebGL file, although it can be moved to mesh3dObject.js. In webGLStart(), after the original objects were created, we call copyObject(), passing along the original object with the unique name, location, rotation, and scale. In the following code, we copy the original streetLight0Object into a new streetLight1Object:

    streetLight1Object = copyObject( streetLight0Object, "streetLight1",
        streetLight1Location, [1, 1, 1], [0, 0, 0] );

Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale:

    function copyObject(original, name, translation, scale, rotation) {
        meshObjectArray[ totalMeshObjects ] = new meshObject();
        newObject = meshObjectArray[ totalMeshObjects ];
        newObject.name = name;
        newObject.translation = translation;
        newObject.scale = scale;
        newObject.rotation = rotation;

The object to be copied is named original.
We will not need to set up new buffers since the new 3D mesh can point to the same buffers as the original object:

        newObject.vertexBuffer = original.vertexBuffer;
        newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer;
        newObject.normalsBuffer = original.normalsBuffer;
        newObject.textureCoordBuffer = original.textureCoordBuffer;
        newObject.boundingBoxBuffer = original.boundingBoxBuffer;
        newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer;
        newObject.vertices = original.vertices;
        newObject.textureMap = original.textureMap;

We do need to create a new bounding box matrix, since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false. At this stage, we cannot determine whether the original mesh and texture map have been loaded, since that is done in the background:

        newObject.boundingBoxMatrix = mat4.create();
        newObject.meshLoaded = false;
        totalMeshObjects++;
        return newObject;
    }

There is just one more inclusion, inside drawScene(), to inform us that the original 3D mesh and texture map(s) have been loaded:

    streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded;
    streetLightCover1Object.textureMap = streetLightCover0Object.textureMap;

This is set each time a frame is drawn, and thus is redundant once the mesh and texture map have been loaded, but the additional code is a very small hit in performance. Similar steps are performed for the original brick wall and its two copies. Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps. However, it is more complex here, with the use of spotlights and light attenuation, where the light fades over a distance. The faint moonlight, however, does not fade over a distance.
Opening scene with four light sources: two streetlights, the Products neon sign, and the moon

This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers. The simplest of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA, that must be merged with its color:

    vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0);

The light color also serves another purpose, as it will be passed on to the other two fragment shaders, since each adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign. The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so nothing is pitch black.
Then, we make a call for each of our four light sources (two streetlights, the moon, and the neon sign) to the calculateLightContribution() function:

    void main(void) {
        vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight);
        lightWeighting += uStreetLightColor *
            calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false);
        lightWeighting += uStreetLightColor *
            calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false);
        lightWeighting += uMoonLightColor *
            calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true);
        lightWeighting += uProductTextColor *
            calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true);

All four calls to calculateLightContribution() are multiplied by the light's color (white for the streetlights, gray for the moon, and pink for the neon sign). The parameters in the call to calculateLightContribution(vec3, vec3, vec3, bool) are: the location of the light, its direction, the pixel's normal, and the point light flag. This last parameter is true for a point light that illuminates in all directions, or false if it is a spotlight that points in a specific direction. Since point lights such as the moon or neon sign have no direction, their direction parameter is not used; therefore, it is set to a default, vec3(0.0, 0.0, 0.0). The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue. Colors greater than 1.0 are unpredictable, depending on the graphics card, so the red, green, and blue light colors must be capped at 1.0:

    if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0;
    if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0;
    if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0;
Finally, we calculate the pixels based on the texture map. Only the street and streetlight posts use this shader, and neither has any tiling, but the multiplication by uTextureMapTiling was included in case there was tiling. The fragmentColor based on the texture map is multiplied by lightWeighting—the accumulation of our four light sources—for the final color of each pixel:

    vec4 fragmentColor = texture2D(uSampler,
        vec2(vTextureCoord.s*uTextureMapTiling.s,
             vTextureCoord.t*uTextureMapTiling.t));
    gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0);
    }

In the calculateLightContribution() function, we begin by determining the angle between the light's direction and the point's normal. The dot product is the cosine between the light's direction to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance):

    vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc);
    vec3 vectorLightPosToPixel = normalize(distanceLightToPixel);
    vec3 lightDirNormalized = normalize(lightDir);
    float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal );

A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding this direction. For a pixel to be lit by a spotlight, that pixel must be in this cone of light.
This is the beam width area, where the pixel receives the full amount of light, which fades out towards the cut-off angle, the angle beyond which no more light comes from this spotlight:

With texture maps removed, we reveal the value of the dot product between the pixel normal and the direction of the light

    if ( pointLight ) {
        lightAmt = 1.0;
    }
    else {
        // spotlight
        float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized );
        // note, uStreetLightBeamWidth and uStreetLightCutOffAngle
        // are the cosines of the angles, not actual angles
        if ( angleLightToPixel >= uStreetLightBeamWidth ) {
            lightAmt = 1.0;
        }
        if ( angleLightToPixel > uStreetLightCutOffAngle ) {
            lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) /
                       (uStreetLightBeamWidth - uStreetLightCutOffAngle);
        }
    }

After determining the amount of light at that pixel, we calculate attenuation, which is the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it's dim already, but the other three lights fade out at the maximum distance. The float maxDist = 15.0; code snippet says that after 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, the amount of light is reduced proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15, or 1/3, the amount of light:

    attenuation = 1.0;
    if ( uUseAttenuation ) {
        if ( length(distanceLightToPixel) < maxDist ) {
            attenuation = (maxDist - length(distanceLightToPixel))/maxDist;
        }
        else attenuation = 0.0;
    }

Finally, we multiply the values that make up the light contribution and we are done:

    lightAmt *= angleBetweenLightNormal * attenuation;
    return lightAmt;

Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 - 1. For example, rgb (1.0, 0.5, 0.0), which is orange, would become the normal (1.0, 0.0, -1.0).
This normal is converted to a unit value, or normalized, to (0.707, 0, -0.707):

    vec4 textureMapNormal = vec4(
        (texture2D(uSamplerNormalMap,
            vec2(vTextureCoord.s*uTextureMapTiling.s,
                 vTextureCoord.t*uTextureMapTiling.t)) * 2.0) - 1.0 );
    vec3 pixelNormal = normalize(uNMatrix * normalize(textureMapNormal.rgb) );

A normal mapped brick (without the red brick texture image) reveals how changing the pixel normal alters the shading with various light sources

We call the same calculateLightContribution() function, but we now pass along pixelNormal, calculated using the normal texture map:

    calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false);

From here, much of the code is the same, except that we use pixelNormal in the dot product to determine the angle between the normal and the light sources:

    float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal );

Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file; instead, we use the pixel normal derived from the normal texture map file, brickNormalMap.png.

A normal mapped brick wall with various light sources

Objective complete – mini debriefing

This comprehensive demonstration combined multiple spot and point lights, shared 3D meshes instead of loading the same 3D meshes repeatedly, and deployed normal texture maps for a realistic 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to the web pages found on a typical website. In this example, we just identified a location for Products, using a neon sign to catch the users' attention. As a 3D website is built, we will need better ways to navigate this virtual space, and this is covered in the following section.

Packt
23 May 2014
11 min read

Interacting with Data for Dashboards

(For more resources related to this topic, see here.)

Hierarchies for revealing the dashboard message

It can become difficult to manage data, particularly if you have many columns, and more difficult still if they are similarly named. As you'd expect, Tableau helps you to organize your data so that it is easier to navigate and keep track of everything. From the user perspective, hierarchies improve navigation and use by allowing the users to navigate from a headline down to a detailed level. From the Tableau perspective, hierarchies are groups of columns that are arranged in increasing levels of granularity. Each deeper level of the hierarchy refers to more specific details of the data. Some hierarchies are natural hierarchies, such as date. If Tableau works out that a column is a date, it automatically adds in a hierarchy in this order: year, quarter, month, week, and date. You have seen this already; for example, when you dragged a date across to the Columns shelf, Tableau automatically turned the date into a year. Some hierarchies are not always immediately visible. These hierarchies need to be set up, and we will look at setting up a product hierarchy that straddles different tables. This is a nice feature because it means that the hierarchy can reflect the users' understanding of the data and isn't determined only by the underlying data.

Getting ready

In this article, we will use the existing workbook that you created for this article, with the same data. Let's take a copy of the existing worksheet and call it Hierarchies. To do this, right-click on the Worksheet tab and select the Duplicate Sheet option. You can then rename the sheet to Hierarchies.

How to do it...

Navigate to the DimProductCategory dimension and right-click on the EnglishProductCategoryName attribute. From the pop-up menu, select the Create Hierarchy feature.
You can see its location in the following illustration:

When you select the option, you will get a textbox entitled Create Hierarchy, which will ask you to specify the name of the hierarchy. We will call our hierarchy Product Category. Once you have entered this into the textbox, click on OK. Your hierarchy will now be created, and it will appear at the bottom of the Dimensions list on the left-hand side of Tableau's interface.

Next, go to the DimProductSubcategory dimension and look for the EnglishProductSubcategoryName attribute. Drag it to the Product Category hierarchy, under EnglishProductCategoryName, which is already part of the hierarchy. Now we will add the EnglishProductName attribute, which we will find under the DimProduct dimension. Drag-and-drop it under the EnglishProductSubcategoryName attribute in the Product Category hierarchy. The Product Category hierarchy should now look as follows:

The Product Category hierarchy will be easier to understand if we rename the attributes. To do this, right-click on each attribute and choose Rename: change EnglishProductCategoryName to Product Category, EnglishProductSubcategoryName to Product Subcategory, and EnglishProductName to Product. Once you have done this, the hierarchy should look as follows:

You can now use your hierarchy to change the level of detail that you wish to see in the data visualization. We will now use Product Category in our data visualization. Remove everything from the Rows shelf and drag the Product Category hierarchy to the Rows shelf. Then, click on the plus sign; it will open the hierarchy, and you will see data for the next level under Product Category, which is subcategories. An example of the Tableau workbook is given in the following illustration.
You can see that the biggest differences occurred in the Bikes product category, and they occurred in the years 2006 and 2007 for the Mountain Bikes and Road Bikes subcategories. To summarize, we have used the Hierarchy feature in Tableau to vary the degree of detail we see in the dashboard. How it works… Tableau saves the hierarchy information as part of the Tableau workbook, so when you share the workbook, the hierarchies are preserved. The workbook will need revising if a hierarchy changes or if you add new dimensions that need to be incorporated, so hierarchies carry some maintenance overhead. However, they are very useful features and worth the little extra effort for the help they give the dashboard user. There's more... Dashboarding usually involves providing "at a glance" information so that team members can clearly see the issues in the data and make actionable decisions. Often, we don't need to provide further detail unless we are asked for it, and hierarchies let us answer those more detailed questions while saving space on the page. Let's take the example of a business meeting where the CEO wants to know more about the biggest differences or "swings" in the sales amount by category, and then wants more details. The Tableau analyst can quickly place a hierarchy in order to answer more detailed questions if required, and this is done quite simply as described here. Hierarchies also allow us to encapsulate business rules into the dashboard. In this article, we used product hierarchies. We could also add hierarchies for different calendars, for example, in order to reflect different reporting periods. This allows the dashboard to be easily reused to reflect different reporting calendars, say, when you want to show data according to a fiscal year or a calendar year.
You could have two different hierarchies: one for the fiscal year and the other for the calendar year. The dashboard could contain the same measures but sliced by different calendars according to user requirements. The hierarchies feature fits nicely with the Golden Mantra of Information Visualization, since it allows us to summarize the data and then drill down into it as the next step. See also http://www.tableausoftware.com/about/blog/2013/4/lets-talk-about-sets-23043 Classifying your data for dashboards Bins are a simple way of categorizing and bucketing values depending on the measure value. For example, you could "bin" customers by their age group or by the number of cars that they own. Bins are useful for dashboards because they offer a summary view of the data, which is essential for the "at a glance" function of dashboards. Tableau can create bins automatically, or we can set up bins manually using calculated fields. This article will show both versions in order to meet different business needs. Getting ready We will continue with the existing workbook and the same data. Take a copy of the Hierarchies worksheet by right-clicking on the Worksheet tab and selecting the Duplicate Sheet option, then rename the sheet to Bins. How to do it... Once you have your Bins worksheet in place, right-click on the SalesAmount measure and select the Create Bin option. You can see an example of this in the following screenshot: We will change the value to 5. Once you've done this, press the Load button to reveal the Min, Max, and Diff values of the data, as shown in the following screenshot: When you click on the OK button, you will see a bin appear under the Dimensions area. The following is an example of this: Let's test out our bins! To do this, remove everything from the Rows shelf, leaving only the Product Category hierarchy.
Remove any filters from the worksheet and all of the calculations in the Marks shelf. Next, drag SalesAmount (bin) to the Marks area under the Detail and Tooltip buttons. Once again, take SalesAmount (bin) and drag it to the Color button on the Marks shelf. Now, we will change the size of the data points to reflect the size of the elements. To do this, drag SalesAmount (bin) to the Size button. You can vary the overall size of the elements by right-clicking on the Size button and moving the slider horizontally so that you can get your preferred size. To neaten the image, right-click on the Date column heading and select Hide Field Names for Columns from the list. The Tableau worksheet should now look as follows: This allows us to see some patterns in the data. We can also see more details if we click on the data points; you can see an illustration of the details in the data in the following screenshot: However, we might find that the automated bins are not very clear to business users. We can see in the previous screenshot that the SalesAmount(bin) value is £2,440.00. This may not be meaningful to business users. How can we set the bins so that they are meaningful to business users, rather than being automated by Tableau? For example, what if the business team wants to know about the proportion of their sales that fell into well-defined buckets, sliced by years? Fortunately, we can emulate the same behavior as in bins by simply using a calculated field. We can create a very simple IF… THEN ... ELSEIF formula that will place the sales amounts into buckets, depending on the value of the sales amount. These buckets are manually defined using a calculated field, and we will see how to do this now. Before we begin, take a copy of the existing worksheet called Bins and rename it to Bins Set Manually. To do this, right-click on the Sales Amount metric and choose the Create Calculated Field option. 
In the calculated field, enter the following formula:

IF [SalesAmount] <= 1000 THEN "1000"
ELSEIF [SalesAmount] <= 2000 THEN "2000"
ELSEIF [SalesAmount] <= 3000 THEN "3000"
ELSEIF [SalesAmount] <= 4000 THEN "4000"
ELSEIF [SalesAmount] <= 5000 THEN "5000"
ELSEIF [SalesAmount] <= 6000 THEN "6000"
ELSE "7000"
END

When this formula is entered into the Calculated Field window, it looks as shown in the following screenshot. Rename the calculated field to SalesAmount Buckets. Now that we have our calculated field in place, we can use it in our Tableau worksheet to create a dashboard component. On the Columns shelf, place the SalesAmount Buckets calculated field and the Year(Date) dimension attribute. On the Rows shelf, place Sum(SalesAmount) from the Measures section. Place the Product Category hierarchy on the Color button. Drag SalesAmount Buckets from the Dimensions pane to the Size button on the Marks shelf. Go to the Show Me panel and select the Circle View option. This will give the data visualization a dot plot feel. You can resize the chart by hovering the mouse over the foot of the y axis, where the £0.00 value is located, and dragging. The Tableau worksheet will look as it appears in the following screenshot: To summarize, we have created bins using Tableau's automatic bin feature. We have also looked at ways of manually creating bins using the Calculated Field feature. How it works... Bins are constructed using Tableau's default Bins feature, and we can use calculated fields to make them more useful and complex. They are stored in the Tableau workbook, so you will be able to preserve your work if you send it to someone else. In this article, we have also looked at dot plot visualization, which is a very simple way of representing data that does not use a lot of "ink". A high data/ink ratio helps simplify a data visualization so that the message of the data comes across very clearly.
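To make the difference between the two approaches concrete, here is a small Ruby sketch of the underlying logic: automatic bins are fixed-width buckets (the floor of the value divided by the bin size, multiplied by the bin size), while our calculated field uses explicit thresholds. This is an illustration only, not how Tableau implements bins internally, and the sample values are hypothetical:

```ruby
# Automatic bins: fixed-width buckets (bin size 5, as in the recipe).
# Mirrors the idea only; this is not Tableau's internal implementation.
def auto_bin(value, size = 5)
  (value / size).floor * size
end

# Manual buckets: mirrors the IF ... ELSEIF thresholds of the
# SalesAmount Buckets calculated field.
def sales_bucket(amount)
  [1000, 2000, 3000, 4000, 5000, 6000].each do |threshold|
    return threshold.to_s if amount <= threshold
  end
  "7000"
end

puts auto_bin(12.3)       # => 10
puts sales_bucket(2440.0) # => "3000"
```

This shows why the manual buckets are friendlier for business users: the £2,440.00 sales amount seen in the tooltip falls into a round, pre-agreed "3000" bucket rather than an arbitrary bin boundary.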
Dot plots might be considered old fashioned, but they are very effective and perhaps underused. We can see from the screenshot that the 3000 bucket contained the highest number of sales. We can also see that this figure peaks in the year 2007 and then falls in 2008. This is a dashboard element that could be used as a starting point for further analysis. For example, business users will want to know the reason for the fall in sales in the most frequently occurring bin. See also The Visual Display of Quantitative Information, Edward Tufte, Graphics Press USA
Packt
23 May 2014
11 min read

Working with the sharing plugin

Now that we've dealt with the device events, let's get to the real meat of the project: let's add the sharing plugin and see how to use it. Getting ready Before continuing, be sure to add the plugin to your project: cordova plugin add https://github.com/leecrossley/cordova-plugin-social-message.git Getting on with it This particular plugin is one of many social network plugins. Each one has its benefits, each one has its problems, and the available plugins are changing rapidly. This particular plugin is very easy to use and supports a reasonable number of social networks. On iOS, Facebook, Twitter, Mail, and Flickr are supported. On Android, any installed app that registers for the share intent is supported. The full documentation is available at https://github.com/leecrossley/cordova-plugin-social-message at the time of writing this. It is easy to follow if you need to know more than what we cover here. To show a sharing sheet (the appearance varies based on platform and operating system), all we have to do is this: window.socialmessage.send(message); message is an object that contains any of the following properties: text: This is the main content of the message. subject: This is the subject of the message. This only applies when sending e-mail; most other social networks will ignore this value. url: This is a link to attach to the message. image: This is an absolute path to the image to attach to the message. It must begin with file:/// and the path should be properly escaped (that is, spaces should become %20, and so on). activityTypes (iOS only): This supports activities on various social networks. Valid values are: PostToFacebook, PostToTwitter, PostToWeibo, Message, Mail, Print, CopyToPasteboard, AssignToContact, and SaveToCameraRoll.
In order to create a simple message to share, we can use the following code:

var message = {
    text: "something to send"
};
window.socialmessage.send(message);

To add an image, we can go a step further, as follows:

var message = {
    text: "the caption",
    image: "file:///var/mobile/…/image.png"
};
window.socialmessage.send(message);

Once this method is called, the sharing sheet will appear. On iOS 7, you'll see something like the following screenshot: On Android, you will see something like the following screenshot: What did we do? In this section, we installed the sharing plugin and learned how to use it. In the next sections, we'll cover the modifications required to use this plugin. Modifying the text note edit view We've dispatched most of the typical sections in this project: there's not really any user interface to design, nor are there any changes to the actual note models. All we need to do is modify the HTML template a little to include a share button and add the code to use the plugin. Getting on with it First, let's alter the template in www/html/textNoteEditView.html.
I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar ui-avoid-tool-bar">
      <textarea class="ui-text-box">%NOTE_CONTENTS%</textarea>
    </div>
    <div class="ui-tool-bar">
      <div class="ui-bar-button-group ui-align-left"></div>
      <div class="ui-bar-button-group ui-align-center"></div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-background-tint-color ui-glyph ui-glyph-share share-button"></div>
      </div>
    </div>
  </body>
</html>

Now, let's make the modifications to the view in www/js/app/views/textNoteEditView.js. First, we need to add an internal property that references the share button:

self._shareButton = null;

Next, we need to add code to renderToElement so that we can add an event handler to the share button. We'll do a little bit of checking here to see if we've found the icon, because we don't support sharing of videos and sounds and we don't include that asset in those views. If we didn't have the null check, those views would fail to work. Consider the following code snippet:

self.renderToElement = function () {
  …
  self._shareButton = self.element.querySelector(".share-button");
  if (self._shareButton !== null) {
    Hammer(self._shareButton).on("tap", self.shareNote);
  }
  …
}

Finally, we need to add the method that actually shares the note. Note that we save the note before we share it, since that's how the data in the DOM gets transmitted to the note model.
Consider the following code snippet:

self.shareNote = function () {
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  window.socialmessage.send(message);
}

What did we do? First, we added a toolbar to the view that looks like the following screenshot; note the new sharing icon: Then, we added the code that shares the note and attached that code to the Share button. Here's an example of us sending a tweet from a note on iOS: What else do I need to know? Don't forget that social networks often have size limits. For example, Twitter only supports 140 characters, so if you send a note using Twitter, it needs to be a very short note. On iOS, we could prevent Twitter from being offered, but there's no way to prevent this on Android. Even so, there's no real reason to prevent Twitter from being an option; the user just needs to be familiar enough with the social network to know that they'll have to edit the content before posting it. Also, don't forget that the subject of a message only applies to mail; most other social networks will ignore it. If something is critical, be sure to include it in the text of the message, not only in the subject. Modifying the image note edit view The image note edit view presents an additional difficulty: we can't put the Share button in a toolbar, because doing so would cause positioning difficulties between the TEXTAREA and the toolbar when the soft keyboard is visible. Instead, we'll put it in the lower-right corner of the image. This is done by using the same technique we used to outline the camera button.
Getting on with it Let's edit the template in www/html/imageNoteEditView.html; again, I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar">
      <div class="image-container">
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera non-outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share non-outline share-button"></div>
      </div>
      <textarea class="ui-text-box"
        onblur="this.classList.remove('editing');"
        onfocus="this.classList.add('editing');">%NOTE_CONTENTS%</textarea>
    </div>
  </body>
</html>

Because sharing an image requires a little additional code, we need to override shareNote (which we inherit from the prior task) in www/js/app/views/imageNoteEditView.js:

self.shareNote = function () {
  var fm = noteStorageSingleton.fileManager;
  var nativePath = fm.getNativeURL(self._note.mediaContents);
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  if (self._note.unitValue > 0) {
    message.image = nativePath;
  }
  window.socialmessage.send(message);
}

Finally, we need to add the following styles to www/css/style.css:
div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline,
div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  left: inherit;
  width: 50px;
  top: inherit;
  height: 50px;
}
div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline {
  -webkit-mask-position: 15px 16px;
  mask-position: 15px 16px;
}
div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  -webkit-mask-position: 15px 15px;
  mask-position: 15px 15px;
}

What did we do? Like the previous task, we first modified the template to add the share icon. Then, we added the shareNote code to the view (note that we don't have to add anything to find the button, because we inherit that from the Text Note Edit View). Finally, we modified the style sheet to reposition the Share button appropriately so that it looks like the following screenshot: What else do I need to know? The image needs to be a valid image, or the plugin may crash. This is why we check the value of unitValue in shareNote: it ensures that the image is large enough to attach to the message. If not, we only share the text. Game Over... Wrapping it up And that's it! You've learned how to respond to device events, and you've also added sharing to text and image notes by using a third-party plugin. Can you take the HEAT? The Hotshot Challenge There are several ways to improve the project. Why don't you try a few? Implement the ability to save the note when the app receives a pause event, and then restore the note when the app is resumed. Remember which note is visible when the app is paused, and restore it when the app is resumed. (Hint: localStorage may come in handy.) Add video or audio sharing. You'll probably have to alter the sharing plugin or find another (or an additional) plugin. You'll probably also need to upload the data to an external server so that it can be linked via the social network. For example, it's often customary to link to a video on Twitter by using a link shortener.
The File Transfer plugin might come in handy for this challenge (https://github.com/apache/cordova-plugin-file-transfer/blob/dev/doc/index.md). Summary This article introduced you to a third-party plugin that provides access to e-mail and various social networks. Resources for Article: Further resources on this subject: Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article] Configuring the ChildBrowser plugin [Article] Using Location Data with PhoneGap [Article]
Packt
23 May 2014
14 min read

Building a Private App

(For more resources related to this topic, see here.) Even though the app will be simple and only take a few hours to build, we'll still use good development practices to ensure we create a solid foundation. There are many different approaches to software development and discussing even a fraction of them is beyond the scope of this book. Instead, we'll use a few common concepts, such as requirements gathering, milestones, Test-Driven Development (TDD), frequent code check-ins, and appropriate commenting/documentation. Personal discipline in following development procedures is one of the best things a developer can bring to a project; it is even more important than writing code. This article will cover the following topics: The structure of the app we'll be building The development process Working with the Shopify API Using source control Deploying to production Signing up for Shopify Before we dive back into code, it would be helpful to get the task of setting up a Shopify store out of the way. Sign up as a Shopify partner by going to http://partners.shopify.com. The benefit of this is that partners can provision stores that can be used for testing. Go ahead and make one now before reading further. Keep your login information close at hand; we'll need it in just a moment. Understanding our workflow The general workflow for developing our application is as follows: Pull down the latest version of the master branch. Pick a feature to implement from our requirements list. Create a topic branch to keep our changes isolated. Write tests that describe the behavior desired by our feature. Develop the code until it passes all the tests. Commit and push the code into the remote repository. Pull down the latest version of the master branch and merge it with our topic branch. Run the test suite to ensure that everything still works. Merge the code back with the master branch. Commit and push the code to the remote repository. 
The previous list should give you a rough idea of what is involved in a typical software project involving multiple developers. The use of topic branches ensures that our work in progress won't affect other developers (called breaking the build) until we've confirmed that our code has passed all the tests and resolved any conflicts by merging in the latest stable code from the master branch. The practical upside of this methodology is that it allows bug fixes or work from another developer to be added to the project at any time without us having to worry about incomplete code polluting the build. This also gives us the ability to deploy production from a stable code base. In practice, a lot of projects will also have a production branch (or tagged release) that contains a copy of the code currently running in production. This is primarily in case of a server failure so that the application can be restored without having to worry about new features being released ahead of schedule, and secondly so that if a new deploy introduces bugs, it can easily be rolled back. Building the application We'll be building an application that allows Shopify storeowners to organize contests for their shoppers and randomly select a winner. Contests can be configured based on purchase history and timeframe. For example, a contest could be organized for all the customers who bought the newest widget within the last three days, or anyone who has made an order for any product in the month of August. To accomplish this, we'll need to be able to pull down order information from the Shopify store, generate a random winner, and show the storeowner the results. Let's start out by creating a list of requirements for our application. We'll use this list to break our development into discrete pieces so we can easily measure our progress and also keep our focus on the important features. 
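The heart of the app, picking a random winner from qualifying orders, can be sketched in plain Ruby before any Rails code exists. The data shapes below are invented for illustration and are not the Shopify API's order format:

```ruby
require "date"

# Hypothetical order shape for illustration; real Shopify orders differ.
Order = Struct.new(:customer, :product_id, :created_at)

# Contest eligibility: bought the given product within the timeframe.
def eligible_orders(orders, product_id:, from:, to:)
  orders.select do |o|
    o.product_id == product_id && (from..to).cover?(o.created_at)
  end
end

# Randomly select a winner; an injectable RNG keeps this testable.
def pick_winner(orders, rng: Random.new)
  order = orders.sample(random: rng)
  order && order.customer
end

orders = [
  Order.new("Ann", 1, Date.new(2014, 8, 1)),
  Order.new("Bob", 1, Date.new(2014, 8, 15)),
  Order.new("Cal", 2, Date.new(2014, 8, 20))
]
pool = eligible_orders(orders, product_id: 1,
                       from: Date.new(2014, 8, 1), to: Date.new(2014, 8, 31))
puts pick_winner(pool) # prints "Ann" or "Bob", never "Cal"
```

Keeping the filtering and the random draw as separate functions also matches the TDD approach above: each piece can be tested on its own before any Shopify data is involved.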
Of course, it's difficult to make a complete list of all the requirements and have it stick throughout the development process, which is why a common strategy is to develop in iterations (or sprints). The result of an iteration is a working app that can be reviewed by the client so that the remaining features can be reprioritized if necessary. High-level requirements The requirements list comprises all the tasks we're going to accomplish in this article. The end result will be an application that we can use to run a contest for a single Shopify store. Included in the following list are any related database, business logic, and user interface coding necessary. Install a few necessary gems. Store Shopify API credentials. Connect to Shopify. Retrieve order information from Shopify. Retrieve product information from Shopify. Clean up the UI. Pick a winner from a list. Create contests. Now that we have a list of requirements, we can treat each one as a sprint. We will work in a topic branch and merge our code to the master branch at the end of the sprint. Installing a few necessary gems The first item on our list is to add a few code libraries (gems) to our application. Let's create a topic branch and do just that. To avoid confusion over which branch contains code for which feature, we can start the branch name with the requirement number. We'll additionally prepend the chapter number for clarity, so our format will be <chapter #>_<requirement #>_<branch name>. Execute the following command line in the root folder of the app: git checkout -b ch03_01_gem_updates This command will create a local branch called ch03_01_gem_updates that we will use to isolate our code for this feature. Once we've installed all the gems and verified that the application runs correctly, we'll merge our code back with the master branch. At a minimum we need to install the gems we want to use for testing. For this app we'll use RSpec. 
We'll need to use the development and test group to make sure the testing gems aren't loaded in production. Add the following code in bold to the block present in the Gemfile:

group :development, :test do
  gem "sqlite3"
  # Helpful gems
  gem "better_errors"      # improves error handling
  gem "binding_of_caller"  # used by better_errors
  # Testing frameworks
  gem 'rspec-rails'        # testing framework
  gem "factory_girl_rails" # use factories, not fixtures
  gem "capybara"           # simulate browser activity
  gem "fakeweb"
  # Automated testing
  gem 'guard'              # automated execution of test suite upon change
  gem "guard-rspec"        # guard integration with rspec
  # Only install the rb-fsevent gem if on Mac OS X
  gem 'rb-fsevent'         # used for Growl notifications
end

Now we need to head over to the terminal and install the gems via Bundler with the following command: bundle install The next step is to install RSpec: rails generate rspec:install The final step is to initialize Guard: guard init rspec This will create a Guardfile and fill it with the default code needed to detect file changes. We can now restart our Rails server and verify that everything works properly. We have to do a full restart to ensure that any initialization files are properly picked up. Once we've ensured that our page loads without issue, we can commit our code and merge it back with the master branch:

git add --all
git commit -am "Added gems for testing"
git checkout master
git merge ch03_01_gem_updates
git push

Great! We've completed our first requirement. Storing Shopify API credentials In order to access our test store's API, we'll need to create a Private App and store the provided credentials for future use. Fortunately, Shopify makes this easy for us via the Admin UI: Go to the Apps page. At the bottom of the page, click on the Create a private API key… link. Click on the Generate new Private App button. We'll now be provided with three important pieces of information: the API Key, password, and shared secret.
In addition, we can see from the example URL field that we need to track our Shopify URL as well. Now that we have credentials to programmatically access our Shopify store, we can save them in our application. Let's create a topic branch and get to work:

git checkout -b ch03_02_shopify_credentials

Rails offers a generator called a scaffold that will create the database migration, model, controller, view files, and test stubs for us. Run the following from the command line to create the scaffold for the Account vertical (make sure it is all on one line):

rails g scaffold Account shopify_account_url:string shopify_api_key:string shopify_password:string shopify_shared_secret:string

We'll need to run the database migration to create the database table using the following commands:

bundle exec rake db:migrate
bundle exec rake db:migrate RAILS_ENV=test

Use the following command to update the generated view files to make them Bootstrap compatible:

rails g bootstrap:themed Accounts -f

Head over to http://localhost:3000/accounts and create a new account in our app that uses the Shopify information from the Private App page. It's worth getting Guard to run our test suite every time we make a change so we can ensure that we don't break anything. Open up a new terminal in the root folder of the app and start up Guard:

bundle exec guard

After booting up, Guard will automatically run all our tests. They should all pass because we haven't made any changes to the generated code. If they don't, you'll need to spend time sorting out any failures before continuing. The next step is to make the app more user friendly. We'll make a few changes now and leave the rest for you to do later. Update the layout file so it has accurate navigation. Bootstrap created several dummy links in the header navigation and sidebar.
Update the navigation list in /app/views/layouts/application.html.erb to include the following code:

<a class="brand" href="/">Contestapp</a>
<div class="container-fluid nav-collapse">
  <ul class="nav">
    <li><%= link_to "Accounts", accounts_path %></li>
  </ul>
</div><!--/.nav-collapse -->

Add validations to the account model to ensure that all fields are required when creating/updating an account. Add the following lines to /app/models/account.rb:

validates_presence_of :shopify_account_url
validates_presence_of :shopify_api_key
validates_presence_of :shopify_password
validates_presence_of :shopify_shared_secret

This will immediately cause the controller tests to fail, because they no longer pass in all the required fields when attempting to submit the create form. If you look at the top of the file, you'll see some code that creates the :valid_attributes hash. If you read the comment above it, you'll see that we need to update the hash to contain the following minimally required fields:

# This should return the minimal set of attributes required
# to create a valid Account. As you add validations to
# Account, be sure to adjust the attributes here as well.
let(:valid_attributes) { {
  "shopify_account_url" => "MyString",
  "shopify_password" => "MyString",
  "shopify_api_key" => "MyString",
  "shopify_shared_secret" => "MyString"
} }

This is a prime example of why having a testing suite is important. It keeps us from writing code that breaks other parts of the application, or in this case, helps us discover a weakness we might not have known we had: the ability to create a new account record without filling in any fields! Now that we have satisfied this requirement and all our tests pass, we can commit the code and merge it with the master branch:

git add --all
git commit -am "Account model and related files"
git checkout master
git merge ch03_02_shopify_credentials
git push

Excellent! We've now completed another critical piece!
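To see what the presence validations buy us, here is a plain-Ruby sketch of the rule they enforce: an account is valid only when every credential field is non-blank. This mimics the behaviour for illustration; it is not how ActiveModel implements validates_presence_of:

```ruby
# The four credential fields the Account model requires.
REQUIRED_FIELDS = [:shopify_account_url, :shopify_api_key,
                   :shopify_password, :shopify_shared_secret]

# An account is "valid" only when every required field is present and
# non-blank, matching the effect of validates_presence_of.
def account_valid?(attrs)
  REQUIRED_FIELDS.all? { |field| !attrs[field].to_s.strip.empty? }
end

puts account_valid?(shopify_account_url: "example.myshopify.com",
                    shopify_api_key: "key",
                    shopify_password: "pass",
                    shopify_shared_secret: "secret") # => true
puts account_valid?(shopify_api_key: "key")          # => false
```

This is exactly the weakness the failing controller tests exposed: without the validations, the second case, an account with only one field filled in, would happily be saved.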
Connecting to Shopify Now that we have a test store to work with, we're ready to implement the code necessary to connect our app to Shopify. First, we need to create a topic branch:

git checkout -b ch03_03_shopify_connection

We are going to use the official Shopify gem to connect our app to our test store, as well as to interact with the API. Add this to the Gemfile under the gem 'bootstrap-sass' line:

gem 'shopify_api'

Update the bundle from the command line:

bundle install

We'll also need to restart Guard for it to pick up the new gem. This is typically done by pressing a key combination such as Ctrl + C, or by typing the word exit and pressing the Enter key. I've written a class that encapsulates the Shopify connection logic and initializes the global ShopifyAPI class that we can then use to interact with the API. You can find the code for this class in ch03_shopify_integration.rb. You'll need to copy the contents of this file to your app in a new file located at /app/services/shopify_integration.rb. The contents of the spec file ch03_shopify_integration_spec.rb need to be pasted into a new file located at /spec/services/shopify_integration_spec.rb. Using this class will allow us to execute something like ShopifyAPI::Order.find(:all) to get a list of orders, or ShopifyAPI::Product.find(1234) to retrieve the product with the ID 1234. The spec file contains tests for functionality that we haven't built yet and will initially fail. We'll fix this soon! We are going to add a Test Connection button to the account page that will give the user instant feedback as to whether or not the credentials are valid. Because we will be adding a new action to our application, we will need to update the controller, request, routing, and view tests before proceeding.
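The connection class itself isn't listed in the article. As a rough, hypothetical sketch of what such logic often looked like with the shopify_api gem of that era, private-app credentials were embedded in the API base URL. The method name and account shape below are invented for illustration and may differ from the real shopify_integration.rb:

```ruby
# Hypothetical helper: build the base URL used by the shopify_api gem's
# classic private-app pattern. The real shopify_integration.rb may differ.
def shopify_site_url(account)
  "https://#{account[:shopify_api_key]}:#{account[:shopify_password]}@" \
    "#{account[:shopify_account_url]}/admin"
end

account = {
  shopify_api_key:     "apikey",
  shopify_password:    "apipass",
  shopify_account_url: "example.myshopify.com"
}

puts shopify_site_url(account)
# => https://apikey:apipass@example.myshopify.com/admin
# In the service class, a URL like this would then be handed to the gem,
# e.g. ShopifyAPI::Base.site = shopify_site_url(account)
```

Keeping URL construction in a small, pure method like this is also what makes the Test Connection behaviour straightforward to unit test without hitting the network.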
Given the nature of this article, and because in this case we're connecting to an external service, topics such as mocking, test writing, and so on will have to be reviewed as homework. I recommend watching the excellent screencasts created by Ryan Bates at http://railscasts.com as a primer on testing in Rails.

The first step is to update the resources :accounts route in the /config/routes.rb file with the following block:

resources :accounts do
  member do
    get 'test_connection'
  end
end

Copy the controller code from ch03_accounts_controller.rb and replace the code in the /app/controllers/accounts_controller.rb file. This new code adds the test_connection method as well as ensuring that the account is loaded properly. Finally, we need to add a button to /app/views/account/show.html.erb that will call this action in div.form-actions:

<%= link_to "Test Connection", test_connection_account_path(@account), :class => 'btn' %>

If we view the account page in our browser, we can now test our Shopify integration. Assuming that everything was copied correctly, we should see a success message after clicking on the Test Connection button. If everything was not copied correctly, we'll see the message that the Shopify API returned to us as a clue to what isn't working. Once all the tests pass, we can commit the code and merge it with the master branch:

git add --all
git commit -am "Shopify connection and related UI"
git checkout master
git merge ch03_03_shopify_connection
git push

Having fun? Good, because things are about to get heavy.

Summary: This article briefly explained the integration with Shopify's API in order to retrieve product and order information from the shop. The UI is then streamlined a bit before the logic to create a contest is built.
Resources for Article: Further resources on this subject: Integrating typeahead.js into WordPress and Ruby on Rails [Article] Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article] Designing and Creating Database Tables in Ruby on Rails [Article]

Ruby and Metasploit Modules
Packt
23 May 2014
11 min read
(For more resources related to this topic, see here.)

Reinventing Metasploit

Consider a scenario where the systems under the scope of the penetration test are very large in number, and we need to perform a post-exploitation function such as downloading a particular file from all the systems after exploiting them. Downloading a particular file from each system manually will consume a lot of time and will be tiring as well. Therefore, in a scenario like this, we can create a custom post-exploitation script that will automatically download a file from all the systems that are compromised.

This article focuses on building programming skill sets for Metasploit modules. It kicks off with the basics of Ruby programming and ends with developing various Metasploit modules. In this article, we will cover the following points:

- Understanding the basics of Ruby programming
- Writing programs in Ruby
- Exploring modules in Metasploit
- Writing your own modules and post-exploitation modules

Let's now understand the basics of Ruby programming and gather the required essentials we need to code Metasploit modules. Before we delve deeper into coding Metasploit modules, we must know the core features of Ruby that are required in order to design these modules. However, why do we require Ruby for Metasploit? The following key points will help us understand the answer to this question:

- Constructing an automated class for reusable code is a feature of the Ruby language that matches the needs of Metasploit
- Ruby is an object-oriented programming language
- Ruby is an interpreter-based language that is fast and consumes less development time
- Perl, which Metasploit used earlier, did not support code reuse

Ruby – the heart of Metasploit

Ruby is indeed the heart of the Metasploit framework. However, what exactly is Ruby? According to the official website, Ruby is a simple and powerful programming language. Yukihiro Matsumoto designed it in 1995.
It is further defined as a dynamic, reflective, and general-purpose object-oriented programming language with functions similar to Perl. You can download Ruby for Windows/Linux from http://rubyinstaller.org/downloads/. You can refer to an excellent resource for learning Ruby practically at http://tryruby.org/levels/1/challenges/0.

Creating your first Ruby program

Ruby is an easy-to-learn programming language. Now, let's start with the basics of Ruby. However, remember that Ruby is a vast programming language, and covering all its capabilities would push us beyond the scope of this article. Therefore, we will only stick to the essentials that are required in designing Metasploit modules.

Interacting with the Ruby shell

Ruby offers an interactive shell too. Working on the interactive shell will help us understand the basics of Ruby clearly. So, let's get started. Open your CMD/terminal and type irb in it to launch the Ruby interactive shell. Let's input something into the Ruby shell and see what happens; suppose I type in the number 2 as follows:

irb(main):001:0> 2
=> 2

The shell throws back the value. Now, let's give another input such as the addition operation as follows:

irb(main):002:0> 2+3
=> 5

We can see that if we input numbers using an expression style, the shell gives us back the result of the expression. Let's perform some functions on a string, such as storing the value of a string in a variable, as follows:

irb(main):005:0> a= "nipun"
=> "nipun"
irb(main):006:0> b= "loves metasploit"
=> "loves metasploit"

After assigning values to the variables a and b, let's see what the shell response will be when we write a and a+b on the shell's console:

irb(main):014:0> a
=> "nipun"
irb(main):015:0> a+b
=> "nipunloves metasploit"

We can see that when we typed in a as an input, it reflected the value stored in the variable named a. Similarly, a+b gave us back the concatenated result of variables a and b.
Defining methods in the shell

A method or function is a set of statements that will execute when we make a call to it. We can declare methods easily in Ruby's interactive shell, or we can declare them in a script as well. Methods are an important aspect of working with Metasploit modules. Let's see the syntax:

def method_name [( [arg [= default]]...[, * arg [, &expr ]])]
  expr
end

To define a method, we use def followed by the method name, with arguments and expressions in parentheses. We also use an end statement following all the expressions to close the method definition. Here, arg refers to the arguments that a method receives, and expr refers to the expressions that a method receives or calculates inline. Let's have a look at an example:

irb(main):001:0> def week2day(week)
irb(main):002:1> week=week*7
irb(main):003:1> puts(week)
irb(main):004:1> end
=> nil

We defined a method named week2day that receives an argument named week. Furthermore, we multiplied the received argument by 7 and printed out the result using the puts function. Let's call this function with 4 as the argument:

irb(main):005:0> week2day(4)
28
=> nil

We can see our function printing out the correct value after performing the multiplication operation. Ruby offers two different functions to print output: puts and print. However, when it comes to the Metasploit framework, the print_line function is used.

Variables and data types in Ruby

A variable is a placeholder for values that can change at any given time. In Ruby, we declare a variable only when we need to use it. Ruby supports numerous data types, but we will only discuss those that are relevant to Metasploit. Let's see what they are.

Working with strings

Strings are objects that represent a stream or sequence of characters. In Ruby, we can assign a string value to a variable with ease, as seen in the previous example.
By simply defining the value in double quotation marks or a single quotation mark, we can assign a value to a string. It is recommended to use double quotation marks, because single quotations can create problems. Let's have a look at the problem that may arise:

irb(main):005:0> name = 'Msf Book'
=> "Msf Book"
irb(main):006:0> name = 'Msf's Book'
irb(main):007:0' '

We can see that when we used a single quotation mark, it worked. However, when we tried to put Msf's instead of the value Msf, an error occurred. This is because it read the single quotation mark in the Msf's string as the end of the single-quoted string, which is not the case; this situation caused a syntax-based error.

The split function

We can split the value of a string into a number of consecutive variables using the split function. Let's have a look at a quick example that demonstrates this:

irb(main):011:0> name = "nipun jaswal"
=> "nipun jaswal"
irb(main):012:0> name,surname=name.split(' ')
=> ["nipun", "jaswal"]
irb(main):013:0> name
=> "nipun"
irb(main):014:0> surname
=> "jaswal"

Here, we have split the value of the entire string into two consecutive strings, name and surname, by using the split function. The function split the string into two strings by treating the space as the split position.

The squeeze function

The squeeze function removes repeated consecutive characters, such as extra spaces, from the given string, as shown in the following code snippet:

irb(main):016:0> name = "Nipun  Jaswal"
=> "Nipun  Jaswal"
irb(main):017:0> name.squeeze
=> "Nipun Jaswal"

Numbers and conversions in Ruby

We can use numbers directly in arithmetic operations. However, remember to convert a string into an integer when working on user input using the .to_i function. Similarly, we can convert an integer into a string using the .to_s function.
Let's have a look at some quick examples and their output:

irb(main):006:0> b="55"
=> "55"
irb(main):007:0> b+10
TypeError: no implicit conversion of Fixnum into String
        from (irb):7:in `+'
        from (irb):7
        from C:/Ruby200/bin/irb:12:in `<main>'
irb(main):008:0> b.to_i+10
=> 65
irb(main):009:0> a=10
=> 10
irb(main):010:0> b="hello"
=> "hello"
irb(main):011:0> a+b
TypeError: String can't be coerced into Fixnum
        from (irb):11:in `+'
        from (irb):11
        from C:/Ruby200/bin/irb:12:in `<main>'
irb(main):012:0> a.to_s+b
=> "10hello"

We can see that when we assigned a value to b in quotation marks, it was considered a string, and an error was generated while performing the addition operation. Nevertheless, as soon as we used the to_i function, it converted the value from a string into an integer, and the addition was performed successfully. Similarly, with strings, when we tried to concatenate an integer with a string, an error showed up. However, after the conversion, it worked.

Ranges in Ruby

Ranges are important aspects and are widely used in auxiliary modules such as scanners and fuzzers in Metasploit. Let's define a range and look at the various operations we can perform on this data type:

irb(main):028:0> zero_to_nine= 0..9
=> 0..9
irb(main):031:0> zero_to_nine.include?(4)
=> true
irb(main):032:0> zero_to_nine.include?(11)
=> false
irb(main):002:0> zero_to_nine.each{|zero_to_nine| print(zero_to_nine)}
0123456789=> 0..9
irb(main):003:0> zero_to_nine.min
=> 0
irb(main):004:0> zero_to_nine.max
=> 9

We can see that a range offers various operations such as searching, finding the minimum and maximum values, and displaying all the data in a range. Here, the include? function checks whether the value is contained in the range or not. In addition, the min and max functions display the lowest and highest values in a range.

Arrays in Ruby

We can simply define arrays as a list of various values.
Let's have a look at an example:

irb(main):005:0> name = ["nipun","james"]
=> ["nipun", "james"]
irb(main):006:0> name[0]
=> "nipun"
irb(main):007:0> name[1]
=> "james"

So, up to this point, we have covered all the required variables and data types that we will need for writing Metasploit modules. For more information on variables and data types, refer to the following link:

http://www.tutorialspoint.com/ruby/

Refer to a quick cheat sheet for using Ruby programming effectively at the following links:

https://github.com/savini/cheatsheets/raw/master/ruby/RubyCheat.pdf
http://hyperpolyglot.org/scripting

Methods in Ruby

A method is another name for a function. Programmers coming from languages other than Ruby might use these terms interchangeably. A method is a subroutine that performs a specific operation. The use of methods enables the reuse of code and decreases the length of programs significantly. Defining a method is easy; a definition starts with the def keyword and ends with the end statement. Let's consider a simple program to understand how they work, for example, printing out the square of 50:

def print_data(par1)
  square = par1*par1
  return square
end

answer = print_data(50)
print(answer)

The print_data method receives the parameter sent from the main function, multiplies it with itself, and sends it back using the return statement. The program saves this returned value in a variable named answer and prints the value.

Decision-making operators

Decision making is as simple a concept here as in any other programming language.
Let's have a look at an example:

irb(main):001:0> 1 > 2
=> false
irb(main):002:0> 1 < 2
=> true

Let's also consider the case of string data:

irb(main):005:0> "Nipun" == "nipun"
=> false
irb(main):006:0> "Nipun" == "Nipun"
=> true

Let's consider a simple program with decision-making operators (note that in a Ruby script, a method must be defined before it is called, so the function is placed above the main code):

#Function
def decision(par1)
  print(par1)
  if(par1%2==0)
    print("Number is Even")
  else
    print("Number is Odd")
  end
end

#Main
num = gets
num1 = num.to_i
decision(num1)

We ask the user to enter a number and store it in a variable named num using gets. However, gets will save the user input in the form of a string. So, let's first change its data type to an integer using the to_i method and store it in a different variable named num1. Next, we pass this value as an argument to the method named decision and check whether the number is divisible by two. If the remainder is equal to zero, the number is divisible by 2, which is why the if block is executed; if the condition is not met, the else block is executed. When executed in a Windows-based environment, the preceding program prints the number that was entered, followed by whether it is even or odd.

Setting Up for Photoreal Rendering
Packt
23 May 2014
6 min read
(For more resources related to this topic, see here.)

Rendering process

The following are the steps of the SketchUp and Thea rendering process. This process will work for other renderers too, and is a good way of structuring your workflow, because you achieve great results in little time. For example, why find out a material hasn't mapped at the right scale only after an hour-long render? With the following process, you will find that out in a few minutes with a quick test render.

Step 1: Preparing the SketchUp model
Step 2: Performing an initial test render
Step 3: Assigning materials
Step 4: Defining lighting
Step 5: Inserting extra entourage
Step 6: Production rendering
Step 7: Postproduction rendering

We are going to look at each of these steps in detail using a small test room and an atrium scene. The test room is a variation of the famous Cornell Box, a well-established test setup to evaluate rendering algorithms. You can also use any scene that you have set up yourself in SketchUp.

Thea for SketchUp interface

The main interface for working with Thea in SketchUp is the Thea for SketchUp plugin. The plugin gives you access to all the features that are necessary to set up and render a scene out of SketchUp. This is beneficial for a quick workflow, because you don't have to switch between SketchUp and the Thea Studio application during the setup of your scene. The Thea plugin is accessible by navigating to Plugins | Thea Render. It is split into two main windows: the Thea Tool window and the Thea Rendering Window. There is also a toolbar with two buttons to enable each of these windows.

The Thea Tool window

We will use the buttons and controls of the tool window to set up the camera options and add materials and other elements that will be specially treated during the export (such as lights).
The window is structured in four panels (or tabs) that you can see in the following image. When the Thea Tool is active, you will also see a red frame displayed in the SketchUp 3D window that shows the field of view for the current Thea camera. This either frames the area that will be rendered or indicates, via arrows to the left and right, that the camera view is wider than the SketchUp window. You will also notice that the SketchUp cursor changes when you have the Thea Tool window open. Depending on the current tool in use, there are a number of different cursor shapes. The default is a small cross and is used to select SketchUp materials for the material assignment.

The Thea Rendering Window

The main window is the place where you can set up, control, and view the rendering process itself. The window has a large area at the top where the rendered image is displayed. Below it is a row of buttons to save and refresh rendered images and to start, pause, and stop the current render. You can see a screenshot of the rendering window with an open scene in the following image. At the bottom of the window, you will find the control elements for the rendering and display of images and the advanced scene options.

Step 1 – Preparing the SketchUp model

To follow the steps in this article, you can download the Cornell Box SketchUp file from Packt Support. In this demo scene, the model is already prepared for 3D rendering, and there are scenes set up to follow the steps below. If you imported our test box from the 3D Warehouse (search for Cornell Box) or used your own SketchUp model, you should do some preparations before you can start rendering:

- Create a copy of your SketchUp file just for rendering. You will create new scenes and content dedicated to the rendering process, and you can save all this in a dedicated SketchUp file.
- Remove unused content. Check the model for hidden geometry, unused and obsolete scene tabs and layers, and especially materials. Purge the material list.
- Identify the important materials in the scene, especially those that are used for windows and nearby large objects.
- Double-check the orientation of surfaces that you want to convert to emitting surfaces or to which you want to apply a displacement map.
- Create and save important camera viewpoints in scene tabs.

Step 2 – Performing an initial test render

You are now going to do a test render to see what the model looks like when rendered in another renderer. You can use this first test to identify areas that need more modeling or parts where you should add textures to make them more interesting.

1. In SketchUp, select the view you want to render. If you use the Cornell scene from the Packt Publishing website, select the Step 2 scene. If you imported the Cornell box from the 3D Warehouse, select one side of the box and hide it so that you can see the objects inside.
2. Open the Thea Tool window by navigating to Plugins | Thea Render | Thea Tool.
3. Check that the resolution on the camera tab is set to 800 x 600 pixels, and the aspect ratio is set to 4:3. These are the defaults.
4. Open the Thea render window by navigating to Plugins | Thea Render | Thea Rendering Window.
5. In the settings area of the Thea window, select Adaptive (BSD) as the render mode. Then, check that the Preset selection is set to 00. Exterior Preview.
6. Click on the Start button. Your scene will render, and you should see the progress in the image area of the window. The rendering should complete in less than a minute (depending on your computer's performance, of course). The final image will look something like the following image.

Review the rendered image, and pay attention to the following aspects:

- Check that all the textures are in place correctly. Make a note of where textures are missing or distorted. (Note that there are no textures in our scene yet.)
- Look out for the quality of the geometry details, especially on rounded corners and smooth surfaces such as the teapot and sphere in our scene.
- Search for areas in the image that look unnaturally "empty". You should consider adding further geometry or a surface texture to make these areas more visually interesting.
- Also, look at the distribution of the objects in your scene. Remember that this is a 2D representation of your room that should look well balanced without knowledge of the 3D model behind it.

If you have found any issues with your scene, you should now go back into SketchUp and make some corrections. This is the export-check loop, which you may have to repeat a few times. The more you get used to SketchUp and your rendering application, the less you will need to do this. However, for now, there is a lot to learn by performing this exercise, so the time is well spent.

Summary

In this article, we got a glimpse of the ways in which we can use tools such as Thea for performing photorealistic rendering on our scenes.

Resources for Article: Further resources on this subject: Walkthrough Tools within SketchUp 7.1 [Article] Diving Straight into Photographic Rendering [Article] Case study – modifying a GoPro wrench [Article]
Ranges
Packt
22 May 2014
11 min read
(For more resources related to this topic, see here.)

Sorting ranges efficiently

Phobos' std.algorithm includes sorting algorithms. Let's look at how they are used, what requirements they have, and the dangers of trying to implement range primitives without minding their efficiency requirements.

Getting ready

Let's make a linked list container that exposes an appropriate view, a forward range, and an inappropriate view, a random access range that doesn't meet its efficiency requirements. A singly-linked list can only efficiently implement forward iteration due to its nature; the only tool it has is a pointer to the next element. Implementing any other range primitives will require loops, which is not recommended. Here, however, we'll implement a fully functional range, with assignable elements, length, bidirectional iteration, random access, and even slicing on top of a linked list to see the negative effects this has when we try to use it.

How to do it…

We're going to both sort and benchmark this program.

To sort

Let's sort ranges by executing the following steps:

Import std.algorithm.

Determine the predicate you need. The default is (a, b) => a < b, which results in an ascending order when the sorting is complete (for example, [1,2,3]). If you want ascending order, you don't have to specify a predicate at all. If you need descending order, you can pass a greater-than predicate instead, as shown in the following line of code:

auto sorted = sort!((a, b) => a > b)([1,2,3]); // results: [3,2,1]

When doing string comparisons, the functions std.string.cmp (case-sensitive) or std.string.icmp (case-insensitive) may be used, as is done in the following code:

auto sorted = sort!((a, b) => cmp(a, b) < 0)(["b", "c", "a"]); // results: a, b, c

Your predicate may also be used to sort based on a struct member, as shown in the following code:

auto sorted = sort!((a, b) => a.value < b.value)(structArray);

Pass the predicate as the first compile-time argument.
The range you want to sort is passed as the runtime argument. If your range is not already sortable (if it doesn't provide the necessary capabilities), you can convert it to an array using the array function from std.range, as shown in the following code:

auto sorted = sort(fibonacci().take(10)); // won't compile, not enough capabilities
auto sorted = sort(fibonacci().take(10).array); // ok, good

Use the sorted range. It has a unique type from the input to signify that it has been successfully sorted. Other algorithms may use this knowledge to increase their efficiency.

To benchmark

Let's sort objects using benchmark by executing the following steps:

1. Put our range and skeleton main function from the Getting ready section of this recipe into a file.
2. Use std.datetime.benchmark to test the sorting of an array from the appropriate walker against the slow walker, and print the results at the end of main. The code is as follows:

auto result = benchmark!(
    { auto sorted = sort(list.walker.array); },
    { auto sorted = sort(list.slowWalker); }
)(100);
writefln("Emulation resulted in a sort that was %d times slower.",
    result[1].hnsecs / result[0].hnsecs);

3. Run it. Your results may vary slightly, but you'll see that the emulated, inappropriate range functions are consistently slower. The following is the output:

Emulation resulted in a sort that was 16 times slower.

4. Tweak the size of the list by changing the initialization loop. Instead of 1000 entries, try 2000 entries. Also, try to compile the program with inlining and optimization turned on (dmd –inline –O yourfile.d) and see the difference. The emulated version will be consistently slower, and as the list becomes longer, the gap will widen. On my computer, a growing list size led to a growing slowdown factor, as shown in the following table:

List size    Slowdown factor
500          13
1000         16
2000         29
4000         73

How it works…

The interface to Phobos' main sort function hides much of the complexity of the implementation.
As long as we follow the efficiency rules when writing our ranges, things either just work, or fail, telling us we must call array on the range before we can sort it. Building an array has a cost in both time and memory, which is why it isn't performed automatically (std.algorithm prefers lazy evaluation whenever possible for best speed and minimum memory use). However, as you can see in our benchmark, building an array is much cheaper than emulating unsupported functions.

The sort algorithms require a full-featured range and will modify the range you pass instead of allocating memory for a copy. Thus, the range you pass must support random access, slicing, and either assignable or swappable elements. The prime example of such a range is a mutable array. This is why it is often necessary to use the array function when passing data to sort.

Our linked list code used static if with a compile-time parameter as a configuration tool. The implemented functions include opSlice and properties that return ref. The ref value can only be used on function return values or parameters. Assignments to a ref value are forwarded to the original item. The opSlice function is called when the user tries to use the slice syntax: obj[start .. end].

Inside the beSlow condition, we broke the main rule of implementing range functions: avoid loops. Here, we see the consequences of breaking that rule; it ruined the algorithm's restrictions and optimizations, resulting in code that performs very poorly. If we follow the rules, we at least know where a performance problem will arise and can handle it gracefully.

For ranges that do not implement the fast length property, std.algorithm includes a function called walkLength that determines the length by looping through all items (like we did in the slow length property).
The walkLength function has a longer name than length precisely to warn you that it is a slower function, running in O(n) (linear with length) time instead of O(1) (constant) time. Slower functions are OK; they just need to be explicit so that the user isn't surprised.

See also

The std.algorithm module also includes other sorting algorithms that may fit a specific use case better than the generic (automatically specialized) function. See the documentation at http://dlang.org/phobos/std_algorithm.html for more information.

Searching ranges

Phobos' std.algorithm module includes search functions that can work on any ranges. It automatically specializes based on type information. Searching a sorted range is faster than searching an unsorted range.

How to do it…

Searching has a number of different scenarios, each with different methods:

- If you want to know whether something is present, use canFind.
- Finding an item generically can be done with the find function. It returns the remainder of the range, with the located item at the front.
- When searching for a substring in a string, you can use haystack.find(boyerMooreFinder(needle)). This uses the Boyer-Moore algorithm, which may give better performance.
- If you want to know the index where the item is located, use countUntil. It returns a numeric index into the range, just like the indexOf function for strings.
- Each find function can take a predicate to customize the search operation.
- When you know your range is sorted but the type doesn't already prove it, you may call assumeSorted on it before passing it to the search functions. The assumeSorted function has no runtime cost; it only adds information to the type that is used for compile-time specialization.

How it works…

The search functions in Phobos make use of the ranges' available features to choose good-fit algorithms. Pass them efficiently implemented ranges with accurate capabilities to get the best performance.
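The searching scenarios listed above can be combined into a short example. This is an illustrative sketch, not code from this recipe, and it assumes a reasonably recent Phobos:

```d
// Illustrative use of the search functions described above.
import std.algorithm : canFind, find, countUntil;
import std.range : assumeSorted;

void main()
{
    auto data = [1, 3, 5, 7, 9];

    // Presence check
    assert(data.canFind(5));

    // find returns the remainder of the range, located item at the front
    assert(data.find(5) == [5, 7, 9]);

    // countUntil returns a numeric index, like indexOf for strings
    assert(data.countUntil(7) == 3);

    // assumeSorted adds compile-time information that enables a faster,
    // specialized search on the already-sorted data
    auto sorted = data.assumeSorted;
    assert(sorted.contains(5));
}
```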
The find function returns the remainder of the data because this is the most general behavior; it doesn't need random access, as returning an index would, and doesn't require an additional function if you are implementing a function to split a range on a given condition. The find function can work with a basic input range, serving as a foundation to implement whatever you need on top of it, and it will transparently optimize to use more range features if available.

Using functional tools to query data

The std.algorithm module includes a variety of higher-order ranges that provide tools similar to functional tools. Here, we'll see how D code can be similar to a SQL query. A SQL query is as follows:

SELECT id, name, strcat("Title: ", title)
FROM users
WHERE name LIKE 'A%'
ORDER BY id DESC
LIMIT 5;

How would we express something similar in D?

Getting ready

Let's create a struct to mimic the data table and make an array with some demo information. The code is as follows:

struct User {
    int id;
    string name;
    string title;
}

User[] users;
users ~= User(1, "Alice", "President");
users ~= User(2, "Bob", "Manager");
users ~= User(3, "Claire", "Programmer");

How to do it…

Let's use functional tools to query data by executing the following steps:

1. Import std.algorithm.
2. Use sort to translate the ORDER BY clause. If your dataset is large, you may wish to sort it at the end. This will likely require a call to array, but it will only sort the result set instead of everything. With a small dataset, sorting early saves an array allocation.
3. Use filter to implement the WHERE clause.
4. Use map to implement the field selection and functions. The std.typecons.tuple module can also be used to return specific fields.
5. Use std.range.take to implement the LIMIT clause.
6. Put it all together and print the result. The code is as follows:

import std.algorithm;
import std.range;
import std.typecons : tuple; // we use this below

auto resultSet = users.
    sort!((a, b) => a.id > b.id).
    // the ORDER BY clause
    filter!((item) => item.name.startsWith("A")). // the WHERE clause
    take(5). // the LIMIT clause
    map!((item) => tuple(item.id, item.name, "Title: " ~ item.title)); // the field list and transformations

import std.stdio;
foreach(line; resultSet)
    writeln(line[0], " ", line[1], " ", line[2]);

It will print the following output:

1 Alice Title: President

How it works…

Many SQL operations or list comprehensions can be expressed in D using building blocks from std.algorithm. They all work generally the same way; they take a predicate as a compile-time argument. The predicate is passed one or two items at a time, and you perform a check or transformation on it. Chaining functions together with the dot syntax, as we did here, is possible thanks to uniform function call syntax; the chain could also be rewritten as nested calls, such as map!pred(take(5, filter!pred(sort!pred(users)))). It depends on the author's preference, as both styles work exactly the same way.

It is important to remember that all std.algorithm higher-order ranges are evaluated lazily. This means no computations, such as looping over or printing, are actually performed until they are required. Writing code using filter, take, map, and many other functions is akin to preparing a query. To execute it, you may print or loop over the result, or if you want to save it to an array for use later, simply call .array at the end.

There's more…

The std.algorithm module also includes other classic functions, such as reduce. It works the same way as the others.

D has a feature called pure functions. The functions in std.algorithm are conditionally pure, which means they can be used in pure functions if and only if the predicates you pass are also pure. With lambda functions, like we've been using here, the compiler will often automatically deduce this for you. If you use other functions that you define as predicates and want to use them in a pure function, be sure to mark them pure as well.
See also

Visit http://dconf.org/2013/talks/wilson.html, where Adam Wilson's DConf 2013 talk on porting C# to D showed how to translate some real-world LINQ code to D.

Summary

In this article, we learned how to sort ranges in an efficient manner by using sorting algorithms, how to search a range using different functions, and how to use functional tools to query data, similar to a SQL query.

Packt
22 May 2014
13 min read

A/B Testing – Statistical Experiments for the Web

Defining A/B testing

At its most fundamental level, A/B testing just involves creating two different versions of a web page. Sometimes, the changes are major redesigns of the site or the user experience, but usually, the changes are as simple as changing the text on a button. Then, for a short period of time, new visitors are randomly shown one of the two versions of the page. The site tracks their behavior, and the experiment determines whether one version or the other increases the users' interaction with the site. This may mean more click-throughs, more purchases, or any other measurable behavior.

This is similar to methods used in other domains under different names. The basic framework randomly tests two or more groups simultaneously and is sometimes called randomized controlled experiments or online controlled experiments. It's also sometimes referred to as split testing, as the participants are split into two groups. These are all examples of between-subjects experiment design.

Experiments that use these designs all split the participants into two groups. One group, the control group, gets the original environment. The other group, the test group, gets the modified environment that those conducting the experiment are interested in testing.

Experiments of this sort can be single-blind or double-blind. In single-blind experiments, the subjects don't know which group they belong to. In double-blind experiments, those conducting the experiments also don't know which group the subjects they're interacting with belong to. This safeguards the experiment against biases that can be introduced by participants or experimenters being aware of which group a subject belongs to. For example, participants could get more engaged if they believe they're in the test group because it is newer in some way. Or, an experimenter could treat a subject differently in a subtle way because of the group that the subject belongs to.
As the computer is the one that directly conducts the experiment, and because those visiting your website aren't aware of which group they belong to, website A/B testing is generally an example of a double-blind experiment. Of course, this is an argument for only conducting the test on new visitors. Otherwise, the user might recognize that the design has changed and throw the experiment off. For example, the users may be more likely to click on a new button when they recognize that the button is, in fact, new. However, if they are new to the site as a whole, then the button itself may not stand out enough to warrant extra attention.

In some cases, the experiment can test more than two variants of the site. This divides the test subjects into more groups, and there need to be more subjects available in order to compensate. Otherwise, the experiment's statistical validity might be in jeopardy. If each group doesn't have enough subjects, and therefore observations, then there is a larger error rate for the test, and results will need to be more extreme to be significant.

In general, though, you'll want to have as many subjects as you reasonably can. Of course, this is always a trade-off. Getting 500 or 1,000 subjects may take a while, given the typical traffic of many websites, but you still need to take action within a reasonable amount of time and put the results of the experiment into effect. So we'll talk later about how to determine the number of subjects that you actually need to get a certain level of significance.

Another wrinkle is that you'll want to know as soon as possible whether one option is clearly better, so that you can begin to profit from it early. In the multi-armed bandit problem, this is framed as exploration versus exploitation: the tension in experiment design (and other domains) between exploring the problem space and exploiting the resources you've found in the experiment so far.
We won't get into this further, but it is a factor to stay aware of as you perform A/B tests in the future.

Because of the power and simplicity of A/B testing, it's being widely used in a variety of domains. For example, marketing and advertising make extensive use of it. Also, it has become a powerful way to test and improve measurable interactions between your website and those who visit it online. The primary requirement is that the interaction be somewhat limited and very measurable. "Interesting" would not make a good metric; the click-through rate or pages visited, however, would.

Because of this, A/B tests are often used to validate changes in the placement or text of buttons that call for action from the users. For example, a test might compare the performance of Click for more! against Learn more now!. Another test may check whether a button placed in the upper-right section increases sales versus one in the center of the page.

These changes are all incremental, and you probably don't want to break a large site redesign into pieces and test all of them individually. In a larger redesign, several changes may work together and reinforce each other. Testing them incrementally and only applying the ones that increase some metric can result in a design that's not aesthetically pleasing, is difficult to maintain, and costs you users in the long run. In these cases, A/B testing is not recommended.

Some other things that are regularly tested in A/B tests include the following parts of a web page:

- The wording, size, and placement of a call-to-action button
- The headline and product description
- The length, layout, and fields in a form
- The overall layout and style of the website as a larger test, which is not broken down
- The pricing and promotional offers of products
- The images on the landing page
- The amount of text on a page

Now that we have an understanding of what A/B testing is and what it can do for us, let's see what it will take to set up and perform an A/B test.
Conducting an A/B test

In creating an A/B test, we need to decide several things, and then we need to put our plan into action. We'll walk through those decisions here and create a simple set of web pages that will test the aspects of design that we are interested in changing, based upon the behavior of the user. Before we start building anything, though, we need to think through our experiment and what we'll need to build.

Planning the experiment

For this article, we're going to pretend that we have a website for selling widgets (or rather, looking at the website Widgets!). The web page in this screenshot is the control page. Currently, we're getting 24 percent click-through on it from the Learn more! button.

We're interested in the text of the button. If it read Order now! instead of Learn more!, it might generate more click-through. (Of course, actually explaining what the product is and what problems it solves might be more effective, but one can't have everything.) This will be the test page, and we're hoping that we can increase the click-through rate to 29 percent (a five percent absolute increase).

Now that we have two versions of the page to experiment with, we can frame the experiment statistically and figure out how many subjects we'll need for each version of the page in order to achieve a statistically meaningful increase in the click-through rate on that button.

Framing the statistics

First, we need to frame our experiment in terms of the null-hypothesis test. In this case, the null hypothesis would look something like this: changing the button copy from Learn more! to Order now! would not improve the click-through rate.

Remember, this is the statement that we're hoping to disprove (or fail to disprove) in the course of this experiment. Now we need to think about the sample size. This needs to be fixed in advance.
To find the sample size, we'll use the standard error formula, solved for the number of observations needed for about a 95 percent confidence interval, to get us in the ballpark of how large our sample should be:

n = 16σ² / δ²

In this, δ is the minimum effect to detect and σ² is the sample variance. If we are testing for something like a percent increase in the click-through, the variance is σ² = p(1 - p), where p is the initial click-through rate with the control page.

So for this experiment, the variance will be 0.24 × (1 - 0.24), or 0.1824. This would make the sample size for each variant 16 × (0.1824 / 0.05²), or almost 1,170. The code to compute this in Clojure is fairly simple:

(defn get-target-sample [rate min-effect]
  (let [v (* rate (- 1.0 rate))]
    (* 16.0 (/ v (* min-effect min-effect)))))

Running the code from the prompt gives us the response that we expect:

user=> (get-target-sample 0.24 0.05)
1167.36

Part of the reason to calculate the number of participants needed is that monitoring the progress of the experiment and stopping it prematurely can invalidate the results of the test, because it increases the risk of false positives, where the experiment says it has disproved the null hypothesis when it really hasn't. This seems counterintuitive, doesn't it? Once we have significant results, we should be able to stop the test. Let's work through it.

Let's say that in actuality, there's no difference between the control page and the test page. That is, both sets of copy for the button get approximately the same click-through rate. If we're attempting to get p ≤ 0.05, then the test will return a false positive five percent of the time: it will incorrectly say that there is a significant difference between the click-through rates of the two buttons five percent of the time. Let's say that we're running the test and planning to get 3,000 subjects, and we end up checking the results after every 1,000 participants.
Let's break down what might happen:

Run      A    B    C    D    E    F    G    H
1000     No   No   No   No   Yes  Yes  Yes  Yes
2000     No   No   Yes  Yes  No   Yes  No   Yes
3000     No   Yes  No   Yes  No   No   Yes  Yes
Final    No   Yes  No   Yes  No   No   Yes  Yes
Stopped  No   Yes  Yes  Yes  Yes  Yes  Yes  Yes

Let's read this table. Each lettered column represents a scenario for how the significance of the results may change over the run of the test. The rows represent the number of observations that have been made. The row labeled Final represents the experiment's true finishing result, and the row labeled Stopped represents the result if the experiment is stopped as soon as a significant result is seen.

The final results show us that out of eight different scenarios, the final result would be significant in four cases (B, D, G, and H). However, if the experiment is stopped prematurely, then it will be significant in seven cases (all but A). The test could drastically over-generate false positives. In fact, most statistical tests assume that the sample size is fixed before the test is run. It's exciting to get good results, so we'll design our system so that we can't easily stop it prematurely. We'll just take that temptation away. With this in mind, let's consider how we can implement this test.

Building the experiment

There are several options to actually implement the A/B test. We'll consider several of them and weigh their pros and cons. Ultimately, the option that works best for you really depends on your circumstances. However, we'll pick one for this article and use it to implement the test.

Looking at options to build the site

The first way to implement A/B testing is to use a server-side implementation. In this case, all of the processing and tracking is handled on the server, and visitors' actions are tracked using GET or POST parameters on the URL for the resource that the experiment is attempting to drive traffic towards.
The steps for this process would go something like the following ones:

1. A new user visits the site and requests the page that contains the button or copy that is being tested.
2. The server recognizes that this is a new user and assigns the user a tracking number.
3. It assigns the user to one of the test groups.
4. It adds a row in a database that contains the tracking number and the test group that the user is part of.
5. It returns the page to the user with the copy, image, or design that reflects the user's group, either control or test.
6. The user views the returned page and decides whether or not to click on the button or link.
7. If the server receives a request for the button's or link's target, it updates the user's row in the tracking table to show us that the interaction was a success, that is, that the user clicked through or made a purchase.

This way of handling it keeps everything on the server, so it allows more control and configuration over exactly how you want to conduct your experiment.

A second way of implementing this would be to do everything using JavaScript (or ClojureScript, https://github.com/clojure/clojurescript). In this scenario, the code on the page itself would randomly decide whether the user belonged to the control or the test group, and it would notify the server that a new observation in the experiment was beginning. It would then update the page with the appropriate copy or image. Most of the rest of this interaction is the same as in the previous scenario. However, the complete steps are as follows:

1. A new user visits the site and requests the page that contains the button or copy being tested.
2. The server inserts some JavaScript to handle the A/B test into the page.
3. As the page is being rendered, the JavaScript library generates a new tracking number for the user.
4. It assigns the user to one of the test groups.
5. It renders the page for the group that the user belongs to, which is either the control group or the test group.
6. It notifies the server of the user's tracking number and group.
7. The server takes this notification and adds a row for the observation in the database.
8. The JavaScript in the browser tracks the user's next move, either by directly notifying the server with an AJAX call or indirectly with a GET parameter in the URL for the next page.
9. The server receives the notification, whichever way it's sent, and updates the row in the database.

The downside of this approach is that having JavaScript render the experiment might take slightly longer and may throw off the experiment. It's also slightly more complicated, because there are more parts that have to communicate. However, the benefit is that you can create a JavaScript library, easily throw a small script tag into the page, and immediately have a new A/B experiment running.

In reality, though, you'll probably just use a service that handles this and more for you. Even so, it still makes sense to understand what these services are providing, and that's what this article tries to do: by helping you understand how to perform an A/B test, it lets you make better use of these A/B testing vendors and services.
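To make the server-side flow concrete, here is a minimal Python sketch of our own that ties the pieces together: the sample-size rule of thumb from earlier, random assignment of new visitors to groups, and conversion tracking. The function names and the in-memory dictionary standing in for the database tracking table are illustrative assumptions, not part of the article's code; a real implementation would live inside your web framework and persist to a database:

```python
import random
import uuid

def get_target_sample(rate, min_effect):
    # The earlier rule of thumb: n = 16 * sigma^2 / delta^2, with sigma^2 = p * (1 - p).
    variance = rate * (1.0 - rate)
    return 16.0 * variance / (min_effect * min_effect)

tracking = {}  # stand-in for the tracking table in a real database

def assign_visitor():
    # A new visitor gets a tracking number, a random group, and a row in the table.
    visitor_id = str(uuid.uuid4())
    group = random.choice(["control", "test"])
    tracking[visitor_id] = {"group": group, "converted": False}
    return visitor_id, group

def page_copy(group):
    # Serve the button copy that matches the visitor's group.
    return "Learn more!" if group == "control" else "Order now!"

def record_conversion(visitor_id):
    # The button's target was requested, so mark the observation a success.
    tracking[visitor_id]["converted"] = True

def click_through_rate(group):
    rows = [r for r in tracking.values() if r["group"] == group]
    return sum(r["converted"] for r in rows) / len(rows) if rows else 0.0

# Plan the experiment: 24 percent baseline, 5 percent minimum detectable effect.
print(round(get_target_sample(0.24, 0.05)))  # 1167

# Simulate some visits; pretend roughly a quarter of visitors click through.
random.seed(1)
for _ in range(100):
    vid, group = assign_visitor()
    if random.random() < 0.24:
        record_conversion(vid)
print(click_through_rate("control"), click_through_rate("test"))
```

The conversion probability here is the same for both groups, so over a full-sized sample the two rates should not differ significantly; swapping in different probabilities per group is an easy way to see what a real effect would look like.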