
How-To Tutorials

7019 Articles

Creating Basic Interactions

Packt | 22 May 2014 | 11 min read
"Learning is not attained by chance, it must be sought for with ardor and diligence."
- Abigail Adams

We joke that Elizabeth is a true designer in the sense that her right brain will be on fire when she approaches her work, so shifting to logic is tricky for her. Despite this, she has been able to build rather sophisticated prototypes. So, while Axure 7 supports the creation of highly advanced rapid prototypes, the key to success for someone who does not have pseudo code running easily through their mind is: approaching interactivity with an open mind, writing down in plain language what the desired interaction should be, and the willingness to seek help from a colleague or online tutorials.

The basic model of creating interactivity in an Axure prototype involves four hierarchical building blocks: Interactions, Events, Cases, and Actions. Interactions are triggered by events, which cause cases to execute actions. These four topics are the focus of this article.

Axure Interactions

Client expectations of a good user experience continue to rise, and it is clear that we are in the midst of an enormous transition in software design. This, along with the spread of Responsive Web Design (RWD), has placed UX front and center of the web design process. Early in that process is the need to "sell" your vision of the user experience to stakeholders, and you have a better chance of success if they are engaged as early as possible, starting at the wireframe level. There is less tolerance and satisfaction with static annotated wireframes, which require an effort on the part of stakeholders to imagine the fluidity of the expected functionality. Axure enables designers to rapidly simulate highly engaging user experiences that can be reviewed and tested on target devices as static wireframes are transformed into dynamic prototypes. In this article, we focus on how to make the transition from static to interactive, using simple yet wickedly effective interactions.

Interactions are Axure's term for the building blocks that turn static wireframes into clickable, interactive HTML prototypes. Axure shields us from the complexities of coding by providing a simple, wizard-like interface for defining instructions and logic in English. Each time we generate the HTML prototype, Axure converts the interactions into real code (JavaScript, HTML, and CSS), which a web browser can understand. Note, however, that this is not production-grade code.

Each Axure interaction is composed, in essence, of three basic units of information: when, where, and what.

When does an interaction happen? The Axure terminology for "when" is events. Some examples of discrete events include:
- When the page is loaded in the browser.
- When a user clicks on a widget, such as a button.
- When the user tabs out of a form field.
A list of events can be seen on the Interactions tab in the Widget Interactions and Notes pane on the right-hand side of the screen. You will also find the related list of events under the Page Interactions tab, which is located under your main workspace.

Where can we find the interaction? An interaction is attached either to a widget, such as a rectangle, radio button, or drop-down list; to a page; or to a master wireframe. You create widget interactions using the options in the Widget Properties pane, and page and master interactions using the options in the Page Properties pane.

What should happen? The Axure terminology for "what" is actions. Actions define the outcome of the interaction. For example, when a page loads, you can instruct Axure on how the page should behave and what it will display when it is first rendered on the screen. Other examples: when the user clicks on a button, it will link to another page; when the user tabs out of a form field, the input will be validated and an error message displayed if the input is not valid. The actions for an event are grouped into cases, and a single event can have one or more cases. Ensure that all of the actions you want to include for a given case or scenario are in the same case.

Multiple Cases

Sometimes, an event could have alternative paths, each with its own case(s). The determination of which path to trigger is controlled with conditional logic, which we will cover later in this article.

Axure Events

In general, Axure interactions are triggered by two types of events:
- Page- and master-level events, which can be triggered automatically, such as when the page is loaded in the browser, or as a result of a user action, such as scrolling.
- Widget-level events, which occur when a user directly interacts with a widget on the page. These interactions are typically triggered directly by the user, such as clicking on a button, or as a result of a user action that causes a number of events to follow.

Page-level events

Think of this concept as a staging setup, an orchestration of actions that takes place behind the scenes and is executed as the page gets rendered in the browser. Moreover, it is a setup to which you can apply conditional logic and variables, and deliver a contextual rendering of the page. In short, page-level events, which can be applied to pages and to masters, will likely become one of your frequently used methods to control your prototype. Keep in mind the order in which the interactions you build into the prototype will be executed by the browser. The following screenshot (Image 1) illustrates the OnPageLoad event as an example:

- The browser gets a request to load a page (Image 1, A), either because it is the first time you launch the prototype or as a result of navigation from one prototype page to another.
- The browser first checks for OnPageLoad interactions. An OnPageLoad event (B) may be associated with the loading page (C), a master used on the page (D), or both.
- If OnPageLoad exists, the browser first evaluates page-level interactions, and then master-level interactions. The benefit of this order of operations is that you can set the value of a variable in the page's OnPageLoad interaction and pass that variable to the master's OnPageLoad interaction. It sounds a bit complicated, perhaps.
- If the OnPageLoad interaction includes condition(s) (E), the browser will evaluate the logic and execute the appropriate action (F and/or G). Otherwise, if the OnPageLoad event does not include a condition, the browser will execute the interaction (H).
- The requested page is rendered (I) per the interaction.

Image 1

The following list covers the events offered at the page level:

- OnPageLoad: Triggers the assigned action(s) that affect how the page is initially rendered after it loads.
- OnWindowResize: Triggers the assigned action(s) when the browser window is resized.
- OnWindowScroll: Triggers the assigned action(s) when the user scrolls the browser window.
- OnPageClick: Triggers the assigned action(s) when the user clicks on any empty part of the page (not on any widget).
- OnPageDoubleClick: Triggers the assigned action(s) when the user double-clicks on any empty part of the page (not on any widget).
- OnContextMenu: Triggers the assigned action(s) when the user right-clicks on any empty part of the page (not on any widget).
- OnMouseMove: Triggers the assigned action(s) when the mouse pointer is moved anywhere on the page.
- OnPageKeyUp: Triggers the assigned action(s) when a pressed key is released.
- OnPageKeyDown: Triggers the assigned action(s) when a key is pressed.
- OnAdaptiveViewChange: Triggers the assigned action(s) on a switch from one adaptive view to another.

Widget-level events

The OnClick event, whether triggered with a mouse click or a finger tap, is one of the fundamental triggers of modern user-computer interactions. In Axure, it is one of several events you can associate with a widget. The following screenshot (Image 2) illustrates how widget-level events are processed:

- The user interacts with a widget by initiating an event (Image 2, A), such as OnClick, which is associated with that widget (B).
- The type of widget (button, checkbox, and so on) constrains the possible response the user can expect (D). For example, before clicking on a button, the user may move the mouse over it, and the visual appearance of the button will change in response to the OnMouseEnter event. Axure also includes events that handle mobile devices, where fingers are the means of enabling the user's direct manipulation of the interface.
- The browser will check whether conditional logic is tied to the widget event (E). For example, you may have created an interaction in which a rollover event displays different states of a dynamic panel based on some variable. The browser will evaluate the condition and execute the action(s) (F and/or G).
- If no conditions exist, the browser will execute the action(s) associated with the widget (H).
- Based on the actions tied to the event, the browser will update the screen or load some other screen (I).

Image 2

The following list covers Axure's inventory of events that can be applied to widgets and dynamic panels; each widget has its own set of possible events. Events marked "(dynamic panels)" apply to dynamic panels:

- OnClick: The user clicks on an element.
- OnPanelStateChange (dynamic panels): Dynamic panels may have multiple states, and this event can be used to trigger action(s) when a dynamic panel changes states.
- OnDragStart (dynamic panels): Pinpoints the moment the user begins to drag a dynamic panel.
- OnDrag (dynamic panels): Spans the duration of the dynamic panel being dragged.
- OnDragDrop (dynamic panels): Pinpoints the moment the user finishes dragging the dynamic panel. This could be an opportunity to validate that, for example, the user placed the widget in the right place.
- OnSwipeLeft (dynamic panels): Triggers the assigned action(s) when the user swipes from right to left.
- OnSwipeRight (dynamic panels): Triggers the assigned action(s) when the user swipes from left to right.
- OnSwipeUp (dynamic panels): Triggers the assigned action(s) when the user swipes upwards.
- OnSwipeDown (dynamic panels): Triggers the assigned action(s) when the user swipes downwards.
- OnDoubleClick: Triggers the assigned action(s) when the user double-clicks on an element.
- OnContextMenu: Triggers the assigned action(s) when the user right-clicks on an element.
- OnMouseDown: Triggers the assigned action(s) when the user clicks on the element but has yet to release the mouse button.
- OnMouseUp: Triggers the assigned action(s) on the release of the mouse button.
- OnMouseMove: Triggers the assigned action(s) when the user moves the cursor.
- OnMouseEnter: Triggers the assigned action(s) when the cursor is moved over an element.
- OnMouseOut: Triggers the assigned action(s) when the cursor is moved away from an element.
- OnMouseHover: Triggers the assigned action(s) when the cursor is placed over an element. This is great for custom tooltips.
- OnLongClick: Triggers the assigned action(s) when the user clicks on the element and holds it. This is great to use on a touchscreen.
- OnKeyDown: Triggers the assigned action(s) as the user presses a key on the keyboard. It can be attached to any widget, but the action is only sent to the widget that has focus.
- OnKeyUp: Triggers the assigned action(s) as the user releases a pressed key on the keyboard.
- OnMove: Triggers the assigned action(s) when the referenced widget moves.
- OnShow: Triggers the assigned action(s) when the visibility state of the referenced widget changes to Show.
- OnHide: Triggers the assigned action(s) when the visibility state of the referenced widget changes to Hide.
- OnScroll (dynamic panels): Triggers the assigned action(s) when the user is scrolling. Good to use in conjunction with the Pin to Browser feature.
- OnResize (dynamic panels): Triggers the assigned action(s) when the referenced panel has been resized.
- OnLoad (dynamic panels): The dynamic panel is initiated when a page is loaded.
- OnFocus: Triggers the assigned action(s) when the widget comes into focus.
- OnLostFocus: Triggers the assigned action(s) when the widget loses focus.
- OnSelectionChange: Only applicable to drop-down lists and typically used with a condition: when the selected option is X, show this. Use this when you want a selection to trigger action(s) that will change something on the wireframe.
- OnCheckedChange: Only applicable to radio buttons and checkboxes. Use this when you want a change in the checked state to trigger action(s) that will change something on the wireframe.


A/B Testing – Statistical Experiments for the Web

Packt | 22 May 2014 | 13 min read
Defining A/B testing

At its most fundamental level, A/B testing just involves creating two different versions of a web page. Sometimes, the changes are major redesigns of the site or the user experience, but usually the changes are as simple as changing the text on a button. Then, for a short period of time, new visitors are randomly shown one of the two versions of the page. The site tracks their behavior, and the experiment determines whether one version or the other increases the users' interaction with the site. This may mean more click-throughs, more purchases, or any other measurable behavior.

This is similar to methods used in other domains under different names. The basic framework, which randomly tests two or more groups simultaneously, is sometimes called randomized controlled experiments or online controlled experiments. It's also sometimes referred to as split testing, as the participants are split into two groups. These are all examples of between-subjects experiment design. Experiments that use these designs all split the participants into two groups. One group, the control group, gets the original environment. The other group, the test group, gets the modified environment that those conducting the experiment are interested in testing.

Experiments of this sort can be single-blind or double-blind. In single-blind experiments, the subjects don't know which group they belong to. In double-blind experiments, those conducting the experiments also don't know which group the subjects they're interacting with belong to. This safeguards the experiments against biases that can be introduced by participants being aware of which group they belong to. For example, participants could get more engaged if they believe they're in the test group because it is newer in some way. Or, an experimenter could treat a subject differently in a subtle way because of the group that they belong to. As the computer is the one that directly conducts the experiment, and because those visiting your website aren't aware of which group they belong to, website A/B testing is generally an example of a double-blind experiment.

Of course, this is an argument for only conducting the test on new visitors. Otherwise, the user might recognize that the design has changed and throw the experiment off. For example, the users may be more likely to click on a new button when they recognize that the button is, in fact, new. However, if they are new to the site as a whole, then the button itself may not stand out enough to warrant extra attention.

In some cases, these tests can compare more than two variants of the site. This divides the test subjects into more groups, and there need to be more subjects available in order to compensate. Otherwise, the experiment's statistical validity might be in jeopardy. If each group doesn't have enough subjects, and therefore observations, then there is a larger error rate for the test, and results will need to be more extreme to be significant. In general, though, you'll want to have as many subjects as you reasonably can. Of course, this is always a trade-off. Getting 500 or 1,000 subjects may take a while, given the typical traffic of many websites, but you still need to take action within a reasonable amount of time and put the results of the experiment into effect. So we'll talk later about how to determine the number of subjects that you actually need to get a certain level of significance.

Another wrinkle is that you'll want to know as soon as possible whether one option is clearly better, so that you can begin to profit from it early. In the multi-armed bandit problem, this is the problem of exploration versus exploitation. This refers to the tension in experiment design (and other domains) between exploring the problem space and exploiting the resources you've found in the experiment so far. We won't get into this further, but it is a factor to stay aware of as you perform A/B tests in the future.

Because of the power and simplicity of A/B testing, it's widely used in a variety of domains. For example, marketing and advertising make extensive use of it. It has also become a powerful way to test and improve measurable interactions between your website and those who visit it online. The primary requirement is that the interaction be somewhat limited and very measurable. "Interesting" would not make a good metric; the click-through rate or pages visited, however, would. Because of this, A/B tests validate changes in the placement or in the text of buttons that call for action from the users. For example, a test might compare the performance of Click for more! against Learn more now!. Another test may check whether a button placed in the upper-right section increases sales versus one in the center of the page.

These changes are all incremental, and you probably don't want to break a large site redesign into pieces and test all of them individually. In a larger redesign, several changes may work together and reinforce each other. Testing them incrementally and only applying the ones that increase some metric can result in a design that's not aesthetically pleasing, is difficult to maintain, and costs you users in the long run. In these cases, A/B testing is not recommended.

Some other things that are regularly tested in A/B tests include the following parts of a web page:
- The wording, size, and placement of a call-to-action button
- The headline and product description
- The length, layout, and fields in a form
- The overall layout and style of the website, as a larger test that is not broken down
- The pricing and promotional offers of products
- The images on the landing page
- The amount of text on a page

Now that we have an understanding of what A/B testing is and what it can do for us, let's see what it will take to set up and perform an A/B test.

Conducting an A/B test

In creating an A/B test, we need to decide several things, and then we need to put our plan into action. We'll walk through those decisions here and create a simple set of web pages that will test the aspects of design that we are interested in changing, based upon the behavior of the user. Before we start building stuff, though, we need to think through our experiment and what we'll need to build.

Planning the experiment

For this article, we're going to pretend that we have a website for selling widgets (or rather, looking at the website, Widgets!). The web page in this screenshot is the control page. Currently, we're getting 24 percent click-through on it from the Learn more! button. We're interested in the text of the button. If it read Order now! instead of Learn more!, it might generate more click-through. (Of course, actually explaining what the product is and what problems it solves might be more effective, but one can't have everything.) This will be the test page, and we're hoping that we can increase the click-through rate to 29 percent (a five percent absolute increase).

Now that we have two versions of the page to experiment with, we can frame the experiment statistically and figure out how many subjects we'll need for each version of the page in order to achieve a statistically meaningful increase in the click-through rate on that button.

Framing the statistics

First, we need to frame our experiment in terms of the null-hypothesis test. In this case, the null hypothesis would look something like this: changing the button copy from Learn more! to Order now! would not improve the click-through rate. Remember, this is the statement that we're hoping to disprove (or fail to disprove) in the course of this experiment.

Now we need to think about the sample size, which needs to be fixed in advance. To find the sample size, we'll use the standard error formula, solved for the number of observations needed for roughly a 95 percent confidence interval, to get us in the ballpark of how large our sample should be:

    n = 16σ² / δ²

In this, δ is the minimum effect to detect and σ² is the sample variance. If we are testing for something like a percent increase in the click-through, the variance is σ² = p(1 – p), where p is the initial click-through rate with the control page. So for this experiment, the variance will be 0.24(1 – 0.24), or 0.1824. This would make the sample size for each variant 16(0.1824 / 0.05²), or almost 1170. The code to compute this in Clojure is fairly simple:

    (defn get-target-sample [rate min-effect]
      (let [v (* rate (- 1.0 rate))]
        (* 16.0 (/ v (* min-effect min-effect)))))

Running the code from the prompt gives us the response that we expect:

    user=> (get-target-sample 0.24 0.05)
    1167.36

Part of the reason to calculate the number of participants needed is that monitoring the progress of the experiment and stopping it prematurely can invalidate the results of the test, because it increases the risk of false positives, where the experiment says it has disproved the null hypothesis when it really hasn't. This seems counterintuitive, doesn't it? Once we have significant results, we should be able to stop the test. Let's work through it.

Let's say that in actuality there's no difference between the control page and the test page. That is, both sets of copy for the button get approximately the same click-through rate. If we're attempting to get p ≤ 0.05, then it means that the test will return a false positive five percent of the time; it will incorrectly say that there is a significant difference between the click-through rates of the two buttons five percent of the time. Let's say that we're running the test and planning to get 3,000 subjects, and we end up checking the results after every 1,000 participants. Let's break down what might happen:

    Run      A    B    C    D    E    F    G    H
    1000     No   No   No   No   Yes  Yes  Yes  Yes
    2000     No   No   Yes  Yes  No   Yes  No   Yes
    3000     No   Yes  No   Yes  No   No   Yes  Yes
    Final    No   Yes  No   Yes  No   No   Yes  Yes
    Stopped  No   Yes  Yes  Yes  Yes  Yes  Yes  Yes

Let's read this table. Each lettered column represents a scenario for how the significance of the results may change over the run of the test. The rows represent the number of observations that have been made. The row labeled Final represents the experiment's true finishing result, and the row labeled Stopped represents the result if the experiment is stopped as soon as a significant result is seen. The final results show us that out of eight different scenarios, the final result would be significant in four cases (B, D, G, and H). However, if the experiment is stopped prematurely, then it will be significant in seven cases (all but A). The test could drastically over-generate false positives. In fact, most statistical tests assume that the sample size is fixed before the test is run. It's exciting to get good results, so we'll design our system so that we can't easily stop it prematurely. We'll just take that temptation away.

With this in mind, let's consider how we can implement this test.

Building the experiment

There are several options to actually implement the A/B test. We'll consider several of them and weigh their pros and cons. Ultimately, the option that works best for you really depends on your circumstances. However, we'll pick one for this article and use it to implement the test.

Looking at options to build the site

The first way to implement A/B testing is to use a server-side implementation. In this case, all of the processing and tracking is handled on the server, and visitors' actions are tracked using GET or POST parameters on the URL for the resource that the experiment is attempting to drive traffic towards. The steps for this process go something like the following:

1. A new user visits the site and requests the page that contains the button or copy that is being tested.
2. The server recognizes that this is a new user and assigns the user a tracking number.
3. It assigns the user to one of the test groups.
4. It adds a row in a database that contains the tracking number and the test group that the user is part of.
5. It returns the page to the user with the copy, image, or design that reflects the control or test group.
6. The user views the returned page and decides whether or not to click on the button or link.
7. If the server receives a request for the button's or link's target, it updates the user's row in the tracking table to show us that the interaction was a success, that is, that the user clicked through or made a purchase.

This way of handling it keeps everything on the server, so it allows more control and configuration over exactly how you want to conduct your experiment.

A second way of implementing this would be to do everything using JavaScript (or ClojureScript, https://github.com/clojure/clojurescript). In this scenario, the code on the page itself randomly decides whether the user belongs to the control or the test group, and it notifies the server that a new observation in the experiment is beginning. It then updates the page with the appropriate copy or image. Most of the rest of this interaction is the same as in the previous scenario. However, the complete steps are as follows:

1. A new user visits the site and requests the page that contains the button or copy being tested.
2. The server inserts some JavaScript to handle the A/B test into the page.
3. As the page is being rendered, the JavaScript library generates a new tracking number for the user.
4. It assigns the user to one of the test groups.
5. It renders the page for the group that the user belongs to, which is either the control group or the test group.
6. It notifies the server of the user's tracking number and group.
7. The server takes this notification and adds a row for the observation in the database.
8. The JavaScript in the browser tracks the user's next move, either by directly notifying the server using an AJAX call or indirectly using a GET parameter in the URL for the next page.
9. The server receives the notification, whichever way it's sent, and updates the row in the database.

The downside of this is that having JavaScript render the experiment might take slightly longer and may throw off the experiment. It's also slightly more complicated, because there are more parts that have to communicate. However, the benefit is that you can create a JavaScript library, easily throw a small script tag into the page, and immediately have a new A/B experiment running. In reality, though, you'll probably just use a service that handles this and more for you. However, it still makes sense to understand what they're providing for you, and that's what this article tries to do by helping you understand how to perform an A/B test so that you can make better use of these A/B testing vendors and services.
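Returning to the sample-size arithmetic from the Framing the statistics section, the same formula (n = 16σ²/δ², with p = 0.24 and a minimum effect of 0.05) can also be checked from a shell; a minimal sketch, assuming a standard awk is available:

    awk -v p=0.24 -v d=0.05 'BEGIN {
        v = p * (1 - p)                     # sample variance for a proportion
        printf "%.2f\n", 16 * v / (d * d)   # prints 1167.36, matching get-target-sample
    }'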


Virtual Machine Design

Packt | 21 May 2014 | 8 min read
Causes of virtual machine performance problems

In a perfect virtual infrastructure, you will never experience any performance problems and everything will work well within the budget that you allocated. But should problems arise in this perfect utopian datacenter you've designed, hopefully this section will help you to identify and resolve them more easily.

CPU performance issues

The following is a summary of some of the common CPU performance issues you may experience in your virtual infrastructure. While this is not an exhaustive list of every possible problem you can experience with CPUs, it can help guide you in the right direction to solve CPU-related performance issues:

- High ready time: When your ready time is above 10 percent, this could indicate CPU contention and could be impacting the performance of any CPU-intensive applications. This is not a guarantee of a problem; applications that are not nearly as sensitive can still report high values and perform well within guidelines. CPU ready time is reported in milliseconds; for converting it to a percentage, see KB 2002181.
- High costop time: The costop time will often correlate to contention in multi-vCPU virtual machines. Costop time exceeding 10 percent could indicate challenges when vSphere tries to schedule all vCPUs in your multi-vCPU servers.
- CPU limits: As discussed earlier, you will often experience performance problems if your virtual machine tries to use more resources than have been configured in your limits.
- Host CPU saturation: When the vSphere host utilization runs above 80 percent, you may experience host saturation issues. This can introduce performance problems across the host as the CPU scheduler tries to assign resources to virtual machines.
- Guest CPU saturation: This is experienced on high utilization of vCPU resources within the operating system of your virtual machines. This can be mitigated, if required, by adding additional vCPUs to improve the performance of the application.
- Misconfigured affinity: Affinity is enabled by default in vSphere; however, if it is manually configured to pin a VM to a specific physical CPU, problems can be encountered. This is often experienced when creating a VM with affinity settings and then cloning the VM. VMware advises against manually configuring affinity.
- Oversizing vCPUs: When assigning multiple vCPUs to a virtual machine, you want to ensure that the operating system is able to take advantage of the CPUs and threads, and that your applications can support them. The overhead associated with unused vCPUs can impact other applications and resource scheduling within the vSphere host.
- Low guest CPU usage: Poor performance combined with low CPU utilization often points to the problem lying elsewhere, such as I/O or memory. An underused CPU is a good guiding indicator to look at other resources or at the configuration.

Memory performance issues

Additionally, the following is a summary of some common memory performance issues you may experience in your virtual infrastructure. Because of the way VMware vSphere handles memory management, there is a unique set of challenges in troubleshooting and resolving performance problems as they arise:

- Host memory: Host memory is both a finite and very limited resource. While VMware vSphere incorporates some creative mechanisms to leverage and maximize the amount of available memory through features such as page sharing, memory management, and resource-allocation controls, there are several memory features that will only take effect when the host is under stress.
- Transparent page sharing: This is the method by which redundant copies of pages are eliminated. TPS, enabled by default, will break up regular pages into 4 KB chunks for better performance. When virtual machines have large physical pages (2 MB instead of 4 KB), vSphere will not attempt to enable TPS for these, as the likelihood of multiple 2 MB chunks being similar is lower than for 4 KB chunks. This can cause a system to experience memory overcommit and performance problems; if memory stress is then experienced, vSphere may break these 2 MB chunks into 4 KB chunks to allow TPS to consolidate the pages.
- Host memory consumed: When measuring utilization for capacity planning, the value of host memory consumed can often be deceiving, as it does not always reflect the actual memory utilization. Instead, active memory or memory demand should be used as a better guide of actual memory utilized, because features such as TPS mean that consumed memory does not give an accurate picture of utilization.
- Memory over-allocation: Memory over-allocation will usually be fine for most applications in most environments. It is typically safe to have over 20 percent memory over-allocation, especially with similar applications and operating systems. The more similarity you have between your applications and environment, the higher you can take that number.
- Swap to disk: If you over-allocate your memory too far, you may start to experience memory swapping to disk, which can result in performance problems if not caught early enough. It is best, in those circumstances, to evaluate which guests are swapping to disk to help correct either the application or the infrastructure as appropriate.

For additional details on vSphere memory management and monitoring, see KB 2017642.

Storage performance issues

When it comes to storage performance issues within your virtual machine infrastructure, there are a few areas you will want to pay particular attention to. Although most storage-related problems you are likely to experience will depend more on your backend infrastructure, the following are a few things you can look at when identifying whether it is the VM's storage or the SAN itself:

- Storage latency: Latency experienced at the storage level is usually expressed as a combination of the latency of the storage stack, guest operating system, VMkernel virtualization layer, and the physical hardware. Typically, if you experience slowness and are noticing high latencies, one or more aspects of your storage could be the cause.
- Three layers of latency: ESXi and vCenter typically report on three primary latencies: Guest Average Latency (GAVG), Device Average Latency (DAVG), and Kernel Average Latency (KAVG).
- Guest Average Latency (GAVG): This value is the total amount of latency that ESXi is able to detect. This is not to say that it is the total amount of latency being experienced; it is just the figure that ESXi is reporting against. So if you're experiencing a 5 ms latency with GAVG and a performance application such as Perfmon is identifying a storage latency of 50 ms, something within the guest operating system is incurring a penalty of 45 ms of latency. In circumstances such as these, you should investigate the VM and its operating system to troubleshoot.
- Device Average Latency (DAVG): Device Average Latency tends to focus on the more physical side of the device; for instance, whether the storage adapters, HBA, or interface is experiencing any latency or communication problems with the backend storage array. Problems experienced here tend to fall more on the storage itself and are less easily troubleshooted within ESXi itself. Some exceptions to this are firmware or adapter drivers, which may be introducing problems, or queue depth in your HBA. More details on queue depth can be found in KB 1267.
- Kernel Average Latency (KAVG): Kernel Average Latency is actually not a specific counter; it is a calculation of "Total Latency - DAVG = KAVG". Thus, when using this metric you should be wary of a few values. The typical value of KAVG should be zero; anything greater may be I/O moving through the kernel queue and can generally be dismissed. When these latencies are 2 ms or consistently greater, this may indicate a storage performance issue; your VMs, adapters, and queues should be reviewed for bottlenecks or problems.

The following are some KB articles that can help you further troubleshoot virtual machine storage:
- Using esxtop to identify storage performance issues (KB 1008205)
- Troubleshooting ESX/ESXi virtual machine performance issues (KB 2001003)
- Testing virtual machine storage I/O performance for VMware ESX and ESXi (KB 1006821)

Network performance issues

Lastly, when it comes to addressing network performance issues, there are a few areas you will want to consider. Similar to the storage performance issues, a lot of these are often addressed by the backend networking infrastructure. However, there are a few items you'll want to investigate within the virtual machines to ensure network reliability:

- Networking error, IP already assigned to another adapter: This is a common problem experienced after V2V or P2V migrations, which can result in ghosted network adapters. VMware KB 1179 guides you through the steps to remove these ghosted network adapters.
- Speed or duplex mismatch within the OS: Left at the defaults, the virtual machine will use auto-negotiation to get maximum network performance; if configured down from that speed, this can introduce virtual machine limitations.
- Choose the correct network adapter for your VM: Newer operating systems should support the VMXNET3 adapter, while some virtual machines, either legacy or upgraded from previous versions, may run older network adapter types. See KB 1001805 to help decide which adapters are correct for your usage.

The following are some KB articles that can help you further troubleshoot virtual machine networking:
- Troubleshooting virtual machine network connection issues (KB 1003893)
- Troubleshooting network performance issues in a vSphere environment (KB 1004097)

Summary

With this article, you should be able to inspect existing VMs while following design principles that will lead to correctly sized and deployed virtual machines. You should also have a better understanding of when your configuration is meeting your needs, and how to go about identifying performance problems associated with your VMs.


An Introduction to the Terminal

Packt | 21 May 2014 | 19 min read
Why should we use the terminal?

With Mint containing a complete suite of graphical tools, one may wonder why it is useful to learn and use the terminal at all. Depending on the type of user, learning how to execute commands in a terminal may or may not be beneficial. If you are a user who intends to use Linux only for basic purposes such as browsing the Internet, checking e-mails, playing games, editing documents, printing, watching videos, listening to music, and so on, terminal commands may not be a useful skill to learn, as all of these activities (as well as others) are best handled by a graphical desktop environment.

However, the real value of the terminal in Linux comes with advanced administration. Some administrative activities are faster using shell commands than using the GUI. For example, if you wanted to edit the /etc/fstab file, it would take fewer steps to type sudo nano /etc/fstab than it would to open a file manager with root permissions, navigate to the /etc directory, find the fstab file, and click on it to open it. This is especially true if all you want to do is make a quick change. Similarly, typing sudo apt-get install geany may be faster if you already know the name of the package you want, compared to opening the Mint Software Manager, waiting for it to load, finding the geany package, and installing it. On older and slower systems, the overhead caused by graphical programs may delay execution time.

Another value of the Linux shell is scripting. With a script, you can create a text file with a list of commands and instructions and execute all of the commands it contains with a single execution. For example, you can create a list of packages that you would prefer to install on your system, type them out in a text file, and add your distribution's package installation command at the beginning of the list. Now you can install all of your favorite programs with a single command. If you save this script for later, you can execute it any time you reinstall Linux Mint so that you immediately have access to all your favorite programs. If you are administering a server, you can create a script to check the overall health of the system at various times, check for security intrusions, or even configure servers to send you weekly reports on just about anything you'd like to keep yourself updated on. There are entire books dedicated to scripting, so we won't go into detail about it in this article. However, by the end of the article, we will create a script to demonstrate how to do so.

Accessing the shell

When it comes to Linux, there is very rarely (if ever) a single way to do anything. Just as you have your pick between desktop environments, text editors, browsers, and just about anything else, you also have a choice when it comes to accessing a Linux terminal to execute shell commands. As a matter of fact, you even have a choice of which terminal emulator to use to interpret your commands.

Linux Mint comes bundled with an application called the GNOME Terminal. This application is actually developed for a completely different desktop environment (GNOME) but is included in Mint because the Mint developers did not create their own terminal emulator for Cinnamon. The GNOME Terminal did the job very well, so there was no need to reinvent the wheel. Once you open the GNOME Terminal, it is ready to do your bidding right away. The following screenshot shows the GNOME Terminal, ready for action.

As mentioned earlier, other terminal emulators are available. One of the popular terminal emulators is Konsole, which typically comes bundled with Linux distributions that feature the KDE environment (such as Mint's own KDE edition). In addition, there is also the xfce4-terminal, which comes bundled with the Xfce environment. Although each terminal emulator is generally geared toward the desktop environment that features it, there's nothing stopping you from installing them if you find that GNOME Terminal doesn't suit your needs. However, the terminal emulators generally function in much the same way, and you may not notice much of a difference, especially when you're starting out.

You may be wondering what exactly a terminal emulator is. A terminal emulator is a windowed application that runs in a graphical environment (such as Cinnamon in Mint) and provides you with a terminal window through which you can execute shell commands to interact with the system. In essence, a terminal emulator is emulating what a full-screen terminal may look like, but in an application window. Each terminal emulator in Linux gives you the ability to interact with that distribution's chosen shell, and as each of the various terminal emulators interacts with the same shell, you won't notice anything unique about them regarding how commands are run. The differences between one terminal emulator and another are usually in the form of features in the graphical user interface that surrounds the terminal window, such as being able to open new terminal windows in tabs instead of separate instances, or even open transparent windows so that you can see what is behind your terminal window as you type.

While learning about Linux, you'll often hear the term Bash when referring to the shell. Bash is a type of command interpreter that Linux uses; however, there are several others, including (but not limited to) the C shell, the Dash shell, and the Korn shell. When you interact with your Linux distribution through a terminal emulator, you are actually interacting with its shell. Bash itself is a successor to the Bourne shell (originally created by Stephen Bourne) and is an acronym for "Bourne Again Shell." Virtually all distributions include Bash as their default shell; it's the closest thing to a standard shell that Linux has. As you start out on your Linux journey, Bash is the only shell you should concern yourself with and the only shell that will be covered in this article. Scripts are generally written against the shell environment in which they are intended to run. This is why, when you read about writing scripts in Linux, you'll see them referred to as Bash scripts, as Bash is the target shell and pretty much the standard Linux shell.

In addition, terminal emulators aren't the only way to access the Linux shell for entering commands. In fact, you don't even need to install a terminal emulator. You can use TTY (Teletype) terminals, which are full-screen terminals available for your use, by simply pressing a combination of keys on your keyboard. When you switch to a TTY terminal, you are switching away from your desktop environment to a dedicated text-mode console. You can access a TTY terminal by pressing Alt + Ctrl and one of the function keys (F1 through F6) at the same time. To switch back to Cinnamon, press Alt + Ctrl + F8. Not all distributions handle TTY terminals in the same way. For example, some start the desktop environment on TTY 7 (Alt + Ctrl + F7), and others may have a different number of TTYs available. If you are using a different flavor of Mint and Alt + Ctrl + F8 doesn't bring you back to your desktop environment, try Alt + Ctrl + F7 instead.

You should notice that the terminal number changes each time you switch between TTY terminals. For example, if you press Alt + Ctrl + F1, you should see a heading that looks similar to Linux Mint XX ReleaseName HostName tty1 (notice the tty number at the end). If you press Alt + Ctrl + F2, you'll see a heading similar to Linux Mint XX ReleaseName HostName tty2. You should notice right away that the TTY number corresponds to the function key you used to access it. The benefit of a TTY is that it is an environment separate from your desktop environment, where you can run commands and large jobs. You can have a separate command running in each TTY, each independent of the others, without occupying space in your desktop environment. However, not everyone will find TTYs useful. It all depends on your use case and personal preferences.

Regardless of how you access a terminal in Linux to practice entering your commands, all the examples in this article will work fine. In fact, it doesn't even matter whether you use the bundled GNOME Terminal or another terminal emulator. Feel free to play around, as each of them handles commands in the same way and will work fine for the purposes of this article.

Executing commands

While utilizing the shell and entering commands, you will find yourself in a completely different world compared to your desktop environment. While using the shell, you'll enter a command, wait for confirmation that the command was successful (if applicable), and then be brought back to the prompt so that you can execute another command. In many cases, the shell simply returns to the prompt with no output; this constitutes a success. Be warned, though: the Linux shell makes no assumptions. If you type something incorrectly, you will either see an error message or produce unexpected output. If you tell the shell to delete a file and you direct it to the wrong one, it typically won't prompt for confirmation and will bypass the trash folder. The Linux shell does exactly what you tell it to, not necessarily what you want it to. Don't let that scare you, though. The Linux shell is very logical and easy to learn. However, with great power comes great responsibility.

To get started, open your terminal emulator. You can either open the GNOME Terminal (you will find it in the application menu under Accessories, or pinned to the left pane of the application menu by default) or switch to a TTY by pressing Ctrl + Alt + F1. You'll see a prompt that will look similar to the following:

    username@hostname ~$

Let's take a moment to examine the prompt. The first part of the prompt displays the username that the commands will be executed as. When you first open a terminal, it is opened under the user account that opened it. The second part of the prompt is the host name of the computer, which will be whatever you named it during the installation. Next, the path is displayed. In the preceding example, it's simply a tilde (~). The ~ character in Linux represents the currently logged-in user's home directory. Thus, in the preceding prompt, we can see that the current directory that the prompt is attached to is the user's home directory. Finally, a dollar sign symbol ($) is displayed. This indicates that the commands are to be run as a normal user and not as the root user.

For example, suppose a user named C. Norris is using a machine named Neptune. This user opens a terminal and then switches to the /media directory. The prompt would then be similar to the following:

    cnorris@neptune /media $

Now that we have an understanding of the prompt, let's walk through some examples of entering some very basic commands, discussed in the following steps. Later in the article, we'll go over more complete examples; for now, let's take the terminal out for a spin.

- Open a prompt, type pwd, and press Enter. The pwd command stands for print working directory. The output should display the complete path that the terminal is attached to. If you ever lose your way, the pwd command will save the day. Notice that the command prints the working directory and completes; it returns you right back to the prompt, ready to accept another command.
- Next, try the ls command (that's "L" and "S", both lowercase). This stands for list storage. When you execute the ls command, you should see a list of the files saved in your current working directory. If there are no files in your working directory, you'll see no output.
- For a little bit of fun, try the following command:

    cowsay Linux Mint is Awesome

This command shows that the Mint developers have a sense of humor and included the cowsay program in the default Mint installation. You can make the cow say anything you'd like, but be nice. The following screenshot shows the output of the preceding cowsay command, included in Mint for laughs.

Navigating the filesystem

Before we continue with more advanced terminal usage, it's important to understand how the filesystem is laid out in Linux, as well as how to navigate it. First, we must clarify what exactly is meant by the term "filesystem", as it can refer to different things depending on the context. If you recall, when you installed Linux Mint, you formatted one or more partitions with a filesystem, most likely ext4. In this context, we're referring to the type of formatting applied to a hard-disk partition. There are many different filesystems available for formatting hard-disk partitions, and this is true for all operating systems. However, there is another meaning of "filesystem" with regards to Linux. In the context of this article, filesystem refers to the default system of directories (also known as folders) in a Linux installation and how to navigate from one folder to another.

The filesystem in an installed Linux system includes many different folders, each with its own purpose. In order to understand how to navigate between directories in a Linux filesystem, you should first have a basic understanding of what the folders are for. You can view the default directory structure in the Linux filesystem in one of two ways. One way is to open the Nemo file manager and click on File System on the left-hand side of the window; this opens a view of the default folders in Linux, as shown in the article's screenshot. Additionally, you can execute the following command from your terminal emulator:

    ls -l /

The following screenshot shows the output of the preceding command from the root of the filesystem.

The first point to understand, especially if you're coming from Windows, is that there is no drive lettering in Linux. This means that there is no C drive for your operating system or D drive for your optical drive. The closest thing that the Linux filesystem has to a C: drive is a single forward slash (/), which represents the beginning of the filesystem. In Linux, everything is a subdirectory of /. When we executed the preceding command (ls -l /), we were telling the terminal emulator that we'd like a listing of /, the beginning of the drive. The -l flag tells the terminal emulator that we would like a long alphabetical listing rather than a horizontal one.

Paths are written as shown in the following example. Here, the path references the Music directory under Joe's home directory:

    /home/joe/Music

The leading forward slash references the beginning of the filesystem. If a path in Linux is typed starting with a single forward slash, this means that the path starts from the beginning of the drive. In the preceding example, if we start at the beginning of the filesystem, we'll see a directory there named home. Inside the home folder, we'll see another directory named joe. Inside the joe directory, we'll find another directory named Music.

The cd command is used to change the directory from the current working directory to the one that we want to work with. Let's demonstrate this with an example. First, let's say that the prompt Joe sees in his terminal is the following:

    joe@Mint ~ $

From this, we can deduce that the current working directory is Joe's home directory. We know this because the ~ character is shorthand for the user's home directory. Let's assume that Joe types the following:

    pwd

Then his output will be as follows:

    /home/joe

In his case, ~ is the same as /home/joe. Since Joe is currently in his home directory, he can see the contents of that directory by simply typing the following command:

    ls

The Music directory that Joe wants to access would be shown in the output, as its path is /home/joe/Music. To change the working directory of the terminal to /home/joe/Music, Joe can type the following:

    cd /home/joe/Music

His prompt will change to the following:

    joe@Mint ~/Music $

However, the cd command does not make you type the full path. With the cd command, you can type an absolute or a relative path. In the preceding cd command, we referenced an absolute path. An absolute path is a path from the beginning of the disk (the single forward slash), with each directory from the beginning completely typed out. In this example, it's unnecessary to type the full path because Joe is already in his home directory. As Music is a subdirectory of the directory he's already in, all he has to do is type the following command in order to get access to his Music directory:

    cd Music

That's it. Without a leading forward slash, the command interpreter understands that we are referencing a directory in the current working directory. If Joe were to use /Music as the path instead, this wouldn't work, because there is no Music directory at the top level of his hard drive. If Joe wants to go back one level, he can enter the following command:

    cd ..

Typing the cd command along with two periods tells the command interpreter that we would like to move back to the level above the one where we currently are. In this case, the command would return Joe back to his home directory.
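Pulling the walkthrough together, the same navigation session can be written as one short annotated sketch (joe and Music are the article's example names; the comments describe what each step does):

    pwd                  # prints /home/joe, the current working directory
    ls                   # lists the contents of /home/joe, including Music
    cd /home/joe/Music   # absolute path, starting from the root of the filesystem
    cd ..                # move back up one level, to /home/joe
    cd Music             # relative path, resolved against the current directory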
This term also has multiple meanings that change depending on the context in which you use it. The word is root. The user account named root is present on all Linux systems. The root account is the Alpha and Omega of the Linux system. The root user has the most permissions of any user on the system; root could even delete the entire filesystem and everything contained within it if necessary. Therefore, it's generally discouraged to use the root account for fear of a typo destroying your entire system. However, in regards to this article, when we talk about root, we're not talking about the root user account. There are actually two other meanings to the word root in Linux in regards to the filesystem. First, you'll often hear of someone referring to the root of the filesystem. They are referring to the single forward slash that represents the beginning of the filesystem. Second, there is a directory in the root of the filesystem named root. Its path is as follows: /root Linux administrators will refer to that directory as "slash root", indicating that it is a directory called root, and it is stored in the root (beginning) of the filesystem. So, what is the /root directory? The /root directory is the home directory for the root account. In this article, we have referred to the /home directory several times. In a Linux system, each user gets their own directory underneath /home. David's home directory would be /home/david and Cindy's home directory is likely to be /home/cindy. (Using lowercase for all user names is a common practice for Linux administrators). Notice, however, that there is no /home/root. The root account is special, and it does not have a home directory in /home as normal users would have. /root is basically the equivalent of a home directory for root. The /root directory is not accessible to ordinary users. For example, try the following command: ls /root The ls command by itself displays the contents of the current working directory. However, if we pass a path to ls, we're telling ls that we want to list the storage of a different directory. In the preceding command, we're requesting to list the storage of the /root directory. Unfortunately, we can't. The root account does not want its directories visible to mortal users. If you execute the command, it will give you an error message indicating that permission was denied. Like many Ubuntu-based distributions, the root account in Mint is actually disabled. Even though it's disabled, the /root directory still exists and the root account can be used but not directly logged in to. The takeaway is that you cannot actually log in as root. So far, we've covered the /home and /root subdirectories of /, but what about the rest? This section of the article will be closed with a brief description of what each directory is used for. Don't worry; you don't have to memorize them all. Just use this section as reference. /bin: This stores essential commands accessible to all users. The executables for commands such as ls are stored here. /boot: This stores the configuration information for the boot loader as well as the initial ramdisk for the boot sequence. /dev: This holds the location for devices to represent pieces of hardware, such as hard drives and sound cards. /etc: This stores the configuration files used in the system. Examples include the configuration for Samba, which handles cross-platform networking, as well as the fstab file, which stores mount points for hard disks. 
/home: As discussed earlier in the article, each user account gets its own directory underneath this directory for storing personal files. /lib: This stores the libraries needed for other binaries. /media: This directory serves as a place for removable media to be mounted. If you insert media (such as a flash drive), you'll find it underneath this directory. /mnt: This directory is used for manual mount points; /media is generally used instead, and this directory still exists as a holdover from the past. /opt: Additional programs can be installed here. /proc: Within /proc, you'll find virtual files that represent processes and kernel data. /root: This is the home directory for the root account. /sbin: This consists of super user program binaries. /tmp: This is a place for temporary files. /usr: This is a directory where utilities and applications can be stored for use by all users, but it is not modified directly by users other than the root user. /var: This is a directory where continually changing files, such as printer spools and logs, are stored.
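To tie the cd, pwd, and ls commands from earlier back to this layout, here is a short illustrative session; the prompt, the user name, and the directories visited are only examples and the exact output will differ on your system:

joe@Mint ~ $ cd /etc        # absolute path: starts at the root of the filesystem
joe@Mint /etc $ pwd
/etc
joe@Mint /etc $ cd ..       # move back up one level, in this case to /
joe@Mint / $ cd home/joe    # relative path: no leading slash, so it starts from the current directory (/)
joe@Mint ~ $ pwd
/home/joe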
Running our first web application

Packt
21 May 2014
8 min read
(For more resources related to this topic, see here.) The standalone/deployments directory, as in the previous releases of JBoss Application Server, is the location used by end users to perform their deployments and applications are automatically deployed into the server at runtime. The artifacts that can be used to deploy are as follows: WAR (Web application Archive): This is a JAR file used to distribute a collection of JSP (Java Server Pages), servlets, Java classes, XML files, libraries, static web pages, and several other features that make up a web application. EAR (Enterprise Archive): This type of file is used by Java EE for packaging one or more modules within a single file. JAR (Java Archive): This is used to package multiple Java classes. RAR (Resource Adapter Archive): This is an archive file that is defined in the JCA specification as the valid format for deployment of resource adapters on application servers. You can deploy a RAR file on the AS Java as a standalone component or as part of a larger application. In both cases, the adapter is available to all applications using a lookup procedure. The deployment in WildFly has some deployment file markers that can be identified quickly, both by us and by WildFly, to understand what is the status of the artifact, whether it was deployed or not. The file markers always have the same name as the artifact that will deploy. A basic example is the marker used to indicate that my-first-app.war, a deployed application, will be the dodeploy suffix. Then in the directory to deploy, there will be a file created with the name my-first-app.war.dodeploy. Among these markers, there are others, explained as follows: dodeploy: This suffix is inserted by the user, which indicates that the deployment scanner will deploy the artifact indicated. This marker is mostly important for exploded deployments. skipdeploy: This marker disables the autodeploy mode while this file is present in the deploy directory, only for the artifact indicated. isdeploying: This marker is placed by the deployment scanner service to indicate that it has noticed a .dodeploy file or a new or updated autodeploy mode and is in the process of deploying the content. This file will be erased by the deployment scanner so the deployment process finishes. deployed: This marker is created by the deployment scanner to indicate that the content was deployed in the runtime. failed: This marker is created by the deployment scanner to indicate that the deployment process failed. isundeploying: This marker is created by the deployment scanner and indicates the file suffix .deployed was deleted and its contents will be undeployed. This marker will be deleted when the process is completely undeployed. undeployed: This marker is created by the deployment scanner to indicate that the content was undeployed from the runtime. pending: This marker is placed by the deployment scanner service to indicate that it has noticed the need to deploy content but has not yet instructed the server to deploy it. When we deploy our first application, we'll see some of these marker files, making it easier to understand their functions. To support learning, the small applications that I made will be available on GitHub (https://github.com) and packaged using Maven (for further details about Maven, you can visit http://maven.apache.org/). To begin the deployment process, we perform a checkout of the first application. First of all you need to install the Git client for Linux. 
To do this, use the following command:

[root@wfly_book ~]# yum install git -y

Besides Git, we also need Maven so that it is possible to perform the packaging process of our first application. Maven can be downloaded from http://maven.apache.org/download.cgi. Once the download is complete, create a directory that will be used to perform the installation of Maven and extract the archive into this directory. In my case, I chose the folder /opt as follows:

[root@wfly_book ~]# mkdir /opt/maven

Extract the file into the newly created directory as follows:

[root@wfly_book maven]# tar -xzvf /root/apache-maven-3.2.1-bin.tar.gz
[root@wfly_book maven]# cd apache-maven-3.2.1/

Run the mvn command and, if any errors are returned, we must set the environment variable M3_HOME, described as follows:

[root@wfly_book ~]# mvn
-bash: mvn: command not found

If the error indicated previously occurs, it is because the Maven binary was not found by the operating system; in this scenario, we must create and configure the environment variable that is responsible for this. Two settings are needed: first, populate the M3_HOME environment variable with the Maven installation directory, and second, add the directory containing the Maven binaries to the PATH environment variable. Access and edit the /etc/profile file, taking advantage of the configuration that we did earlier with the Java environment variable, and see how it will look with the Maven configuration added as well:

#Java and Maven configuration
export JAVA_HOME="/usr/java/jdk1.7.0_45"
export M3_HOME="/opt/maven/apache-maven-3.2.1"
export PATH="$PATH:$JAVA_HOME/bin:$M3_HOME/bin"

Save and close the file and then run the following command to apply the new settings:

[root@wfly_book ~]# source /etc/profile

To verify the configuration performed, run the following command:

[root@wfly_book ~]# mvn -version

Well, now that we have the necessary tools to check out the application, let's begin. First, set a directory where the application's source code will be saved, as shown in the following commands:

[root@wfly_book opt]# mkdir book_apps
[root@wfly_book opt]# cd book_apps/

Let's check out the project using the git clone command; the repository is available at https://github.com/spolti/wfly_book.git. Perform the checkout using the following command:

[root@wfly_book book_apps]# git clone https://github.com/spolti/wfly_book.git

Access the newly created directory using the following command:

[root@wfly_book book_apps]# cd wfly_book/

For the first example, we will use the application called app1-v01, so access this directory and build and deploy the project by issuing the following commands. Make sure that the WildFly server is already running. The first build is always very time-consuming, because Maven will download all the necessary libraries to compile the project, the project dependencies, and Maven's own libraries.

[root@wfly_book wfly_book]# cd app1-v01/
[root@wfly_book app1-v01]# mvn wildfly:deploy

For more details about the WildFly Maven plugin, please take a look at https://docs.jboss.org/wildfly/plugins/maven/latest/index.html. The artifact will be generated and automatically deployed on the WildFly server.
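For the wildfly:deploy goal to be available, the WildFly Maven plugin has to be declared in the project's pom.xml. The sample project already ships with this configuration, but as a rough sketch, such a declaration looks something like the following; the version shown here is only an example, so check the plugin documentation for the current one:

<build>
  <plugins>
    <plugin>
      <groupId>org.wildfly.plugins</groupId>
      <artifactId>wildfly-maven-plugin</artifactId>
      <version>1.0.2.Final</version>
    </plugin>
  </plugins>
</build>

By default, the plugin targets a WildFly instance running locally on the default management port, which matches the setup used in this article.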
Note that a message similar to the following is displayed stating that the application was successfully deployed: INFO [org.jboss.as.server] (ServerService Thread Pool -- 29) JBAS018559: Deployed "app1-v01.war" (runtime-name : "app1-v01.war") When we perform the deployment of some artifact, and if we have not configured the virtual host or context root address, then in order to access the application we always need to use the application name without the suffix, because our application's address will be used for accessing it. The structure to access the application is http://<your-ip-address>:<port-number>/app1-v01/. In my case, it would be http://192.168.11.109:8080/app1-v01/. See the following screenshot of the application running. This application is very simple and is made using JSP and rescuing some system properties. Note that in the deployments directory we have a marker file that indicates that the application was successfully deployed, as follows: [root@wfly_book deployments]# ls -l total 20 -rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war -rw-r--r--. 1 wildfly wildfly 12 Jan 21 07:33 app1-v01.war.deployed -rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt To undeploy the application without having to remove the artifact, we need only remove the app1-v01.war.deployed file. This is done using the following command: [root@wfly_book ~]# cd $JBOSS_HOME/standalone/deployments [root@wfly_book deployments]# rm app1-v01.war.deployed rm: remove regular file `app1-v01.war.deployed'? y In the previous option, you will also need to press Y to remove the file. You can also use the WildFly Maven plugin for undeployment, using the following command: [root@wfly_book deployments]# mvn wildfly:undeploy Notice that the log is reporting that the application was undeployed and also note that a new marker, .undeployed, has been added indicating that the artifact is no longer active in the runtime server as follows: INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018558: Undeployed "app1-v01.war" (runtime-name: "app1-v01.war") And run the following command: [root@wfly_book deployments]# ls -l total 20 -rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war -rw-r--r--. 1 wildfly wildfly 12 Jan 21 09:44 app1-v01.war.undeployed -rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt [root@wfly_book deployments]# If you make undeploy using the WildFly Maven plugin, the artifact will be deleted from the deployments directory. Summary In this article, we learn how to configure an application using a virtual host, the context root, and also how to use the logging tools that we now have available to use Java in some of our test applications, among several other very interesting settings. Resources for Article: Further resources on this subject: JBoss AS Perspective [Article] JBoss EAP6 Overview [Article] JBoss RichFaces 3.3 Supplemental Installation [Article]
Data Warehouse Design

Packt
20 May 2014
14 min read
(For more resources related to this topic, see here.) Most companies are establishing or planning to establish a Business Intelligence system and a data warehouse (DW). Knowledge related to the BI and data warehouse are in great demand in the job market. This article gives you an understanding of what Business Intelligence and data warehouse is, what the main components of the BI system are, and what the steps to create the data warehouse are. This article focuses on the designing of the data warehouse, which is the core of a BI system. A data warehouse is a database designed for analysis, and this definition indicates that designing a data warehouse is different from modeling a transactional database. Designing the data warehouse is also called dimensional modeling. In this article, you will learn about the concepts of dimensional modeling. Understanding Business Intelligence Based on Gartner's definition (http://www.gartner.com/it-glossary/business-intelligence-bi/), Business Intelligence is defined as follows: Business Intelligence is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance. As the definition states, the main purpose of a BI system is to help decision makers to make proper decisions based on the results of data analysis provided by the BI system. Nowadays, there are many operational systems in each industry. Businesses use multiple operational systems to simplify, standardize, and automate their everyday jobs and requirements. Each of these systems may have their own database, some of which may work with SQL Server, some with Oracle. Some of the legacy systems may work with legacy databases or even file operations. There are also systems that work through the Web via web services and XML. Operational systems are very useful in helping with day-to-day business operations such as the process of hiring a person in the human resources department, and sale operations through a retail store and handling financial transactions. The rising number of operational systems also adds another requirement, which is the integration of systems together. Business owners and decision makers not only need integrated data but also require an analysis of the integrated data. As an example, it is a common requirement for the decision makers of an organization to compare their hiring rate with the level of service provided by a business and the customer satisfaction based on that level of service. As you can see, this requirement deals with multiple operational systems such as CRM and human resources. The requirement might also need some data from sales and inventory if the decision makers want to bring sales and inventory factors into their decisions. As a supermarket owner or decision maker, it would be very important to understand what products in which branches were in higher demand. This kind of information helps you to provide enough products to cover demand, and you may even think about creating another branch in some regions. The requirement of integrating multiple operational systems together in order to create consolidated reports and dashboards that help decision makers to make a proper decision is the main directive for Business Intelligence. 
Some organizations and businesses use ERP systems that are integrated, so a question may appear in your mind that there won't be a requirement for integrating data because consolidated reports can be produced easily from these systems. So does that mean that these systems still require a BI solution? The answer in most cases is yes. The companies or businesses might not require a separate BI system for internal and parts of the operations that implemented it through ERP. However, they might require getting some data from outside, for example, getting some data from another vendor's web service or many other protocols and channels to send and receive information. This indicates that there would be a requirement for consolidated analysis for such information, which brings the BI requirement back to the table. The architecture and components of a BI system After understanding what the BI system is, it's time to discover more about its components and understand how these components work with each other. There are also some BI tools that help to implement one or more components. The following diagram shows an illustration of the architecture and main components of the Business Intelligence system: The BI architecture and components differ based on the tools, environment, and so on. The architecture shown in the preceding diagram contains components that are common in most of the BI systems. In the following sections, you will learn more about each component. The data warehouse The data warehouse is the core of the BI system. A data warehouse is a database built for the purpose of data analysis and reporting. This purpose changes the design of this database as well. As you know, operational databases are built on normalization standards, which are efficient for transactional systems, for example, to reduce redundancy. As you probably know, a 3NF-designed database for a sales system contains many tables related to each other. So, for example, a report on sales information may consume more than 10 joined conditions, which slows down the response time of the query and report. A data warehouse comes with a new design that reduces the response time and increases the performance of queries for reports and analytics. You will learn more about the design of a data warehouse (which is called dimensional modeling) later in this article. Extract Transform Load It is very likely that more than one system acts as the source of data required for the BI system. So there is a requirement for data consolidation that extracts data from different sources and transforms it into the shape that fits into the data warehouse, and finally, loads it into the data warehouse; this process is called Extract Transform Load (ETL). There are many challenges in the ETL process, out of which some will be revealed (conceptually) later in this article. According to the definition of states, ETL is not just a data integration phase. Let's discover more about it with an example; in an operational sales database, you may have dozen of tables that provide sale transactional data. When you design that sales data into your data warehouse, you can denormalize it and build one or two tables for it. So, the ETL process should extract data from the sales database and transform it (combine, match, and so on) to fit it into the model of data warehouse tables. There are some ETL tools in the market that perform the extract, transform, and load operations. 
The Microsoft solution for ETL is SQL Server Integration Service (SSIS), which is one of the best ETL tools in the market. SSIS can connect to multiple data sources such as Oracle, DB2, Text Files, XML, Web services, SQL Server, and so on. SSIS also has many built-in transformations to transform the data as required. Data model – BISM A data warehouse is designed to be the source of analysis and reports, so it works much faster than operational systems for producing reports. However, a DW is not that fast to cover all requirements because it is still a relational database, and databases have many constraints that reduce the response time of a query. The requirement for faster processing and a lower response time on one hand, and aggregated information on another hand causes the creation of another layer in BI systems. This layer, which we call the data model, contains a file-based or memory-based model of the data for producing very quick responses to reports. Microsoft's solution for the data model is split into two technologies: the OLAP cube and the In-memory tabular model. The OLAP cube is a file-based data storage that loads data from a data warehouse into a cube model. The cube contains descriptive information as dimensions (for example, customer and product) and cells (for example, facts and measures, such as sales and discount). The following diagram shows a sample OLAP cube: In the preceding diagram, the illustrated cube has three dimensions: Product, Customer, and Time. Each cell in the cube shows a junction of these three dimensions. For example, if we store the sales amount in each cell, then the green cell shows that Devin paid 23$ for a Hat on June 5. Aggregated data can be fetched easily as well within the cube structure. For example, the orange set of cells shows how much Mark paid on June 1 for all products. As you can see, the cube structure makes it easier and faster to access the required information. Microsoft SQL Server Analysis Services 2012 comes with two different types of modeling: multidimensional and tabular. Multidimensional modeling is based on the OLAP cube and is fitted with measures and dimensions, as you can see in the preceding diagram. The tabular model is based on a new In-memory engine for tables. The In-memory engine loads all data rows from tables into the memory and responds to queries directly from the memory. This is very fast in terms of the response time. The BI semantic model (BISM) provided by Microsoft is a combination of SSAS Tabular and Multidimensional solutions. Data visualization The frontend of a BI system is data visualization. In other words, data visualization is a part of the BI system that users can see. There are different methods for visualizing information, such as strategic and tactical dashboards, Key Performance Indicators (KPIs), and detailed or consolidated reports. As you probably know, there are many reporting and visualizing tools on the market. Microsoft has provided a set of visualization tools to cover dashboards, KPIs, scorecards, and reports required in a BI application. PerformancePoint, as part of Microsoft SharePoint, is a dashboard tool that performs best when connected to SSAS Multidimensional OLAP cube. Microsoft's SQL Server Reporting Services (SSRS) is a great reporting tool for creating detailed and consolidated reports. Excel is also a great slicing and dicing tool especially for power users. There are also components in Excel such as Power View, which are designed to build performance dashboards. 
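To relate the cube example back to plain SQL, the orange set of cells (everything Mark paid on June 1 across all products) corresponds to an aggregate query over the underlying fact and dimension tables, along the lines of the following sketch; the table and column names here are purely illustrative:

SELECT c.CustomerName, d.FullDate, SUM(f.SalesAmount) AS TotalPaid
FROM FactSales f
JOIN DimCustomer c ON c.CustomerKey = f.CustomerKey
JOIN DimDate d ON d.DateKey = f.DateKey
WHERE c.CustomerName = 'Mark' AND d.FullDate = '2014-06-01'
GROUP BY c.CustomerName, d.FullDate;

The cube (or the In-memory tabular model) pre-organizes and aggregates this kind of data, so such questions can be answered without joining and scanning the relational tables at query time.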
Master Data Management Every organization has a part of its business that is common between different systems. That part of the data in the business can be managed and maintained as master data. For example, an organization may receive customer information from an online web application form or from a retail store's spreadsheets, or based on a web service provided by other vendors. Master Data Management (MDM) is the process of maintaining the single version of truth for master data entities through multiple systems. Microsoft's solution for MDM is Master Data Services (MDS). Master data can be stored in the MDS entities and it can be maintained and changed through the MDS Web UI or Excel UI. Other systems such as CRM, AX, and even DW can be subscribers of the master data entities. Even if one or more systems are able to change the master data, they can write back their changes into MDS through the staging architecture. Data Quality Services The quality of data is different in each operational system, especially when we deal with legacy systems or systems that have a high dependence on user inputs. As the BI system is based on data, the better the quality of data, the better the output of the BI solution. Because of this fact, working on data quality is one of the components of the BI systems. As an example, Auckland might be written as "Auckland" in some Excel files or be typed as "Aukland" by the user in the input form. As a solution to improve the quality of data, Microsoft provided users with DQS. DQS works based on Knowledge Base domains, which means a Knowledge Base can be created for different domains, and the Knowledge Base will be maintained and improved by a data steward as time passes. There are also matching policies that can be used to apply standardization on the data. Building the data warehouse A data warehouse is a database built for analysis and reporting. In other words, a data warehouse is a database in which the only data entry point is through ETL, and its primary purpose is to cover reporting and data analysis requirements. This definition clarifies that a data warehouse is not like other transactional databases that operational systems write data into. When there is no operational system that works directly with a data warehouse, and when the main purpose of this database is for reporting, then the design of the data warehouse will be different from that of transactional databases. If you recall from the database normalization concepts, the main purpose of normalization is to reduce the redundancy and dependency. The following table shows customers' data with their geographical information: Customer First Name Last Name Suburb City State Country Devin Batler Remuera Auckland Auckland New Zealand Peter Blade Remuera Auckland Auckland New Zealand Lance Martin City Center Sydney NSW Australia Let's elaborate on this example. As you can see from the preceding list, the geographical information in the records is redundant. This redundancy makes it difficult to apply changes. For example, in the structure, if Remuera, for any reason, is no longer part of the Auckland city, then the change should be applied on every record that has Remuera as part of its suburb. The following screenshot shows the tables of geographical information: So, a normalized approach is to retrieve the geographical information from the customer table and put it into another table. Then, only a key to that table would be pointed from the customer table. 
In this way, every time the value Remuera changes, only one record in the geographical region changes and the key number remains unchanged. So, you can see that normalization is highly efficient in transactional systems. This normalization approach is not that effective on analytical databases. If you consider a sales database with many tables related to each other and normalized at least up to the third normalized form (3NF), then analytical queries on such databases may require more than 10 join conditions, which slows down the query response. In other words, from the point of view of reporting, it would be better to denormalize data and flatten it in order to make it easier to query data as much as possible. This means the first design in the preceding table might be better for reporting. However, the query and reporting requirements are not that simple, and the business domains in the database are not as small as two or three tables. So real-world problems can be solved with a special design method for the data warehouse called dimensional modeling. There are two well-known methods for designing the data warehouse: the Kimball and Inmon methodologies. The Inmon and Kimball methods are named after the owners of these methodologies. Both of these methods are in use nowadays. The main difference between these methods is that Inmon is top-down and Kimball is bottom-up. In this article, we will explain the Kimball method. You can read more about the Inmon methodology in Building the Data Warehouse, William H. Inmon, Wiley (http://www.amazon.com/Building-Data-Warehouse-W-Inmon/dp/0764599445), and about the Kimball methodology in The Data Warehouse Toolkit, Ralph Kimball, Wiley (http://www.amazon.com/The-Data-Warehouse-Toolkit-Dimensional/dp/0471200247). Both of these books are must-read books for BI and DW professionals and are reference books that are recommended to be on the bookshelf of all BI teams. This article is referenced from The Data Warehouse Toolkit, so for a detailed discussion, read the referenced book. Dimensional modeling To gain an understanding of data warehouse design and dimensional modeling, it's better to learn about the components and terminologies of a DW. A DW consists of Fact tables and dimensions. The relationship between a Fact table and dimensions are based on the foreign key and primary key (the primary key of the dimension table is addressed in the fact table as the foreign key). Summary This article explains the first steps in thinking and designing a BI system. As the first step, a developer needs to design the data warehouse (DW) and needs an understanding of the key concepts of the design and methodologies to create the data warehouse. Resources for Article: Further resources on this subject: Self-service Business Intelligence, Creating Value from Data [Article] Oracle Business Intelligence : Getting Business Information from Data [Article] Business Intelligence and Data Warehouse Solution - Architecture and Design [Article]
Continuous Integration

Packt
20 May 2014
14 min read
(For more resources related to this topic, see here.) This article is named Continuous Integration; so, what exactly does this mean? You can find many long definitions, but to put it simply, it is a process where you integrate your code with code from other developers and run tests to verify the code functionality. You are aiming to detect problems as soon as possible and trying to fix problems immediately. It is always easier and cheaper to fix a couple of small problems than create one big problem. This can be translated to the following workflow: The change is committed to a version control system repository (such as Git or SVN). The Continuous Integration (CI) server is either notified of, or detects a change and then runs the defined tests. CI notifies the developer if the tests fail. With this method you immediately know who created the problem and when. For the CI to be able to run tests after every commit point, these tests need to be fast. Usually, you can do this with unit tests for integration, and with functional tests it might be better to run them within a defined time interval, for example, once every hour. You can have multiple sets of tests for each project, and another golden rule should be that no code is released to the production environment until all of the tests have been passed. It may seem surprising, but these rules and processes shouldn't make your work any slower, and in fact, should allow you to work faster and be more confident about the developed code functionality and changes. Initial investment pays off when you can focus on adding new functionality and are not spending time on tracking bugs and fixing problems. Also, tested and reliable code refers to code that can be released to the production environment more frequently than traditional big releases, which require a lot of manual testing and verification. There is a real impact on business, and it's not just about the discussion as to whether it is worthwhile and a good idea to write some tests and find yourself restricted by some stupid rules anymore. What will really help and is necessary is a CI server for executing tests and processing the results; this is also called test automation. Of course, in theory you can write a script for it and test it manually, but why would you do that when there are some really nice and proven solutions available? Save your time and energy to do something more useful. In this article, we will see what we can do with the most popular CI servers used by the PHP community: Travis CI Jenkins CI Xinc For us, a CI server will always have the same main task, that is, to execute tests, but to be precise, it includes the following steps: Check the code from the repository. Execute the tests. Process the results. Send a notification when tests fail. This is the bare minimum that a server must handle. Of course, there is much more to be offered, but these steps must be easy to configure. Using a Travis CI hosted service Travis is the easiest to use from the previously mentioned servers. Why is this the case? This is because you don't have to install it. It's a service that provides integration with GitHub for many programming languages, and not just for PHP. Primarily, it's a solution for open source projects, meaning your repository on GitHub is a public repository. It also has commercial support for private repositories and commercial projects. 
What is really good is that you don't have to worry about server configuration; instead, you just have to specify the required configuration (in the same way you do with Composer), and Travis does everything for you. You are not just limited to unit tests, and you can even specify which database you want to use and run ingratiation tests there. However, there is also a disadvantage to this solution. If you want to use it for a private repository, you have to pay for the service, and you are also limited with regard to the server configuration. You can specify your PHP version, but it's not recommended to specify a minor version such as 5.3.8; you should instead use a major version, such as 5.3. On the other hand, you can run tests against various PHP versions, such as PHP 5.3, 5.4, or 5.5, so when you want to upgrade your PHP version, you already have the test results and know how your code will behave with the new PHP version. Travis has become the CI server of choice for many open source projects, and it's no real surprise because it's really good! Setting up Travis CI To use Travis, you will need an account on GitHub. If you haven't got one, navigate to https://github.com/ and register there. When you have a GitHub account, navigate to https://travis-ci.org/ and click on Sign in with GitHub. As you can see in the preceding screenshot, there will be a Travis application added to your GitHub account. This application will work as a trigger that will start a build after any change is pushed onto the GitHub repository. To configure the Travis project, you have to follow these steps: You will be asked to allow Travis to access your account. When you do this you will go back to the Travis site, where you will see a list of your GitHub repositories. By clicking on On/Off, you can decide which project should be used by Travis. When you click on a project configuration, you will be taken to GitHub to enable the service hook. This is because you have to run a build after every commit, and Travis is going to be notified about this change. In the menu, search for Travis and fill in the details that you can find in your Travis account settings. Only the username and token are required, and the domain is optional. For a demonstration, you can refer to my sample project, where there is just one test suite, and its purpose is to test how Travis works (navigate to https://github.com/machek/travis): Using Travis CI When you link your GitHub account to Travis and set up a project to notify Travis, you need to configure the project. You need to follow the project setup in the same way that we did earlier. To have classes, you are required to have the test suites that you want to run, a bootstrap file, and a phpunit.xml configuration file. You should try this configuration locally to ensure that you can run PHPUnit, execute tests, and make sure that all tests pass. If you cloned the sample project, you will see that there is one important file: .travis.yml. This Travis configuration file is telling Travis what the server configuration should look like, and also what will happen after each commit. 
Let's have a look at what this file looks like:

# see http://about.travis-ci.org/docs/user/languages/php/ for more hints
language: php

# list any PHP version you want to test against
php:
  - 5.3
  - 5.4

# optionally specify a list of environments
env:
  - DB=mysql

# execute any number of scripts before the test run, custom env's are available as variables
before_script:
  - if [[ "$DB" == "mysql" ]]; then mysql -e "create database IF NOT EXISTS my_db;" -uroot; fi

# omitting "script:" will default to phpunit
script: phpunit --configuration phpunit.xml --coverage-text

# configure notifications (email, IRC, campfire etc)
notifications:
  email: "your@email"

As you can see, the configuration is really simple: it states that we need PHP 5.3 and 5.4 and a MySQL database, creates the database, executes PHPUnit with our configuration, and sends a report to my e-mail address. After each commit, PHPUnit executes all the tests. The following screenshot shows us an interesting insight into how Travis executes our tests and which environment it uses:

You can view the build and the history for all builds. Even though there are no real builds in PHP because PHP is an interpreted language and not compiled, the action performed when you clone a repository, execute PHPUnit tests, and process results is usually called a build. Travis configuration can be much more complex, and you can run Composer to update dependencies and much more. Just check the Travis documentation for PHP at http://about.travis-ci.org/docs/user/languages/php/.

Using the Jenkins CI server

Jenkins is a CI server. The difference between Travis and Jenkins is that when you use Travis as a service, you don't have to worry about the configuration, whereas Jenkins is a piece of software that you install on your own hardware. This is both an advantage and a disadvantage. The disadvantage is that you have to manually install it, configure it, and also keep it up to date. The advantage is that you can configure it in a way that suits you, and all of the data and code is completely under your control. This can be very important when you have customer code and data (for testing, never use live customer data) or sensitive information that can't be passed on to a third party. The Jenkins project started as a fork of the Hudson project and is written in Java but has many plugins that suit a variety of programming languages, including PHP. In recent years, it has become very popular, and nowadays it is probably the most popular CI server. The reasons for its popularity are that it is really good, can be configured easily, and there are many plugins available that probably cover everything you might need.

Installation

Installation is a really straightforward process. The easiest method is to use a Jenkins installation package from http://jenkins-ci.org/. There are packages available for Windows, OS X, and Linux, and the installation process is well-documented there. Jenkins is written in Java, which means that Java or OpenJDK is required. After this, the installation itself is simple: you just launch the installer, point it to where Jenkins should be installed, and Jenkins will be listening on port 8080. Before we move on to configure the first project (or job in Jenkins terminology), we need to install a few extra plugins. This is Jenkins' biggest advantage. There are many plugins and they are very easy to install. It doesn't matter that Jenkins is a Java app as it also serves PHP very well.
For our task to execute tests, process results, and send notifications, we need the following plugins: Email-ext: This plugin is used to send notifications Git or Subversion: This plugin is used to check the code xUnit: This plugin is used for processing the PHPUnit test results Clover PHP: This plugin is used for processing the code coverage To install these plugins, navigate to Jenkins | Manage Jenkins | Manage Plugins and select the Available tab. You can find and check the required plugins, or alternatively use the search filter to find the one you need: For e-mails, you might need to configure the STMP server connection at Manage Jenkins | Configure System | E-mail notification section. Usage By now, we should have installed everything that we need, and we can start to configure our first simple project. We can use the same simple project that we used for Travis. This is just one test case, but it is important to learn how to set up a project. It doesn't matter if you have one or thousands of tests though, as the setup is going to be the same. Creating a job The first step is to create a new job. Select New Job from the Jenkins main navigation window, give it a name, and select Build a free-style software project. After clicking on OK, you get to the project configuration page. The most interesting things there are listed as follows: Source Code Management: This is where you check the code Build Triggers: This specifies when to run the build Build: This tests the execution for us Post-build Actions: This publishes results and sends notifications The following screenshot shows the project configuration window in Jenkins CI: Source Code Management Source code management simply refers to your version control system, path to the repository, and the branch/branches to be used. Every build is a clean operation, which means that Jenkins starts with a new directory where the code is checked. Build Triggers Build triggers is an interesting feature. You don't have to use it and you can start to build manually, but it is better to specify when a build should run. It can run periodically at a given interval (every two hours), or you can trigger a build remotely. One way to trigger a build is to use post commit hooks in the Git/SVN repository. A post commit hook is a script that is executed after every commit. Hooks are stored in the repository in the /hooks directory (.git/hooks for Git and /hooks for SVN). What you need to do is create a post-commit (SVN) or post-receive (Git) script that will call the URL given by Jenkins when you click on a Trigger remotely checkbox with a secret token: #!/bin/sh wget http ://localhost:8080/job/Sample_Project/build?token=secret12345ABC-O /dev/null After every commit/push to the repository, Jenkins will receive a request to run the build and execute the tests to check whether all of the tests work and that any code change there is not causing unexpected problems. Build A build is something that might sound weird in the PHP world, as PHP is interpreted and not compiled; so, why do we call it a build? It's just a word. For us, it refers to a main part of the process—to execute unit tests. You have to navigate to Add a build step—click on either Execute Windows batch command or Execute shell. This depends on your operating system, but the command remains the same: phpunit --log-junit=result.xml --coverage-clover=clover.xml This is simple and outputs what we want. 
It executes tests, stores the results in the JUnit format in the file result.xml, and generates code coverage in the clover format in the file clover.xml. I should probably mention that PHPUnit is not installed with Jenkins, and your build machine on which Jenkins is running must have PHPUnit installed and configured, including PHP CLI. Post-build Actions In our case, there are three post-build actions required. They are listed as follows: Process the test result: This denotes whether the build succeeded or failed. You need to navigate to Add a post-build action | Publish Junit test result report and type result.xml. This matches the switch --log-junit=result.xml. Jenkins will use this file to check the tests results and publish them. Generate code coverage: This is similar to the first step. You have to add the Publish Clover PHP Coverage report field and type clover.xml. It uses a second switch, --coverage-clover=clover.xml, to generate code coverage, and Jenkins uses this file to create a code coverage report. E-mail notification: It is a good idea to send an e-mail when a build fails in order to inform everybody that there is a problem, and maybe even let them know who caused this problem and what the last commit was. This step can be added simply by choosing E-mail notification action. Results The result could be just an e-mail notification, which is handy, but Jenkins also has a very nice dashboard that displays the current status for each job, and you can also see and view the build history to see when and why a build failed. A nice feature is that you can drill down through the test results or code coverage and find more details about test cases and code coverage per class. To make testing even more interesting, you can use Jenkins' The Continuous Integration Game plugin. Every developer receives positive points for written tests and a successful build, and negative points for every build that they broke. The game leaderboard shows who is winning the build game and writing better code.
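For reference, the phpunit.xml file that both the Travis and Jenkins builds rely on can stay very small. The following is only a minimal sketch; the bootstrap path and the test directory are assumptions and need to match your own project layout:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="tests/bootstrap.php" colors="true">
  <testsuites>
    <testsuite name="Project Test Suite">
      <directory>tests</directory>
    </testsuite>
  </testsuites>
</phpunit>

With a file like this in the project root, running phpunit with the same configuration picks up the same tests locally, on Travis, and on the Jenkins build machine, which keeps all three environments consistent.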
Updating data in the background

Packt
20 May 2014
4 min read
(For more resources related to this topic, see here.)

Getting ready

Create a new Single View Application in Xamarin Studio and name it BackgroundFetchApp. Add a label to the controller.

How to do it...

Perform the following steps:

We need access to the label from outside of the scope of the BackgroundFetchAppViewController class, so create a public property for it as follows:

public UILabel LabelStatus {
  get { return this.lblStatus; }
}

Open the Info.plist file and under the Source tab, add the UIBackgroundModes key (Required background modes) with the string value, fetch. The following screenshot shows you the editor after it has been set:

In the FinishedLaunching method of the AppDelegate class, enter the following line:

UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);

Enter the following code, again, in the AppDelegate class:

private int updateCount;

public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)
{
  try {
    HttpWebRequest request = WebRequest.Create("http://software.tavlikos.com") as HttpWebRequest;
    using (StreamReader sr = new StreamReader(request.GetResponse().GetResponseStream())) {
      Console.WriteLine("Received response: {0}", sr.ReadToEnd());
    }
    this.viewController.LabelStatus.Text =
      string.Format("Update count: {0}\n{1}", ++updateCount, DateTime.Now);
    completionHandler(UIBackgroundFetchResult.NewData);
  } catch {
    this.viewController.LabelStatus.Text =
      string.Format("Update {0} failed at {1}!", ++updateCount, DateTime.Now);
    completionHandler(UIBackgroundFetchResult.Failed);
  }
}

Compile and run the app on the simulator or on the device. Press the home button (or Command + Shift + H) to move the app to the background and wait for an output. This might take a while, though.

How it works...

The UIBackgroundModes key with the fetch value enables the background fetch functionality for our app. Without setting it, the app will not wake up in the background. After setting the key in Info.plist, we override the PerformFetch method in the AppDelegate class, as follows:

public override void PerformFetch (UIApplication application, Action<UIBackgroundFetchResult> completionHandler)

This method is called whenever the system wakes up the app. Inside this method, we can connect to a server and retrieve the data we need. An important thing to note here is that we do not have to use iOS-specific APIs to connect to a server. In this example, a simple HttpWebRequest is used to fetch the contents of this blog: http://software.tavlikos.com. After we have received the data we need, we must call the callback that is passed to the method, as follows:

completionHandler(UIBackgroundFetchResult.NewData);

We also need to pass the result of the fetch. In this example, we pass UIBackgroundFetchResult.NewData if the update is successful and UIBackgroundFetchResult.Failed if an exception occurs. If we do not call the callback in the specified amount of time, the app will be terminated. Furthermore, it might get fewer opportunities to fetch the data in the future. Lastly, to make sure that everything works correctly, we have to set the interval at which the app will be woken up, as follows:

UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(UIApplication.BackgroundFetchIntervalMinimum);

The default interval is UIApplication.BackgroundFetchIntervalNever, so if we do not set an interval, the background fetch will never be triggered.
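If you would rather not leave the scheduling entirely up to the predefined constant, the same call also accepts an interval in seconds. For example, the following line asks iOS to wake the app no more often than once an hour; the one-hour figure is only an illustration, and the system still treats it as a minimum rather than a guarantee:

// Request background fetches at most once every 3600 seconds (one hour)
UIApplication.SharedApplication.SetMinimumBackgroundFetchInterval(3600);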
There's more Except for the functionality we added in this project, the background fetch is completely managed by the system. The interval we set is merely an indication and the only guarantee we have is that it will not be triggered sooner than the interval. In general, the system monitors the usage of all apps and will make sure to trigger the background fetch according to how often the apps are used. Apart from the predefined values, we can pass whatever value we want in seconds. UI updates We can update the UI in the PerformFetch method. iOS allows this so that the app's screenshot is updated while the app is in the background. However, note that we need to keep UI updates to the absolute minimum. Summary Thus, this article covered the things to keep in mind to make use of iOS 7's background fetch feature. Resources for Article: Further resources on this subject: Getting Started on UDK with iOS [Article] Interface Designing for Games in iOS [Article] Linking OpenCV to an iOS project [Article]
Configuring placeholder datastores

Packt
20 May 2014
1 min read
(For more resources related to this topic, see here.) Assuming that each of these paired sites is geographically separated, each site will have its own placeholder datastore. The following figure shows the site and placeholder datastore relationship: This is how you configure placeholder datastores: Navigate to vCenter Server's inventory home page and click on Site Recovery. Click on Sites in the left pane and select a site. Navigate to the Placeholder Datastores tab and click on Configure Placeholder Datastore, as shown in the following screenshot: In the Configure Placeholder Datastore window, select an appropriate datastore and click on OK. To confirm the selection, exit the window. Now, the Placeholder Datastores tab should show the configured placeholder. Refer to the following screenshot: If you plan to configure a Failback, repeat the procedure in the recovery site. Summary In this article, we covered the steps to be followed in order to configure placeholder datastores. Resources for Article: Further resources on this subject: Disaster Recovery for Hyper-V [Article] VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article] Disaster Recovery Techniques for End Users [Article]
Going Beyond the Basics

Packt
19 May 2014
8 min read
(For more resources related to this topic, see here.) Chef's declarative language Chef recipes are declarative, which means that it provides a high-level language for describing what to do to accomplish the task at hand without requiring that you provide a specific implementation or procedure. This means that you can focus on building recipes and modeling infrastructure using abstract resources so that it is clear what is happening without having to know how it is happening. Take, as an example, a portion of the recipes we looked at earlier for deploying an IIS application that is responsible for installing some Windows features: features = %w{IIS-ISAPIFilter IIS-ISAPIExtensions NetFx3ServerFeatures NetFx4Extended-ASPNET45 IIS-NetFxExtensibility45} features.each do |f| windows_feature f do action :install end end Because of Chef's declarative language, the preceding section of code reads in a natural way. We have a list of features. For each of those features, which we know to be Windows features, install them. Because of this high-level abstraction, your recipe can describe what is going on without containing all of the logic necessary to do the actual work. If you were to look into the windows cookbook, you would see that there are a number of implementations using DISM, PowerShell, and ServerManagerCmd. Rather than worrying about that in the recipe itself, the logic is deferred to the provider that is selected for the given resource. The feature resource knows that if a host has DISM, it will use the DISM provider; otherwise, it will look for the existence of servermanagercmd.exe and, if it is present, use that as the installation provider. This makes recipes more expressive and much less cluttered. If Chef did not provide this high-level abstraction, your recipe would look more like the following code snippet: features = %w{IIS-ISAPIFilter IIS-ISAPIExtensions NetFx3ServerFeatures NetFx4Extended-ASPNET45 IIS-NetFxExtensibility45} features.each do |f| if ::File.exists?(locate_cmd('dism.exe')) install_via_dism(f) elsif ::File.exists?(locate_ cmd('servermanagercmd.exe')) install_via_servermgrcmd(f) else fail end end def install_via_dism(feature_name) ## some code here to execute DISM end def install_via_servermgrcmd(feature_name) ## some code here to execute servermgrcmd.exe end This, while understandable, significantly increases the overall complexity of your recipes and reduces readability. Now, rather than simply focusing on installing the features, the recipe contains a lot of logic about how to perform the installation. Now, imagine writing a recipe that needs to create files and set ownership on those files and be usable across multiple platforms. Without abstractions, the recipe would contain implementation details of how to determine if a platform is Windows or Linux, how to determine user or group IDs from a string representation, what file permissions look like on different platforms, and so on. However, with the level of abstraction that Chef provides, that recipe would look like the following code snippet: file_names = %w{hello.txt goodbye.txt README.md myfile.txt} file_names.each do |file_name| file file_name action :create owner "someuser" mode 0660 end end Behind the scenes, when the recipe is executed, the underlying providers know how to convert these declarations into system-level commands. Let's take a look at how we could build a single recipe that is capable of installing something on both Windows and Linux. 
Handling multiple platforms One of Chef's strengths is the ability to integrate Windows hosts with non-Windows hosts. It is important to not only develop recipes and cookbooks that are capable of supporting both Linux and Windows hosts, but also to be able to thoroughly test them before rolling them out into your infrastructure. Let's take a look at how you can support multiple platforms in your recipes as well as use a popular testing framework, ChefSpec, to write tests to test your recipes and validate platformspecific behaviors. Declaring support in your cookbook All Chef cookbooks have a metadata.rb file; this file outlines dependencies, ownership, version data, and compatibility. Compatibility in a homogeneous environment is a less important property—all the hosts run the same operating system. When you are modeling a heterogeneous environment, the ability to describe compatibility is more important; without it, you might try to apply a Windows-only recipe to a Linux host or the other way around. In order to indicate the platforms that are supported by your cookbook, you will want to add one or more supports stanzas to the metadata.rb file. For example, a cookbook that supports Debian and Windows would have two supports statements as follows: supports "windows" supports "debian" However, if you were to support a lot of different platforms, you can always script your configuration. For example, you could use something similar to the following code snippet: %w(windows debian ubuntu redhat fedora).each |os| supports os end Multiplatform recipes In the following code example, we will look at how we could install Apache, a popular web server, on both a Windows and a Debian system inside of a single recipe: if platform_family? 'debian' package 'apache2' elsif platform_family? 'windows' windows_package node['apache']['windows']['service_name'] do source node['apache']['windows']['msi_url'] installer_type :msi # The last four options keep the service from failing # before the httpd.conf file is created options %W[ /quiet INSTALLDIR="#{node['apache']['install_dir']}" ALLUSERS=1 SERVERADMIN=#{node['apache']['serveradmin']} SERVERDOMAIN=#{node['fqdn']} SERVERNAME=#{node['fqdn']} ].join(' ') end end template node['apache']['config_file'] do source "httpd.conf.erb" action :create notifies :restart, "service[apache2]" end # The apache service service "apache2" do if platform_family? 'windows' service_name "Apache#{node['apache']['version']}" end action [ :enable, :start ] end In this example, we perform a very basic installation of the Apache 2.x service on the host. There are no modules enabled, no virtual hosts, or anything else. However, it does allow us to define a recipe that will install Apache, generate an httpd.conf file, and then enable and start the Apache 2 service. You will notice that there is a little bit of platform-specific configuration going on here, first with how to install the package and second with how to enable the service. Because the package resource does not support Windows, the installation of the package on Windows will use the windows_package resource and the package resource on a Debian host. 
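This is also exactly the kind of platform-specific behavior that ChefSpec, mentioned at the start of this section, is good at pinning down before anything touches a real node. The following is only a sketch: the cookbook and recipe names are assumptions, the platform/version pair must be one that your installed Fauxhai data provides, and depending on your ChefSpec version the runner class may be ChefSpec::Runner or ChefSpec::SoloRunner:

# spec/apache_spec.rb
require 'chefspec'

describe 'mycookbook::apache' do
  # Converge the recipe against simulated Debian node data
  let(:chef_run) do
    ChefSpec::Runner.new(platform: 'debian', version: '7.0').converge(described_recipe)
  end

  it 'installs Apache with the package resource on Debian' do
    expect(chef_run).to install_package('apache2')
  end

  it 'writes the Apache configuration file' do
    expect(chef_run).to create_template(chef_run.node['apache']['config_file'])
  end
end

Running the same examples again with a Windows platform in the runner lets you assert that the windows_package path is taken instead. With that in mind, let's look at the configuration data this recipe expects.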
To make this work, we will need some configuration data to apply during installation; skimming over the recipe, we find that we would need a configuration hash similar to the following code snippet: 'config': { 'apache': { 'version': '2.2.48', 'config_file': '/opt/apache2/conf/httpd.conf', 'install_dir': '/opt/apache2', 'serveradmin': 'admin@domain.com', 'windows': { 'service_name': 'Apache 2.x', 'msi_url': 'http://some.url/apache2.msi' } } } This recipe allows our configuration to remain consistent across all nodes; we don't need to override any configuration values to make this work on a Linux or Windows host. You may be saying to yourself "but, /opt/apache2 won't work on Windows". It will, but it is interpreted as optapache2 on the same drive as the Chef client's current working directory; thus, if you ran Chef from C:, it would become c:optapache2. By making our configuration consistent, we do not need to construct any special roles to store our Windows configuration separate from our Linux configuration. If you do not like installing Apache in the same directory on both Linux and Windows hosts, you could easily modify the recipe to have some conditional logic as follows: apache_config_file = if platform_family? 'windows' node['apache']['windows']['config_file'] else node['apache']['linux']['config_file'] end template apache_config_file do source "httpd.conf.erb" action :create notifies :restart, "service[apache2]" end Here, the recipe is made slightly more complicated in order to accommodate the two platforms, but at the benefit of one consistent cross-platform configuration specification. Alternatively, you could use the recipe as it is defined and create two roles, one for Windows Apache servers and the other for Linux Apache servers, each with their own configuration. An apache_windows role may have the following override configuration: 'config': { 'apache': { 'config_file': "C:\Apps\Apache2\Config\httpd.conf", 'install_dir': "C:\Apps\Apache2" } } In contrast, an apache_linux role might have a configuration that looks like the following code snippet: 'config': { 'apache': { 'config_file': "/usr/local/apache2/conf/httpd.conf", 'install_dir': "/usr/local/apache2" } } The impact of this approach is that now you have to maintain separate platformspecific roles. When a host is provisioned or being configured (either through the control panel or via knife), you need to remember that it is a Windows host and therefore has a specific Chef role associated with it. This leads to potential mistakes in host configuration as a result of increased complexity. Summary In this article, we have learned about the declarative language of Chef, we understood the various ways to handle multiple platforms, and also learned about the multiplatform recipes. Resources for Article: Further resources on this subject: Chef Infrastructure [Article] Getting started with using Chef [Article] How to use PowerShell Web Access to manage Windows Server [Article]

Sending Data to Google Docs

Packt
16 May 2014
9 min read
(For more resources related to this topic, see here.)

The first step is to set up a Google Docs spreadsheet for the project. Create a new sheet, give it a name (I named mine Power for this project, but you can name it as you wish), and set a title for the columns that we are going to use: Time, Interval, Power, and Energy (that will be calculated from the first two columns), as shown in the following screenshot:

We can also calculate the value of the energy using the other measurements. From theory, we know that over a given period of time, energy is power multiplied by time; that is, Energy = Power * Time. However, in our case, power is calculated at regular intervals, and we want to estimate the energy consumption for each of these intervals. In mathematical terms, this means we need to calculate the integral of power as a function of time. We don't have the exact function between time and power as we sample this function at regular time intervals, but we can estimate this integral using a method called the trapezoidal rule. It means that we basically estimate the integral of the function, which is the area below the power curve, by a trapezoid. The energy cell in the spreadsheet is then given by the formula:

Energy = (PowerMeasurement + NextPowerMeasurement) * TimeInterval / 2

Concretely, in Google Docs, you will need the formula, D2 = (B2 + B3)*C2/2. The Arduino Yún board will give you the power measurement, and the time interval is given by the value we set in the sketch. However, the time between two measurements can vary from measurement to measurement. This is due to the delay introduced by the network. To solve this issue, we will transmit the exact value along with the power measurement to get a much better estimate of the energy consumption.

Then, it's time to build the sketch that we will use for the project. The goal of this sketch is basically to wait for commands that come from the network, to switch the relay on or off, and to send data to the Google Docs spreadsheet at regular intervals to keep track of the energy consumption. We will build the sketch on top of the sketch we built earlier, so I will explain which components need to be added. First, you need to include your Temboo credentials using the following line of code:

#include "TembooAccount.h"

Since we can't continuously measure the power consumption data (the data transmitted would be huge, and we would quickly exceed our monthly access limit for Temboo!), like in the test sketch, we need to measure it at given intervals only. However, at the same time, we need to continuously check whether a command is received from the outside to switch the state of the relay. This is done by setting the correct timings first, as shown in the following code:

int server_poll_time = 50;
int power_measurement_delay = 10000;
int power_measurement_cycles_max = power_measurement_delay/server_poll_time;

The server poll time will be the interval at which we check the incoming connections. The power measurement delay, as you can guess, is the delay at which the power is measured. However, we can't use a simple delay function for this as it will put the entire sketch on hold. What we are going to do instead is to count the number of cycles of the main loop and then trigger a measurement when the right number of cycles has been reached, using a simple if statement. The right number of cycles is given by the power_measurement_cycles_max variable.
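As a quick aside, the trapezoidal-rule estimate described above is easy to sanity-check away from the spreadsheet. The following is a small Ruby sketch (an offline helper, not part of the Arduino code) that applies Energy = (PowerMeasurement + NextPowerMeasurement) * TimeInterval / 2 to a few made-up interval/power pairs and converts the total to kWh; the sample values and the $0.16-per-kWh rate used later in the article are assumptions for illustration only.

# Hypothetical rows as they would be logged to the sheet:
# [interval in milliseconds, measured power in watts]
samples = [
  [10_000, 41.2],
  [10_050, 40.8],
  [9_980,   0.3]
]

# Trapezoidal rule: pair each measurement with the next one and
# weight the average power by the interval (converted to seconds).
energy_joules = samples.each_cons(2).sum do |(interval_ms, power), (_, next_power)|
  (power + next_power) / 2.0 * (interval_ms / 1000.0)
end

energy_kwh = energy_joules / 3_600_000.0
cost = energy_kwh * 0.16 # assumed price per kWh

puts format('Energy: %.1f J (%.8f kWh), estimated cost: $%.6f', energy_joules, energy_kwh, cost)

Because the sketch transmits the real interval along with each measurement, the same calculation holds even when network delays make the intervals uneven.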
You also need to insert your Google Docs credentials using the following lines of code:

const String GOOGLE_USERNAME = "yourGoogleUsername";
const String GOOGLE_PASSWORD = "yourGooglePass";
const String SPREADSHEET_TITLE = "Power";

In the setup() function, you need to start a date process that will keep track of the measurement date. We want to keep track of the measurements over several days, so we will transmit the date of the day as well as the time, as shown in the following code:

time = millis();
if (!date.running()) {
  date.begin("date");
  date.addParameter("+%D-%T");
  date.run();
}

In the loop() function of the sketch, we check whether it's time to perform a measurement from the current sensor, as shown in the following line of code:

if (power_measurement_cycles > power_measurement_cycles_max) {

If that's the case, we measure the sensor value, as follows:

float sensor_value = getSensorValue();

We also get the exact measurement interval that we will transmit along with the measured power to get a correct estimate of the energy consumption, as follows:

measurements_interval = millis() - last_measurement;
last_measurement = millis();

We then calculate the effective power from the data we already have. The amplitude of the current is obtained from the sensor measurements. Then, we can get the effective value of the current by dividing this amplitude by the square root of 2. Finally, as we know the effective voltage and that power is current multiplied by voltage, we can calculate the effective power as well, as shown in the following code:

// Convert to current
amplitude_current = (float)(sensor_value - zero_sensor)/1024*5/185*1000000;
effective_value = amplitude_current/1.414;
// Calculate power
float effective_power = abs(effective_value * effective_voltage/1000);

After this, we send the data with the time interval to Google Docs and reset the counter for power measurements, as follows:

runAppendRow(measurements_interval, effective_power);
power_measurement_cycles = 0;

Let's quickly go into the details of this function. It starts by declaring the Temboo Choreo we want to use, as follows:

TembooChoreo AppendRowChoreo;

We then call begin() on it, as shown in the following line of code:

AppendRowChoreo.begin();

We then need to set the data that concerns your Google account, for example, the username, as follows:

AppendRowChoreo.addInput("Username", GOOGLE_USERNAME);

The actual formatting of the data is done with the following line of code:

data = data + timeString + "," + String(interval) + "," + String(effectiveValue);

Here, interval is the time interval between two measurements, and effectiveValue is the value of the measured power that we want to log on to Google Docs. The Choreo is then executed with the following line of code:

AppendRowChoreo.run();

Finally, we wait for 50 milliseconds on each pass through the loop and increment the power measurement counter each time, as follows:

delay(server_poll_time);
power_measurement_cycles++;

The complete code is available at https://github.com/openhomeautomation/geeky-projects-yun/tree/master/chapter2/energy_log.

The code for this part is complete. You can now upload the sketch, open the Google Docs spreadsheet, and wait until the first measurement arrives. The following screenshot shows the first measurement I got:

After a few moments, I got several measurements logged on my Google Docs spreadsheet. I also played a bit with the lamp control by switching it on and off so that we can actually see changes in the measured data.
The following screenshot shows the first few measurements:

It's good to have some data logged in the spreadsheet, but it is even better to display this data in a graph. I used the built-in plotting capabilities of Google Docs to plot the power consumption over time on a graph, as shown in the following screenshot:

Using the same kind of graph, you can also plot the calculated energy consumption data over time, as shown in the following screenshot:

From the data you get in this Google Docs spreadsheet, it is also quite easy to get other interesting data. You can, for example, estimate the total energy consumption over time and the price that it will cost you. The first step is to calculate the sum of the energy consumption column using the integrated sum functionality of Google Docs. Then, you have the energy consumption in Joules, but that's not what the electricity company usually charges you for. Instead, they use kWh, which is basically the Joule value divided by 3,600,000. The last thing we need is the price of a single kWh. Of course, this will depend on the country you're living in, but at the time of writing this article, the price in the USA was approximately $0.16 per kWh. To get the total price, you then just need to multiply the total energy consumption in kWh with the price per kWh. This is the result with the data I recorded. Of course, as I only took a short sample of data, it cost me nearly nothing in the end, as shown in the following screenshot:

You can also estimate the on/off time of the device you are measuring. For this purpose, I simply added an additional column next to Energy named On/Off. I simply used the formula =IF(C2<2;0;1). It means that if the power is less than 2W, we count it as an off state; otherwise, we count it as an on state. I didn't set the condition to 0W to count it as an off state because of the small fluctuations over time from the current sensor. Then, when you have this data about the different on/off states, it's quite simple to count the number of occurrences of each state, for example, on states, using =COUNTIF(E:E,"1"). I applied these formulas in my Google Docs spreadsheet, and the following screenshot is the result with the sample data I recorded:

It is also very convenient to represent this data in a graph. For this, I used a pie chart, which I believe is the graph best suited to this kind of data. The following screenshot is what I got with my measurements:

With the preceding kind of chart, you can compare the usage of a given lamp from day to day, for example, to know whether you have left the lights on when you are not there.

Summary

In this article, we learned how to send data to Google Docs, measure energy consumption, and store this data on the Web.

Resources for Article:

Further resources on this subject:

Home Security by BeagleBone [Article]
Playing with Max 6 Framework [Article]
Our First Project – A Basic Thermometer [Article]

Adding a Geolocation Trigger to the Salesforce Account Object

Packt
16 May 2014
8 min read
(For more resources related to this topic, see here.)

Obtaining the Google API key

First, you need to obtain an API key for the Google Geocoding API:

Visit https://code.google.com/apis/console and sign in with your Google account (assuming you already have one).
Click on the Create Project button.
Enter My Salesforce Account Project for the Project name.
Accept the default value for the Project ID.
Click on Create.
Click on APIs & auth from the left-hand navigation bar.
Set the Geocoding API to ON.
Select Credentials and click on CREATE NEW KEY.
Click on the Browser Key button.
Click on Create to generate the key.
Make a note of the API key.

Adding a Salesforce remote site

Now, we need to add a Salesforce remote site for the Google Maps API:

Navigate to Setup | Security Controls | Remote Site Settings.
Click on the New Remote Site button.
Enter Google_Maps_API for the Remote Site Name.
Enter https://maps.googleapis.com for the Remote Site URL.
Ensure that the Active checkbox is checked.
Click on Save.

Your remote site detail should resemble the following screenshot:

Adding the Location custom field to Account

Next, we need to add a Location field to the Account object:

Navigate to Setup | Customize | Accounts | Fields.
Click on the New button in the Custom Fields & Relationships section.
Select Geolocation for the Data Type.
Click on Next.
Enter Location for the Field Label. The Field Name should also default to Location.
Select Decimal for the Latitude and Longitude Display Notation.
Enter 7 for the Decimal Places.
Click on Next.
Click on Next to accept the defaults for Field-Level Security.
Click on Save to add the field to all account-related page layouts.

Adding the Apex Utility Class

Next, we need an Apex utility class to geocode an address using the Google Geocoding API:

Navigate to Setup | Develop | Apex Classes. All of the Apex classes for your organization will be displayed.
Click on Developer Console.
Navigate to File | New | Apex Class.
Enter AccountGeocodeAddress for the Class Name and click on OK.
Enter the following code into the Apex Code Editor in your Developer Console window:

// static variable to determine if geocoding has already occurred
private static Boolean geocodingCalled = false;

// wrapper method to prevent calling future methods from an existing future context
public static void DoAddressGeocode(id accountId) {
  if (geocodingCalled || System.isFuture()) {
    System.debug(LoggingLevel.WARN, '***Address Geocoding Future Method Already Called - Aborting...');
    return;
  }
  // if not being called from future context, geocode the address
  geocodingCalled = true;
  geocodeAddress(accountId);
}

The DoAddressGeocode method and the static variable geocodingCalled protect us from a potential error where a future method might be called from within a future method that is already executing. If that isn't the case, we call the geocodeAddress method, which is defined next.
Enter the following code into the Apex Code Editor in your Developer Console window:

// we need a future method to call Google Geocoding API from Salesforce
@future (callout=true)
static private void geocodeAddress(id accountId) {
  // Key for Google Maps Geocoding API
  String geocodingKey = '[Your API Key here]';

  // get the passed in address
  Account geoAccount = [SELECT BillingStreet, BillingCity, BillingState, BillingCountry, BillingPostalCode
    FROM Account
    WHERE id = :accountId];

  // check that we have enough information to geocode the address
  if ((geoAccount.BillingStreet == null) || (geoAccount.BillingCity == null)) {
    System.debug(LoggingLevel.WARN, 'Insufficient Data to Geocode Address');
    return;
  }

  // create a string for the address to pass to Google Geocoding API
  String geoAddress = '';
  if (geoAccount.BillingStreet != null)
    geoAddress += geoAccount.BillingStreet + ', ';
  if (geoAccount.BillingCity != null)
    geoAddress += geoAccount.BillingCity + ', ';
  if (geoAccount.BillingState != null)
    geoAddress += geoAccount.BillingState + ', ';
  if (geoAccount.BillingCountry != null)
    geoAddress += geoAccount.BillingCountry + ', ';
  if (geoAccount.BillingPostalCode != null)
    geoAddress += geoAccount.BillingPostalCode;

  // encode the string so we can pass it as part of URL
  geoAddress = EncodingUtil.urlEncode(geoAddress, 'UTF-8');

  // build and make the callout to the Geocoding API
  Http http = new Http();
  HttpRequest request = new HttpRequest();
  request.setEndpoint('https://maps.googleapis.com/maps/api/geocode/json?address='
    + geoAddress + '&key=' + geocodingKey
    + '&sensor=false');
  request.setMethod('GET');
  request.setTimeout(60000);

  try {
    // make the http callout
    HttpResponse response = http.send(request);

    // parse JSON to extract co-ordinates
    JSONParser responseParser = JSON.createParser(response.getBody());

    // initialize co-ordinates
    double latitude = null;
    double longitude = null;

    while (responseParser.nextToken() != null) {
      if ((responseParser.getCurrentToken() == JSONToken.FIELD_NAME) &&
        (responseParser.getText() == 'location')) {
        responseParser.nextToken();
        while (responseParser.nextToken() != JSONToken.END_OBJECT) {
          String locationText = responseParser.getText();
          responseParser.nextToken();
          if (locationText == 'lat')
            latitude = responseParser.getDoubleValue();
          else if (locationText == 'lng')
            longitude = responseParser.getDoubleValue();
        }
      }
    }

    // update co-ordinates on address if we get them back
    if (latitude != null) {
      geoAccount.Location__Latitude__s = latitude;
      geoAccount.Location__Longitude__s = longitude;
      update geoAccount;
    }
  } catch (Exception e) {
    System.debug(LoggingLevel.ERROR, 'Error Geocoding Address - ' + e.getMessage());
  }
}

Insert your Google API key in the following line of code:

String geocodingKey = '[Your API Key here]';

Navigate to File | Save.

Adding the Apex Trigger

Finally, we need to implement an Apex trigger class to geocode the Billing Address when an Account is added or updated:

Navigate to Setup | Develop | Apex Triggers. All of the Apex triggers for your organization will be displayed.
Click on Developer Console.
Navigate to File | New | Apex Trigger in the Developer Console.
Enter geocodeAccountAddress in the Name field.
Select Account in the Objects dropdown list and click on Submit.
Enter the following code into the Apex Code Editor in your Developer Console window:

trigger geocodeAccountAddress on Account (after insert, after update) {

  // bulkify trigger in case of multiple accounts
  for (Account account : trigger.new) {

    // check if Billing Address has been updated
    Boolean addressChangedFlag = false;
    if (Trigger.isUpdate) {
      Account oldAccount = Trigger.oldMap.get(account.Id);
      if ((account.BillingStreet != oldAccount.BillingStreet) ||
        (account.BillingCity != oldAccount.BillingCity) ||
        (account.BillingCountry != oldAccount.BillingCountry) ||
        (account.BillingPostalCode != oldAccount.BillingPostalCode)) {
          addressChangedFlag = true;
          System.debug(LoggingLevel.DEBUG, '***Address changed for - ' + oldAccount.Name);
      }
    }

    // if address is null or has been changed, geocode it
    if ((account.Location__Latitude__s == null) || (addressChangedFlag == true)) {
      System.debug(LoggingLevel.DEBUG, '***Geocoding Account - ' + account.Name);
      AccountGeocodeAddress.DoAddressGeocode(account.id);
    }
  }
}

Navigate to File | Save.

The after insert / after update account trigger itself is relatively simple. If the Location field is blank, or the Billing Address has been updated, a call is made to the AccountGeocodeAddress.DoAddressGeocode method to geocode the address against the Google Maps Geocoding API.

Summary

Congratulations, you have now completed the Geolocation trigger for your Salesforce Account object. With this, we can calculate distances between two objects in Salesforce or search for accounts/contacts within a certain radius.

Resources for Article:

Further resources on this subject:

Learning to Fly with Force.com [Article]
Salesforce CRM Functions [Article]
Force.com: Data Management [Article]

Optimizing Magento Performance — Using HHVM

Packt
16 May 2014
5 min read
(For more resources related to this topic, see here.)

HipHop Virtual Machine

As we could write a whole book (or two) about HHVM, we will just give the key ideas here. HHVM is a virtual machine that will translate any called PHP file into HHVM byte code, in the same spirit as the Java or .NET virtual machines. HHVM transforms your PHP code into a lower-level language that is much faster to execute. Of course, the transformation time (compiling) does cost a lot of resources, therefore HHVM is shipped with a cache mechanism similar to APC. This way, the compiled PHP files are stored and reused when the original file is requested. With HHVM, you keep the PHP flexibility and ease of writing, but you now have performance like that of C++. Hear the words of the HHVM team at Facebook:

"HHVM (aka the HipHop Virtual Machine) is a new open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time compilation approach to achieve superior performance while maintaining the flexibility that PHP developers are accustomed to. To date, HHVM (and its predecessor HPHPc) has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP 5.2 engine + APC. HHVM can be run as a standalone webserver (in other words, without the Apache webserver and the "modphp" extension). HHVM can also be used together with a FastCGI-based webserver, and work is in progress to make HHVM work smoothly with Apache."

If you think this is too good to be true, you're right! Indeed, HHVM has a major drawback. HHVM was and still is focused on the needs of Facebook. Therefore, you might have a bad time trying to use your custom-made PHP applications inside it. Nevertheless, this opportunity to speed up large PHP applications has been seen by talented developers who improve it, day after day, in order to support more and more frameworks. As our interest is in Magento, I will introduce you to Daniel Sloof, who is a developer from the Netherlands. More interestingly, Daniel has done (and still does) an amazing job of adapting HHVM for Magento.

Here are the commands to install Daniel Sloof's version of HHVM for Magento:

$ sudo apt-get install git
$ git clone https://github.com/danslo/hhvm.git
$ sudo chmod +x configure_ubuntu_12.04.sh
$ sudo ./configure_ubuntu_12.04.sh
$ sudo CMAKE_PREFIX_PATH=`pwd`/.. make

If you thought that the first step was long, you will be astonished by the time required to actually build HHVM. Nevertheless, the wait is definitely worth it. The following screenshot shows how your terminal will look for the next hour or so:

Create a file named hhvm.hdf under /etc/hhvm and write the following code inside:

Server {
  Port = 80
  SourceRoot = /var/www/_MAGENTO_HOME_
}
Eval {
  Jit = true
}
Log {
  Level = Error
  UseLogFile = true
  File = /var/log/hhvm/error.log
  Access {
    * {
      File = /var/log/hhvm/access.log
      Format = %h %l %u %t \"%r\" %>s %b
    }
  }
}
VirtualHost {
  * {
    Pattern = .*
    RewriteRules {
      dirindex {
        pattern = ^/(.*)/$
        to = $1/index.php
        qsa = true
      }
    }
  }
}
StaticFile {
  FilesMatch {
    * {
      pattern = .*\.(dll|exe)
      headers {
        * = Content-Disposition: attachment
      }
    }
  }
  Extensions {
    css = text/css
    gif = image/gif
    html = text/html
    jpe = image/jpeg
    jpeg = image/jpeg
    jpg = image/jpeg
    png = image/png
    tif = image/tiff
    tiff = image/tiff
    txt = text/plain
  }
}

Now, run the following command:

$ sudo ./hhvm --mode daemon --config /etc/hhvm.hdf

The hhvm executable is under hhvm/hphp/hhvm. Is all of this worth it?
Here's the response:

ab -n 100 -c 5 http://192.168.0.105/index.php/furniture/living-room.html

Server Software:
Server Hostname:        192.168.0.105
Server Port:            80

Document Path:          /index.php/furniture/living-room.html
Document Length:        35552 bytes

Concurrency Level:      5
Time taken for tests:   4.970 seconds
Requests per second:    20.12 [#/sec] (mean)
Time per request:       248.498 [ms] (mean)
Time per request:       49.700 [ms] (mean, across all concurrent requests)
Transfer rate:          707.26 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   12.1      0      89
Processing:   107  243   55.9    243     428
Waiting:      107  242   55.9    242     427
Total:        110  245   56.7    243     428

We literally reach a whole new world here. Indeed, our Magento instance is six times faster than after all our previous optimizations and about 20 times faster than the default Magento served by Apache. The following graph shows the performances:

Our Magento instance is now flying at lightning speed, but what are the drawbacks? Is it still as stable as before? Are all the optimizations we did so far still effective? Can we go even further? In what follows, we present a non-exhaustive list of answers:

Fancy extensions and modules may (and will) trigger HHVM incompatibilities.
Magento is a relatively old piece of software and combining it with a cutting-edge technology such as HHVM can have some unpredictable (and undesirable) effects.
HHVM is so complex that fixing a Magento-related bug requires a lot of skill and dedication.
HHVM takes care of PHP, not of the cache mechanisms or accelerators we installed before. Therefore, APC, memcached, and Varnish are still running and helping to improve our performance.
If you become addicted to performance, HHVM now supports FastCGI through Nginx and Apache. You can find out more about that at http://www.hhvm.com/blog/1817/fastercgi-with-hhvm.

Summary

In this article, we successfully used the HipHop Virtual Machine (HHVM) from Facebook to serve Magento. This improvement speeds up our Magento performance incredibly (about 20 times faster): the time required initially was 110 seconds, while now it is less than 5 seconds.

Resources for Article:

Further resources on this subject:

Magento: Exploring Themes [article]
Getting Started with Magento Development [article]
Enabling your new theme in Magento [article]

Saving Data to Create Longer Games

Packt
15 May 2014
6 min read
(For more resources related to this topic, see here.)

Creating collectibles to save

The Unity engine features an incredibly simple saving and loading system that can load your data between sessions in just a few lines of code. The downside of using Unity's built-in data management is that save data will be erased if the game is ever uninstalled from the OUYA console. Later, we'll talk about how to make your data persistent between installations, but for now, we'll set up some basic data-driven values in your marble prototype. However, before we load the saved data, we have to create something to save.

Time for action – creating a basic collectible

Some games use save data to track the total number of times the player has obtained a collectible item. Players may not feel like it's worth gathering collectibles if they disappear when the game session is closed, but making the game track their long-term progress can give players the motivation to explore a game world and discover everything it has to offer. We're going to add collectibles to the marble game prototype you created and save them so that the player can see how many collectibles they've gathered in total over every play session. Perform the following steps to create a collectible:

Open your RollingMarble Unity project and double-click on the scene that has your level in it.
Create a new cylinder from the Create menu in your Hierarchy menu.
Move the cylinder so that it rests on the level's platform. It should appear as shown in the following screenshot:
We don't want our collectible to look like a plain old cylinder, so manipulate it with the rotate and scale tools until it looks a little more like a coin. Obviously, you'll have a coin model in the final game that you can load, but we can customize and differentiate primitive objects for the purpose of our prototype.
Our primitive is starting to look like a coin, but it's still a bland gray color. To make it look a little bit nicer, we'll use Unity to apply a material. A material tells the engine how an object should appear when it is rendered, including which textures and colors to use for each object. Right now, we'll only apply a basic color, but later on we'll see how it can store different kinds of textures and other data. Materials can be created and customized in a matter of minutes in Unity, and they're a great way to color simple objects or distinguish primitive shapes from one another.
Create a new folder named Materials in your Project window and right-click on it to create a new material named CoinMaterial, as shown in the following screenshot:
Click on the material that you just created and its properties will appear in the Inspector window. Click on the color box next to the Main Color property and change it to a yellow color. The colored sphere in the Material window will change to reflect how the material will look in real time, as shown in the following screenshot:
Our collectible coin now has a color, but as we can see from the preview of the sphere, it's still kind of dull. We want our coin to be shiny so that it catches the player's eye, so we'll change the Shader type, which dictates how light hits the object. The current Shader type on our coin material is Diffuse, which basically means it is a softer, nonreflective material. To make the coin shiny, change the Shader type to Specular. You'll see a reflective flare appear on the sphere preview; adjust the Shininess slider to see how different levels of specularity affect the material.
You may have noticed that another color value was added when you changed the material's shader from Diffuse to Specular; this value affects only the shiny parts of the object. You can make the material shine brighter by changing it from gray to white, or give its shininess a tint by using a completely new color.

Attach your material to the collectible object by clicking-and-dragging the material from the Project window and releasing it over the object in your scene view. The object will look like the one shown in the following screenshot:

Our collectible coin object now has a unique shape and appearance, so it's a good idea to save it as a prefab. Create a Prefabs folder in your Project window if you haven't already, and use the folder's right-click menu to create a new blank prefab named Coin. Click-and-drag the coin object from the hierarchy to the prefab to complete the link. We'll add code to the coin later, but we can change the prefab after we initially create it, so don't worry about saving an incomplete collectible.

Verify whether the prefab link worked by clicking-and-dragging multiple instances of the prefab from the Project window onto the Scene view.

What just happened?

Until you start adding 3D models to your game, primitives are a great way to create placeholder objects, and materials are useful for making them look more complex and unique. Materials add color to objects, but they also contain a shader that affects the way light hits the object. The two most basic shaders are Diffuse (dull) and Specular (shiny), but there are several other shaders in Unity that can help make your object appear exactly like you want it. You can even code your own shaders using the ShaderLab language, which you can learn on your own using the documentation at http://docs.unity3d.com/Documentation/Components/SL-Reference.html. Next, we'll add some functionality to your coin to save the collection data.

Have a go hero – make your prototype stand out with materials

As materials are easy to set up with Unity's color picker and built-in shaders, you have a lot of options at your fingertips to quickly make your prototype stand out and look better than a basic grayscale mock-up. Take any of your existing projects and see how far you can push the aesthetic with different combinations of colors and materials. Keep the following points in mind:

Some shaders, such as Specular, have multiple colors that you can assign. Play around with different combinations to create a unique appearance.
There are more shaders available to you than just the ones loaded into a new project; move your mouse over the Import Package option in Unity's Assets menu and import the Toon Shading package to add even more options to your shader collection.
Complex object prefabs made of more than one primitive can have a different material on each primitive. Add multiple materials to a single object to help your user differentiate between its various parts and give your scene more detail.

Try changing the materials used in your scene until you come up with something unique and clean, as shown in the following screenshot of our cannon prototype with custom materials:

Building a Simple Blog

Packt
15 May 2014
8 min read
(For more resources related to this topic, see here.)

Setting up the application

Every application has to be set up, so we'll begin with that. Create a folder for your project—I'll call mine simpleBlog—and inside that, create a file named package.json. If you've used Node.js before, you know that the package.json file describes the project; lists the project home page, repository, and other links; and (most importantly for us) outlines the dependencies for the application. Here's what the package.json file looks like:

{
  "name": "simple-blog",
  "description": "This is a simple blog.",
  "version": "0.1.0",
  "scripts": {
    "start": "nodemon server.js"
  },
  "dependencies": {
    "express": "3.x.x",
    "ejs" : "~0.8.4",
    "bourne" : "0.3"
  },
  "devDependencies": {
    "nodemon": "latest"
  }
}

This is a pretty bare-bones package.json file, but it has all the important bits. The name, description, and version properties should be self-explanatory. The dependencies object lists all the npm packages that this project needs to run: the key is the name of the package and the value is the version. Since we're building an ExpressJS backend, we'll need the express package. The ejs package is for our server-side templates and bourne is our database (more on this one later).

The devDependencies property is similar to the dependencies property, except that these packages are only required for someone working on the project. They aren't required to just use the project. For example, a build tool and its components, such as Grunt, would be development dependencies.

We want to use a package called nodemon. This package is really handy when building a Node.js backend: we can have a command line that runs the nodemon server.js command in the background while we edit server.js in our editor. The nodemon package will restart the server whenever we save changes to the file. The only problem with this is that we can't actually run the nodemon server.js command on the command line, because we're going to install nodemon as a local package and not a global process. This is where the scripts property in our package.json file comes in: we can write a simple script, almost like a command-line alias, to start nodemon for us. As you can see, we're creating a script called start, and it runs nodemon server.js. On the command line, we can run npm start; npm knows where to find the nodemon binary and can start it for us.

So, now that we have a package.json file, we can install the dependencies we've just listed. On the command line, change the current directory to the project directory, and run the following command:

npm install

You'll see that all the necessary packages will be installed. Now we're ready to begin writing the code.

Starting with the server

I know you're probably eager to get started with the actual Backbone code, but it makes more sense for us to start with the server code. Remember, good Backbone apps will have strong server-side components, so we can't ignore the backend completely. We'll begin by creating a server.js file in our project directory. Here's how that begins:

var express = require('express');
var path = require('path');
var Bourne = require("bourne");

If you've used Node.js, you know that the require function can be used to load Node.js components (path) or npm packages (express and bourne).
Now that we have these packages in our application, we can begin using them as follows:

var app = express();
var posts = new Bourne("simpleBlogPosts.json");
var comments = new Bourne("simpleBlogComments.json");

The first variable here is app. This is our basic Express application object, which we get when we call the express function. We'll be using it a lot in this file.

Next, we'll create two Bourne objects. As I said earlier, Bourne is the database we'll use in our projects in this article. This is a simple database that I wrote specifically for this article. To keep the server side as simple as possible, I wanted to use a document-oriented database system, but I wanted something serverless (for example, SQLite), so you didn't have to run both an application server and a database server. What I came up with, Bourne, is a small package that reads from and writes to a JSON file; the path to that JSON file is the parameter we pass to the constructor function. It's definitely not good for anything bigger than a small learning project, but it should be perfect for this article. In the real world, you can use one of the excellent document-oriented databases. I recommend MongoDB: it's really easy to get started with, and it has a very natural API. Bourne isn't a drop-in replacement for MongoDB, but it's very similar. You can check out the simple documentation for Bourne at https://github.com/andrew8088/bourne.

So, as you can see here, we need two databases: one for our blog posts and one for comments (unlike most databases, Bourne has only one table or collection per database, hence the need for two). The next step is to write a little configuration for our application:

app.configure(function(){
  app.use(express.json());
  app.use(express.static(path.join(__dirname, 'public')));
});

This is a very minimal configuration for an Express app, but it's enough for our usage here. We're adding two layers of middleware to our application; they are "mini-programs" that the HTTP requests coming to our application will run through before getting to our custom functions (which we have yet to write). We add two layers here: the first is express.json(), which parses the JSON request bodies that Backbone will send to the server; the second is express.static(), which will statically serve files from the path given as a parameter. This allows us to serve the client-side JavaScript files, CSS files, and images from the public folder. You'll notice that both these middleware pieces are passed to app.use(), which is the method we call to tell Express to use them.

You'll also notice that we're using the path.join() method to create the path to our public assets folder, instead of just concatenating __dirname and 'public'. This is because Microsoft Windows requires the separating slashes to be backslashes. The path.join() method will get it right for whatever operating system the code is running on. Oh, and __dirname (two underscores at the beginning) is just a variable for the path to the directory this script is in.

The next step is to create a route method:

app.get('/*', function (req, res) {
  res.render("index.ejs");
});

In Express, we can create a route by calling a method on the app that corresponds to the desired HTTP verb (get, post, put, and delete). Here, we're calling app.get() and we pass two parameters to it. The first is the route; it's the portion of the URL that will come after your domain name.
In our case, we're using an asterisk, which is a catchall; it will match any route that begins with a forward slash (which will be all routes). This will match every GET request made to our application. If an HTTP request matches the route, then the function, which is the second parameter, will be called. This function takes two parameters; the first is the request object from the client and the second is the response object that we'll use to send our response back. These are often abbreviated to req and res, but that's just a convention; you could call them whatever you want.

So, we're going to use the res.render method, which will render a server-side template. Right now, we're passing a single parameter: the path to the template file. Actually, it's only part of the path, because Express assumes by default that templates are kept in a directory named views, a convention we'll be using. Express can guess the template package to use based on the file extension; that's why we don't have to select EJS as the template engine anywhere. If we had values that we wanted to interpolate into our template, we would pass a JavaScript object as the second parameter. We'll come back and do this a little later.

Finally, we can start up our application; I'll choose to use port 3000:

app.listen(3000);

We'll be adding a lot more to our server.js file later, but this is what we'll start with. Actually, at this point, you can run npm start on the command line and open up http://localhost:3000 in a browser. You'll get an error because we haven't made the view template file yet, but you can see that our server is working.