C# with NGUI

Packt
29 Dec 2014
23 min read
In this article by Charles Pearson, the author of Learning NGUI for Unity, we will talk about C# scripting with NGUI. We will learn how to handle events and interact with them through code. We'll use them to:

- Play tweens with effects through code
- Implement a localized tooltip system
- Localize labels through code
- Assign callback methods to events using both code and the Inspector view

We'll learn many more useful C# tips throughout the book. Right now, let's start with events and their associated methods.

Events

When scripting in C# with the NGUI plugin, some methods will often be used. For example, you will regularly need to know whether an object is currently hovered upon, pressed, or clicked. Of course, you could code your own system—but NGUI handles that very well, and it's important to use it at its full potential in order to save development time.

Available methods

When you create and attach a script to an object that has a collider on it (for example, a button or a 3D object), you can add the following useful methods within the script to catch events:

- OnHover(bool state): This method is called when the object is hovered or unhovered. The state bool gives the hover state; if state is true, the cursor just entered the object's collider. If state is false, the cursor has just left the collider's bounds.
- OnPress(bool state): This method works in the exact same way as the previous OnHover() method, except it is called when the object is pressed. It also works for touch-enabled devices. If you need to know which mouse button was used to press the object, use the UICamera.currentTouchID variable: if this int is equal to -1, it's a left-click; if it's equal to -2, it's a right-click; and if it's equal to -3, it's a middle-click.
- OnClick(): This method is similar to OnPress(), except that it is called only when the click is validated, meaning when an OnPress(true) event occurs followed by an OnPress(false) event. It works with mouse clicks and touch (tap). In order to handle double clicks, you can also use the OnDoubleClick() method, which works in the same way.
- OnDrag(Vector2 delta): This method is called at each frame when the mouse or touch moves between the OnPress(true) and OnPress(false) events. The Vector2 delta argument gives you the object's movement since the last frame.
- OnDrop(GameObject droppedObj): This method is called when an object is dropped on the GameObject this script is attached to. The dropped GameObject is passed as the droppedObj parameter.
- OnSelect(bool state): This method is called when the user clicks on the object. It will not be called again until another object is clicked on or the object is deselected (a click on empty space).
- OnTooltip(bool state): This method is called when the cursor is over the object for more than the duration defined by the Tooltip Delay inspector parameter of UICamera. If the Sticky Tooltip option of UICamera is checked, the tooltip remains visible until the cursor moves outside the collider; otherwise, it disappears as soon as the cursor moves.
- OnScroll(float delta): This method is called when the mouse's scroll wheel is moved while the object is hovered—the delta parameter gives you the amount and direction of the scroll.

If you attach your script to a 3D object to catch these events, make sure it is on a layer included in the Event Mask of UICamera.

Now that we've seen the available event methods, let's see how they are used in a simple example.
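Before the full example, here is a minimal sketch, not from the book, of the OnPress() notes above: it distinguishes mouse buttons using the UICamera.currentTouchID values just described.

    // A minimal sketch: telling mouse buttons apart inside OnPress().
    void OnPress(bool state)
    {
        if (!state) return; // only react when the press begins

        switch (UICamera.currentTouchID)
        {
            case -1: Debug.Log("Left-click on " + name); break;
            case -2: Debug.Log("Right-click on " + name); break;
            case -3: Debug.Log("Middle-click on " + name); break;
            default: Debug.Log("Touch on " + name); break; // non-negative IDs typically correspond to touches
        }
    }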
Example

To illustrate when these events occur and how to catch them, you can create a new EventTester.cs script with the following code:

    void OnHover(bool state)
    {
        Debug.Log(this.name + " Hover: " + state);
    }

    void OnPress(bool state)
    {
        Debug.Log(this.name + " Pressed: " + state);
    }

    void OnClick()
    {
        Debug.Log(this.name + " Clicked");
    }

    void OnDrag(Vector2 delta)
    {
        Debug.Log(this.name + " Drag: " + delta);
    }

    void OnDrop(GameObject droppedObject)
    {
        Debug.Log(droppedObject.name + " dropped on " + this.name);
    }

    void OnSelect(bool state)
    {
        Debug.Log(this.name + " Selected: " + state);
    }

    void OnTooltip(bool state)
    {
        Debug.Log("Show " + this.name + "'s Tooltip: " + state);
    }

    void OnScroll(float delta)
    {
        Debug.Log("Scroll of " + delta + " on " + this.name);
    }

The preceding lines are the event methods we discussed, implemented with their respective necessary arguments. Now attach our Event Tester component to any GameObject with a collider, like our Main | Buttons | Play button, and hit Unity's play button. Events that occur on the object the script is attached to are tracked in the Console output.

I recommend that you keep the EventTester.cs script in a handy file directory as a reminder of the available event methods. Indeed, for each event, you can simply replace the Debug.Log() lines with the instructions you need.

Now we know how to catch events through code. Let's use them to display a tooltip!

Creating tooltips

Let's use the OnTooltip() event to show a tooltip for our buttons and different options. The tooltip object we are going to create is composed of four elements:

- Tooltip: The tooltip container, with the Tooltip component attached
- Background: The background sprite that wraps around Label
- Border: A yellow border that wraps around Background
- Label: The label that displays the tooltip's text

We will also make sure the tooltip is localized using NGUI methods.

The tooltip object

In order to create the tooltip object, we'll first create its visual elements (widgets), and then we'll attach the Tooltip component to it in order to define it as NGUI's tooltip.

Widgets

First, we need to create the tooltip object's visual elements:

1. Select our UI Root GameObject in the Hierarchy view.
2. Hit Alt + Shift + N to create a new empty child GameObject.
3. Rename this new child from GameObject to Tooltip.
4. Add the NGUI Panel (UIPanel) component to it.
5. Set this new UIPanel's Depth to 10.

In the preceding steps, we've created the tooltip container. It has UIPanel with a Depth value of 10 in order to make sure our tooltip will remain on top of other panels. Now, let's create the faintly transparent background sprite:

1. With Tooltip selected, hit Alt + Shift + S to create a new child sprite.
2. Rename this new child from Sprite to Background.
3. Select our new Tooltip | Background GameObject, and configure UISprite as follows:
   - Make sure Atlas is set to Wooden Atlas.
   - Set Sprite to the Window sprite.
   - Make sure Type is set to Sliced.
   - Change Color Tint to {R: 90, G: 70, B: 0, A: 180}.
   - Set Pivot to top-left (left arrow + up arrow).
   - Change Size to 500 x 85.
   - Reset its Transform position to {0, 0, 0}.

Ok, we can now easily add a fully opaque border with the following trick:

1. With Tooltip | Background selected, hit Ctrl + D to duplicate it.
2. Rename this new duplicate to Border.
3. Select Tooltip | Border and configure its attached UISprite as follows:
   - Disable the Fill Center option.
   - Change Color Tint to {R: 255, G: 220, B: 0, A: 255}.
   - Change the Depth value to 1.
   - Set Anchors Type to Unified.
   - Make sure the Execute parameter is set to OnUpdate.
   - Drag Tooltip | Background into the new Target field.

By not filling the center of the Border sprite, we now have a yellow border around our background. We used anchors to make sure this border always wraps the background, even during runtime—thanks to the Execute parameter set to OnUpdate.

Let's create the tooltip's label:

1. With Tooltip selected, hit Alt + Shift + L to create a new label.
2. For the new Label GameObject, set the following parameters for UILabel:
   - Set Font Type to NGUI, and Font to Arimo20 with a size of 40.
   - Change Text to [FFCC00]This[FFFFFF] is a tooltip.
   - Change Overflow to ResizeHeight.
   - Set Effect to Outline, with an X and Y of 1 and a black color.
   - Set Pivot to top-left (left arrow + up arrow).
   - Change X Size to 434. The height adjusts to the amount of text.
   - Set the Transform position to {33, -22, 0}.

Ok, good. We now have a label that can display our tooltip's text. This label's height will adjust automatically as the text gets longer or shorter. Let's configure anchors to make sure the background always wraps around the label:

1. Select our Tooltip | Background GameObject.
2. Set Anchors Type to Unified.
3. Drag Tooltip | Label into the new Target field.
4. Set the Execute parameter to OnUpdate.

Great! Now, if you edit our tooltip's text label to a very long text, you'll see that the background adjusts automatically.

UITooltip

We can now add the UITooltip component to our tooltip object:

1. Select our UI Root | Tooltip GameObject.
2. Click the Add Component button in the Inspector view.
3. Type tooltip with your keyboard to search for components.
4. Select Tooltip and hit Enter or click on it with your mouse.
5. Configure the newly attached UITooltip component as follows:
   - Drag UI Root | Tooltip | Label into the Text field.
   - Drag UI Root | Tooltip | Background into the Background field.

The tooltip object is ready! It is now defined as a tooltip for NGUI. Now, let's see how we can display it when needed using a few simple lines of code.

Displaying the tooltip

We must now show the tooltip when needed. In order to do that, we can use the OnTooltip() event, in which we request to display the tooltip with localized text:

1. Select our three Main | Buttons | Exit, Options, and Play buttons.
2. Click the Add Component button in the Inspector view.
3. Type ShowTooltip with your keyboard.
4. Hit Enter twice to create and attach the new ShowTooltip.cs script.
5. Open this new ShowTooltip.cs script.

First, we need to add this public key variable to define which text we want to display:

    // The localization key of the text to display
    public string key = "";

Ok, now add the following OnTooltip() method, which retrieves the localized text and requests to show or hide the tooltip depending on the state bool:

    // When the OnTooltip event is triggered on this object
    void OnTooltip(bool state)
    {
        // Get the final localized text
        string finalText = Localization.Get(key);

        // If the tooltip must be removed...
        if (!state)
        {
            // ...set the finalText to nothing
            finalText = "";
        }

        // Request the tooltip display
        UITooltip.ShowText(finalText);
    }

Save the script.
As you can see in the preceding code, the Localization.Get(string key) method returns the localized text for the key parameter that is passed. You can now use it to localize a label through code anytime! In order to hide the tooltip, we simply request UITooltip to show an empty tooltip.

To use Localization.Get(string key), your label must not have a UILocalize component attached to it; otherwise, the value of UILocalize will overwrite anything you assign to UILabel.

Ok, we have added the code to show our tooltip with localized text. Now, open the Localization.txt file, and add these localized strings:

    // Tooltips
    Play_Tooltip, "Launch the game!", "Lancer le jeu !"
    Options_Tooltip, "Change language, nickname, subtitles...", "Changer la langue, le pseudo, les sous-titres..."
    Exit_Tooltip, "Leaving us already?", "Vous nous quittez déjà ?"

Now that our localized strings are added, we could manually configure the key parameter of our three buttons' Show Tooltip components to respectively display Play_Tooltip, Options_Tooltip, and Exit_Tooltip. But that would be a repetitive action, and if we want to add localized tooltips easily for future and existing objects, we should implement the following system: if the key parameter is empty, we'll try to get a localized text based on the GameObject's name. Let's do this now.

Open our ShowTooltip.cs script, and add this Start() method:

    // At start
    void Start()
    {
        // If key parameter isn't defined in inspector...
        if (string.IsNullOrEmpty(key))
        {
            // ...set it now based on the GameObject's name
            key = name + "_Tooltip";
        }
    }

Click on Unity's play button. That's it! When you leave your cursor on any of our three buttons, a localized tooltip appears. The tooltip wraps around the displayed text perfectly, and we didn't have to manually configure the buttons' Show Tooltip key parameters!

Actually, I have a feeling that the display delay is too long. Let's correct this:

1. Select our UI Root | Camera GameObject.
2. Set Tooltip Delay of UICamera to 0.3.

That's better—our localized tooltip now appears after 0.3 seconds of hovering.

Adding the remaining tooltips

We can now easily add tooltips for our Options page's elements. The tooltip works on any GameObject with a collider attached to it. Let's use a search by type to find them:

1. In the Hierarchy view's search bar, type t:boxcollider.
2. Select Checkbox, Confirm, Input, List (both), Music, and SFX.
3. Click on the Add Component button in the Inspector view.
4. Type show with your keyboard to search the components.
5. Hit Enter or click on the Show Tooltip component to attach it to them.

For the objects with generic names, such as Input and List, we need to set their key parameter manually, as follows:

1. Select the Checkbox GameObject, and set Key to Sound_Tooltip.
2. Select the Input GameObject, and set Key to Nickname_Tooltip.
3. For the List for language selection, set Key to Language_Tooltip.
4. For the List for subtitles selection, set Key to Subtitles_Tooltip.

To know whether the selected list is the language or subtitles list, look at the Options of its UIPopup List: if it has the None option, it's the subtitles selection.
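Before adding the remaining localization strings, it may help to see the pieces assembled in one place. The following is a sketch of the complete ShowTooltip.cs as built above; only the wrapping class declaration and using directive are additions:

    using UnityEngine;

    public class ShowTooltip : MonoBehaviour
    {
        // The localization key of the text to display
        public string key = "";

        // If no key was set in the Inspector, derive one from the GameObject's name
        void Start()
        {
            if (string.IsNullOrEmpty(key))
            {
                key = name + "_Tooltip";
            }
        }

        // Show or hide the localized tooltip when the OnTooltip event fires
        void OnTooltip(bool state)
        {
            // An empty string asks UITooltip to hide the tooltip
            UITooltip.ShowText(state ? Localization.Get(key) : "");
        }
    }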
Finally, we need to add these localization strings in the Localization.txt file:

    Sound_Tooltip, "Enable or disable game sound", "Activer ou désactiver le son du jeu"
    Nickname_Tooltip, "Name used during the game", "Pseudo utilisé lors du jeu"
    Language_Tooltip, "Game and user interface language", "Langue du jeu et de l'interface"
    Subtitles_Tooltip, "Subtitles language", "Langue des sous-titres"
    Confirm_Tooltip, "Confirm and return to main menu", "Confirmer et retourner au menu principal"
    Music_Tooltip, "Game music volume", "Volume de la musique"
    SFX_Tooltip, "Sound effects volume", "Volume des effets"

Hit Unity's play button. We now have localized tooltips for all our options! We now know how to easily use NGUI's tooltip system. It's time to talk about tween methods.

Tweens

The tweens we have used until now were components we added to GameObjects in the scene. It is also possible to easily add tweens to GameObjects through code. You can see all the available tweens by simply typing Tween inside any method in your favorite IDE; auto-completion will show you the list of Tween classes.

The strong point of these classes is that they work in one line and don't have to be executed at each frame; you just have to call their Begin() method! Here, we will apply tweens to widgets, but keep in mind that this works in the exact same way with other GameObjects, since NGUI widgets are GameObjects.

Tween Scale

Previously, we've used the Tween Scale component to make our main window disappear when the Exit button is pressed. Let's do the same when the Play button is pressed, but this time we'll do it through code to understand how it's done.

The DisappearOnClick script

We will first create a new DisappearOnClick.cs script that will tween a target's scale to {0.01, 0.01, 0.01} when the GameObject it's attached to is clicked on:

1. Select our UI Root | Main | Buttons | Play GameObject.
2. Click the Add Component button in the Inspector view.
3. Type DisappearOnClick with your keyboard.
4. Hit Enter twice to create and add the new DisappearOnClick.cs script.
5. Open this new DisappearOnClick.cs script.

First, we must add a public target GameObject to define which object will be affected by the tween, and a duration float to define the speed:

    // Declare the target we'll tween down to {0.01, 0.01, 0.01}
    public GameObject target;
    // Declare a float to configure the tween's duration
    public float duration = 0.3f;

Ok, now let's add the following OnClick() method, which creates a new tween towards {0.01, 0.01, 0.01} on our desired target using the duration variable:

    // When this object is clicked
    private void OnClick()
    {
        // Create a tween on the target
        TweenScale.Begin(target, duration, Vector3.one * 0.01f);
    }

In the preceding code, we scale the target down towards 0.01f over the desired duration. Save the script.

Good. Now, we simply have to assign our variables in the Inspector view:

1. Go back to Unity and select our Play button GameObject.
2. Drag our UI Root | Main object into the DisappearOnClick Target field.

Great. Now, hit Unity's play button. When you click the menu's Play button, our main menu is scaled down to {0.01, 0.01, 0.01} with the single TweenScale.Begin() line! Now that we've seen how to make a basic tween, let's see how to add effects.

Tween effects

Right now, our tween is simple and linear. In order to add an effect to the tween, we first need to store it as UITweener, which is its parent class.
Replace the lines of our OnClick() method with these, to first store the tween and then set an effect:

    // Retrieve the new target's tween
    UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f);
    // Set the new tween's effect method
    tween.method = UITweener.Method.EaseInOut;

That's it. Our tween now has an EaseInOut effect. The following tween effect methods are also available:

- BounceIn: Bouncing effect at the start of the tween
- BounceOut: Bouncing effect at the end of the tween
- EaseIn: Smooth acceleration effect at the start of the tween
- EaseInOut: Smooth acceleration and deceleration
- EaseOut: Smooth deceleration effect at the end of the tween
- Linear: Simple linear tween without any effects

You can set the tween's ignoreTimeScale to true if you want it to always run at normal speed, even if your Time.timeScale variable is different from 1.

Great. We now know how to add tween effects through code. Now, let's see how we can set event delegates through code.

Event delegates

Many NGUI components broadcast events, for which you can set an event delegate—also known as a callback method—executed when the event is triggered. We did this through the Inspector view by assigning the Notify and Method fields when buttons were clicked. For any type of tween, you can set a specific event delegate for when the tween is finished. We'll see how to do this through code. Before we continue, we must create our callback first. Let's create a callback that loads a new scene.

The callback

Open our MenuManager.cs script, and add this static LoadGameScene() callback method:

    public static void LoadGameScene()
    {
        // Load the Game scene now
        Application.LoadLevel("Game");
    }

Save the script. The preceding code requests to load the Game scene. To ensure Unity finds our scenes at runtime, we need to create the Game scene and add both the Menu and Game scenes to the build settings:

1. Navigate to File | Build Settings.
2. Click on the Add Current button (don't close the window now).
3. In Unity, navigate to File | New Scene.
4. Navigate to File | Save Scene as….
5. Save the scene as Game.unity.
6. Click on the Add Current button of the Build Settings window and close it.
7. Navigate to File | Open Scene and re-open our Menu.unity scene.

Ok, now that both scenes have been added to the build settings, we are ready to link our callback to our event.

Linking a callback to an event

Now that our LoadGameScene() callback method is written, we must link it to our event. We have two solutions. First, we'll see how to assign it using code exclusively, and then we'll create a more flexible system using NGUI's Notify and Method fields.

Code

In order to set a callback for a specific event, a generic solution exists for all NGUI events you might encounter: the EventDelegate.Set() method. You can also add multiple callbacks to an event using EventDelegate.Add(). Add this line at the end of the OnClick() method of DisappearOnClick.cs:

    // Set the tween's onFinished event to our LoadGameScene callback
    EventDelegate.Set(tween.onFinished, MenuManager.LoadGameScene);

Instead of the preceding line, we can also use the tween-specific SetOnFinished() convenience method. We'll get the exact same result with fewer words:

    // Another way to assign our method to the onFinished event
    tween.SetOnFinished(MenuManager.LoadGameScene);

Great. If you hit Unity's play button and click on our main menu's Play button, you'll see that our Game scene is loaded as soon as the tween has finished!
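For reference, here is a sketch of the full DisappearOnClick.cs with everything from this section in place; only the class wrapper and using directive are additions:

    using UnityEngine;

    public class DisappearOnClick : MonoBehaviour
    {
        // Declare the target we'll tween down to {0.01, 0.01, 0.01}
        public GameObject target;
        // Declare a float to configure the tween's duration
        public float duration = 0.3f;

        // When this object is clicked
        private void OnClick()
        {
            // Create a tween on the target and keep a reference to it
            UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f);
            // Smooth acceleration and deceleration
            tween.method = UITweener.Method.EaseInOut;
            // Load the Game scene once the tween has finished
            tween.SetOnFinished(MenuManager.LoadGameScene);
        }
    }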
It is possible to remove the link between an existing event delegate and a callback by calling EventDelegate.Remove(eventDelegate, callback);. Now, let's see how to link an event delegate to a callback using the Inspector view.

Inspector

Now that we have seen how to set event delegates through code, let's see how we can create a variable that lets us choose which method to call within the Inspector view. The method to call when the target disappears can then be set at any time without editing the code.

The On Disappear variable is of the type EventDelegate. We can declare it right now with the following line as a global variable of our DisappearOnClick.cs script:

    // Declare an event delegate variable to be set in Inspector
    public EventDelegate onDisappear;

Now, let's change the OnClick() method's last line to make sure the tween's onFinished event calls the defined onDisappear callback:

    // Set the tween's onFinished event to the selected callback
    tween.SetOnFinished(onDisappear);

Ok. Great. Save the script and go to Unity. Select our main menu's Play button: a new On Disappear field has appeared. Drag UI Root—which holds our MenuManager.cs script—into the Notify field. Now, try to select our MenuManager | LoadGameScene method. Surprisingly, it doesn't appear, and you can only select the script's Exit method… why is that?

That is simply because our LoadGameScene() method is currently static. If we want it to be available in the Inspector view, we need to remove its static property:

1. Open our MenuManager.cs script.
2. Remove the static keyword from our LoadGameScene() method.
3. Save the script and return to Unity.

You can now select it in the drop-down list. Great! We have set our callback through the Inspector view; the Game scene will be loaded when the menu disappears.

Now that we have learned how to assign event delegates to callback methods through code and the Inspector view, let's see how to assign keyboard keys to user interface elements.

Keyboard keys

In this section, we'll see how to add keyboard control to our UI. First, we'll see how to bind keys to buttons, and then we'll add a navigation system using the keyboard arrows.

UIKey binding

The UIKey Binding component assigns a specific key to the widget it's attached to. We'll use it now to assign the keyboard's Escape key to our menu's Exit button:

1. Select our UI Root | Main | Buttons | Exit GameObject.
2. Click the Add Component button in the Inspector view.
3. Type key with your keyboard to search for components.
4. Select Key Binding and hit Enter or click on it with your mouse.

Let's see its available parameters.

Parameters

The newly attached UIKey Binding component has three parameters:

- Key Code: Which key would you like to bind to an action?
- Modifier: If you want a two-button combination, select one of the four available modifiers: Shift, Control, Alt, or None.
- Action: Which action should we bind to this key? You can simulate a button click with PressAndClick, a selection with Select, or both with All.

Ok, now we'll configure it to see how it works.

Configuration

Simply set the Key Code field to Escape. Now, hit Unity's play button. When you hit the Escape key on your keyboard, it reacts as if the Exit button was pressed! We can now move on to see how to add keyboard and controller navigation to the UI.
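Before we do, a side note: the Inspector isn't the only way to set this up. The following is a hedged sketch of adding the same binding from code, assuming NGUI's UIKeyBinding component exposes the keyCode and action fields shown in the Inspector above:

    // Attach a key binding to the Exit button at runtime (sketch).
    // exitButton is assumed to be a reference to the Exit GameObject.
    UIKeyBinding binding = exitButton.AddComponent<UIKeyBinding>();
    binding.keyCode = KeyCode.Escape;                   // the key to bind
    binding.action = UIKeyBinding.Action.PressAndClick; // simulate a button click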
UIKey navigation

The UIKey Navigation component helps us assign objects to select using the keyboard arrows or a controller's directional pad. For most widgets, the automatic configuration is enough, but in some cases we'll need to use the override parameters to get the behavior we need.

The nickname input field has neither the UIButton nor the UIButton Scale component attached to it. This means that there will be no feedback to show the user that it's currently selected with the keyboard navigation, which is a problem. We can correct this right now. Select UI Root | Options | Nickname | Input, and then:

1. Add the Button component (UIButton) to it.
2. Add the Button Scale component (UIButton Scale) to it.
3. Center the Pivot of UISprite (middle bar + middle bar).
4. Reset the Center of Box Collider to {0, 0, 0}.

Ok. We'll now add the Key Navigation component (UIKey Navigation) to most of the buttons in the scene. In order to do that, type t:uibutton in the Hierarchy view's search bar to display only GameObjects with the UIButton component attached to them. With the preceding search filter, select the buttons found by the search, and follow these steps:

1. Click the Add Component button in the Inspector view.
2. Type key with your keyboard to search for components.
3. Select Key Navigation and hit Enter or click on it with your mouse.

We've added the UIKey Navigation component to our selection. Let's see its parameters.

Parameters

The newly attached UIKey Navigation component has four parameter groups:

- Starts Selected: Is this widget selected by default at the start?
- Select on Click: Which widget should be selected when this widget is clicked on — or when the Enter key/confirm button has been pressed? This option can be used to select a specific widget when a new page is displayed.
- Constraint: Use this to limit the navigation movement from this widget:
  - None: The movement is free from this widget
  - Vertical: From this widget, you can only go up or down
  - Horizontal: From this widget, you can only move left or right
  - Explicit: Only move to the widgets specified in the Override group
- Override: Use the Left, Right, Up, and Down fields to force the input to select the specified objects. If the Constraint parameter is set to Explicit, only the widgets specified here can be selected. Otherwise, automatic configuration still works for any fields left to None.

Summary

This article has given an introduction to C# scripting with NGUI in Unity, covering events, tooltips, tweens, event delegates, and keyboard control.

Resources for Article:

Further resources on this subject:

- Unity Networking – The Pong Game [article]
- Unit and Functional Tests [article]
- Components in Unity [article]

Evolution of Hadoop

Packt
29 Dec 2014
12 min read
In this article by Sandeep Karanth, author of the book Mastering Hadoop, we will look at Hadoop's timeline, Hadoop 2.X, and Hadoop YARN.

Hadoop's timeline

The following figure gives a timeline view of the major releases and milestones of Apache Hadoop. The project has been around for 8 years, but the last 4 years have seen Hadoop make giant strides in big data processing. In January 2010, Google was awarded a patent for the MapReduce technology. This technology was licensed to the Apache Software Foundation 4 months later, a shot in the arm for Hadoop. With legal complications out of the way, enterprises—small, medium, and large—were ready to embrace Hadoop. Since then, Hadoop has come up with a number of major enhancements and releases. It has given rise to businesses selling Hadoop distributions, support, training, and other services.

Hadoop 1.0 releases, referred to as 1.X in this book, saw the inception and evolution of Hadoop as a pure MapReduce job-processing framework. It has exceeded expectations with a wide adoption of massive data processing. The stable 1.X release at this point of time is 1.2.1, which includes features such as append and security. Hadoop 1.X tried to stay flexible by making changes, such as HDFS append, to support online systems such as HBase. Meanwhile, big data applications evolved in range beyond MapReduce computation models. The flexibility of Hadoop 1.X releases had been stretched; it was no longer possible to widen its net to cater to the variety of applications without architectural changes.

Hadoop 2.0 releases, referred to as 2.X in this book, came into existence in 2013. This release family has major changes that widen the range of applications Hadoop can solve. These releases can even increase the efficiency and mileage derived from existing Hadoop clusters in enterprises. Clearly, Hadoop is moving fast beyond MapReduce to stay the leader in massive-scale data processing, with the challenge of remaining backward compatible. It is becoming a generic cluster-computing and storage platform rather than only a MapReduce-specific framework.

Hadoop 2.X

The extensive success of Hadoop 1.X in organizations also led to an understanding of its limitations, which are as follows:

- Hadoop gives unprecedented access to cluster computational resources to every individual in an organization. The MapReduce programming model is simple and supports a develop once, deploy at any scale paradigm. This leads to users exploiting Hadoop for data processing jobs where MapReduce is not a good fit, for example, web servers being deployed in long-running map jobs.
- MapReduce is not known to be friendly to iterative algorithms. Hacks were developed to make Hadoop run iterative algorithms, and these hacks posed severe challenges to cluster resource utilization and capacity planning.
- Hadoop 1.X has centralized job flow control. Centralized systems are hard to scale as they are the single point of load lifting. A JobTracker failure means that all the jobs in the system have to be restarted, exerting extreme pressure on a centralized component. Integration of Hadoop with other kinds of clusters is difficult with this model.
- The early releases in Hadoop 1.X had a single NameNode that stored all the metadata about the HDFS directories and files. The data on the entire cluster hinged on this single point of failure. Subsequent releases had a cold standby in the form of a secondary NameNode.
The secondary NameNode merged the edit logs and NameNode image files periodically, bringing two benefits. One, the primary NameNode's startup time was reduced, as the NameNode did not have to do the entire merge on startup. Two, the secondary NameNode acted as a replica that could minimize data loss on NameNode disasters. However, the secondary NameNode (which is not a backup node for the NameNode) was still not a hot standby, leading to high failover and recovery times and affecting cluster availability.

- Hadoop 1.X is mainly a Unix-based massive data processing framework. Native support on machines running Microsoft Windows Server is not possible. With Microsoft entering cloud computing and big data analytics in a big way, coupled with existing heavy Windows Server investments in the industry, it's very important for Hadoop to enter the Microsoft Windows landscape as well.
- Hadoop's success comes mainly from enterprise play. Adoption of Hadoop mainly comes from the availability of enterprise features. Though Hadoop 1.X tries to support some of them, such as security, there is a list of other features that are badly needed by the enterprise.

Yet Another Resource Negotiator (YARN)

In Hadoop 1.X, resource allocation and job execution were the responsibilities of JobTracker. Since the computing model was closely tied to the resources in the cluster, MapReduce was the only supported model. This tight coupling led to developers force-fitting other paradigms, leading to unintended uses of MapReduce.

The primary goal of YARN is to separate concerns relating to resource management and application execution. By separating these functions, other application paradigms can be added onboard a Hadoop computing cluster. Improvements in interoperability and support for diverse applications lead to efficient and effective utilization of resources. This also integrates well with the existing infrastructure in an enterprise.

Achieving loose coupling between resource management and job management should not come at the cost of backward compatibility. For almost 6 years, Hadoop has been the leading software to crunch massive datasets in a parallel and distributed fashion. This means huge investments in development, testing, and deployment were already in place. YARN maintains backward compatibility with Hadoop 1.X (hadoop-0.20.205+) APIs. An older MapReduce program can continue execution in YARN with no code changes; however, recompiling the older code is mandatory.

Architecture overview

The following figure lays out the architecture of YARN. YARN abstracts out resource management functions to a platform layer called the ResourceManager (RM). There is a per-cluster RM that primarily keeps track of cluster resource usage and activity. It is also responsible for the allocation of resources and for resolving contentions among resource seekers in the cluster. The RM uses a generalized resource model and is agnostic to application-specific resource needs. For example, the RM need not know the resources corresponding to a single Map or Reduce slot.

Planning and executing a single job is the responsibility of the ApplicationMaster (AM). There is an AM instance per running application; for example, there is an AM for each MapReduce job. It has to request resources from the RM, use them to execute the job, and work around failures, if any.

The general cluster layout has the RM running as a daemon on a dedicated machine with a global view of the cluster and its resources.
Being a global entity, the RM can ensure fairness depending on the utilization of cluster resources. When requested for resources, the RM allocates them dynamically as a node-specific bundle called a container. For example, 2 CPUs and 4 GB of RAM on a particular node can be specified as a container.

Every node in the cluster runs a daemon called the NodeManager (NM). The RM uses the NM as its node-local assistant. NMs are used for container management functions, such as starting and releasing containers, tracking local resource usage, and fault reporting. NMs send heartbeats to the RM, and the RM's view of the system is the aggregate of the views reported by each NM.

Jobs are submitted directly to the RM and, based on resource availability, are scheduled to run by the RM. The metadata of the jobs is stored in persistent storage to recover from RM crashes. When a job is scheduled, the RM allocates a container for the AM of the job on a node in the cluster.

The AM then takes over orchestrating the specifics of the job. These specifics include requesting resources, managing task execution, optimizations, and handling task or job failures. The AM can be written in any language, and different versions of AMs can execute independently on a cluster.

An AM resource request contains specifications about the locality and the kind of resource expected by it. The RM puts in its best effort to satisfy the AM's needs based on policies and the availability of resources. When a container is available for use by the AM, it can launch application-specific code in this container. The container is free to communicate with its AM; the RM is agnostic to this communication.

Storage layer enhancements

A number of storage layer enhancements were undertaken in the Hadoop 2.X releases. The number one goal of these enhancements was to make Hadoop enterprise ready.

High availability

The NameNode is a directory service for Hadoop and contains metadata pertaining to the files within cluster storage. Hadoop 1.X had a secondary NameNode, a cold standby that needed minutes to come up. Hadoop 2.X provides features to have a hot standby for the NameNode. On the failure of an active NameNode, the standby can take over in a matter of minutes. There is no data loss or loss of NameNode service availability. With hot standbys, automated failover becomes easier too.

The key to keeping the standby hot is to keep its data as current as possible with respect to the active NameNode. This is achieved by reading the edit logs of the active NameNode and applying them onto itself with very low latency. The sharing of edit logs can be done using one of the following two methods:

- A shared NFS storage directory between the active and standby NameNodes: the active NameNode writes the logs to the shared location, and the standby monitors the shared directory and pulls in the changes.
- A quorum of Journal Nodes: the active NameNode presents its edits to a subset of journal daemons that record this information. The standby node constantly monitors these journal daemons for updates and syncs the state with itself.

The following figure shows the high availability architecture using a quorum of Journal Nodes. The DataNodes themselves send block reports directly to both the active and standby NameNodes.

Zookeeper or any other high availability monitoring service can be used to track NameNode failures. With the assistance of Zookeeper, failover procedures to promote the hot standby to the active NameNode can be triggered.
HDFS Federation

Similar to what YARN did to Hadoop's computation layer, a more generalized storage model has been implemented in Hadoop 2.X. The block storage layer has been generalized and separated out from the filesystem layer. This separation has given an opening for other storage services to be integrated into a Hadoop cluster. Previously, HDFS and the block storage layer were tightly coupled.

One use case that has come forth from this generalized storage model is HDFS Federation. Federation allows multiple HDFS namespaces to use the same underlying storage. Federated NameNodes provide isolation at the filesystem level.

HDFS snapshots

Snapshots are point-in-time, read-only images of the entire filesystem or a particular subset of it. Snapshots are taken for three general reasons:

- Protection against user errors
- Backup
- Disaster recovery

Snapshotting is implemented only on the NameNode. It does not involve copying data from the DataNodes; it is a persistent copy of the block list and file size. The process of taking a snapshot is almost instantaneous and does not affect the performance of the NameNode.

Other enhancements

There are a number of other enhancements in Hadoop 2.X, which are as follows:

- The wire protocol for RPCs within Hadoop is now based on Protocol Buffers. Previously, Java serialization via Writables was used. This improvement not only eases maintaining backward compatibility, but also aids in rolling upgrades of different cluster components. RPCs allow for client-side retries as well.
- HDFS in Hadoop 1.X was agnostic about the type of storage being used. Mechanical and SSD drives were treated uniformly, and the user did not have any control over data placement. Hadoop 2.X releases in 2014 are aware of the type of storage and expose this information to applications as well. Applications can use this to optimize their data fetch and placement strategies.
- HDFS append support has been brought into Hadoop 2.X.
- HDFS access in Hadoop 1.X releases was through HDFS clients. In Hadoop 2.X, support for NFSv3 has been brought into the NFS gateway component. Clients can now mount HDFS onto their compatible local filesystem, allowing them to download and upload files directly to and from HDFS. Appends to files are allowed, but random writes are not.
- A number of I/O improvements have been brought into Hadoop. For example, in Hadoop 1.X, clients collocated with DataNodes had to read data via TCP sockets. However, with short-circuit local reads, clients can read directly off the DataNodes. This particular interface also supports zero-copy reads. The CRC checksum that is calculated for reads and writes of data has been optimized using the Intel SSE4.2 CRC32 instruction.

Support enhancements

Hadoop is also widening its application net by supporting other platforms and frameworks. One dimension we saw was the onboarding of other computational models with YARN and of other storage systems with the block storage layer. The other enhancements are as follows:

- Hadoop 2.X supports Microsoft Windows natively. This translates to a huge opportunity to penetrate the Microsoft Windows server market for massive data processing. This was partially possible because of the use of the highly portable Java programming language for Hadoop development. The other critical enhancement was the generalization of compute and storage management to include Microsoft Windows.
- As part of Platform-as-a-Service offerings, cloud vendors give out on-demand Hadoop as a service.
OpenStack support in Hadoop 2.X makes it conducive for deployment in elastic and virtualized cloud environments.

Summary

In this article, we saw the evolution of Hadoop and some of its milestones and releases. We went into depth on Hadoop 2.X and the changes it brings to Hadoop. The key takeaways from this article are:

- In over 6 years of its existence, Hadoop has become the number one choice as a framework for massively parallel and distributed computing.
- The community has been shaping Hadoop to gear up for enterprise use. In the 1.X releases, HDFS append and security were the key features that made Hadoop enterprise-friendly.
- Hadoop's storage layer was enhanced in 2.X to separate the filesystem from the block storage service. This enables features such as supporting multiple namespaces and integration with other filesystems.
- 2.X brings improvements in Hadoop storage availability and snapshotting.

Resources for Article:

Further resources on this subject:

- Securing the Hadoop Ecosystem [article]
- Sizing and Configuring your Hadoop Cluster [article]
- HDFS and MapReduce [article]

Creating a Map

Packt
29 Dec 2014
11 min read
In this article by Thomas Newton and Oscar Villarreal, authors of the book Learning D3.js Mapping, we will cover the following topics through a series of experiments:

- Foundation – creating your basic map
- Experiment 1 – adjusting the bounding box
- Experiment 2 – creating choropleths
- Experiment 3 – adding click events to our visualization

Foundation – creating your basic map

In this section, we will walk through the basics of creating a standard map. Let's walk through the code to get a step-by-step explanation of how to create this map.

The width and height can be anything you want. Depending on where your map will be visualized (cellphones, tablets, or desktops), you might want to consider providing a different width and height:

    var height = 600;
    var width = 900;

The next variable defines a projection algorithm that allows you to go from a cartographic space (latitude and longitude) to a Cartesian space (x, y)—basically a mapping of latitude and longitude to coordinates. You can think of a projection as a way to map the three-dimensional globe to a flat plane. There are many kinds of projections, but geo.mercator is normally the default value you will use:

    var projection = d3.geo.mercator();
    var mexico = void 0;

If you were making a map of the USA, you could use a better projection called albersUsa, which better positions Alaska and Hawaii. With a geo.mercator projection, Alaska would render proportionate to its size, rivaling that of the entire US. The albersUsa projection grabs Alaska, makes it smaller, and puts it at the bottom of the visualization. The D3 library currently contains nine built-in projection algorithms. An overview of each one can be viewed at https://github.com/mbostock/d3/wiki/Geo-Projections.

Next, we will assign the projection to our geo.path function. This is a special D3 function that will map the JSON-formatted geographic data into SVG paths. The data format that the geo.path function requires is named GeoJSON:

    var path = d3.geo.path().projection(projection);
    var svg = d3.select("#map")
        .append("svg")
        .attr("width", width)
        .attr("height", height);

Including the dataset

The necessary data has been provided for you within the data folder with the filename geo-data.json:

    d3.json('geo-data.json', function(data) {
        console.log('mexico', data);

We get the data from an AJAX call. After the data has been collected, we want to draw only those parts of the data that we are interested in. In addition, we want to automatically scale the map to fit the defined height and width of our visualization.

If you look at the console, you'll see that "mexico" has an objects property. Nested inside the objects property is MEX_adm1. This stands for the administrative areas of Mexico. It is important to understand the geographic data you are using, because other data sources might have different names for the administrative areas property. Notice that the MEX_adm1 property contains a geometries array with 32 elements. Each of these elements represents a state in Mexico. Use this data to draw the D3 visualization:

    var states = topojson.feature(data, data.objects.MEX_adm1);

Here, we pass all of the administrative areas to the topojson.feature function in order to extract and create an array of GeoJSON objects. The preceding states variable now contains the features property.
This features array is a list of 32 GeoJSON elements, each representing the geographic boundaries of a state in Mexico. We will set an initial scale and translation of 1 and 0,0 respectively:

    // Setup the scale and translate
    projection.scale(1).translate([0, 0]);

The next algorithm is quite useful. The bounds function returns a two-dimensional array of min/max coordinates, inclusive of the geographic data passed:

    var b = path.bounds(states);

To quote the D3 documentation: "The bounding box is represented by a two-dimensional array: [[left, bottom], [right, top]], where left is the minimum longitude, bottom is the minimum latitude, right is maximum longitude, and top is the maximum latitude."

This is very helpful if you want to programmatically set the scale and translation of the map. In this case, we want the entire country to fit in our height and width, so we determine the bounding box of every state in the country of Mexico.

The scale is calculated by taking the longest geographic edge of our bounding box and dividing it by the number of pixels of this edge in the visualization:

    var s = .95 / Math.max((b[1][0] - b[0][0]) / width, (b[1][1] - b[0][1]) / height);

This can be calculated by first computing the scale of the width, then the scale of the height, and finally taking the larger of the two. All of the logic is compressed into the single line given earlier. The value .95 adjusts the scale, because we are giving the map a bit of a breather on the edges in order to not have the paths intersect the edges of the SVG container, basically reducing the scale by 5 percent. Now, we have an accurate scale of our map, given our set width and height:

    var t = [(width - s * (b[1][0] + b[0][0])) / 2, (height - s * (b[1][1] + b[0][1])) / 2];

When we scale in SVG, it scales all the attributes (even x and y). In order to return the map to the center of the screen, we will use the translate function. The translate function receives an array with two parameters: the amount to translate in x, and the amount to translate in y. We will calculate x by finding the center, (topRight – topLeft)/2, multiplying it by the scale, and subtracting the result from the width of the SVG element. Our y translation is calculated similarly, but using the (bottomRight – bottomLeft)/2 value, multiplied by the scale, then subtracted from the height.

Finally, we will reset the projection to use our new scale and translation:

    projection.scale(s).translate(t);

Here, we will create a map variable that will group all of the following SVG elements into a <g> SVG tag. This will allow us to apply styles and better contain all of the proceeding path elements:

    var map = svg.append('g').attr('class', 'boundary');

Finally, we are back to the classic D3 enter, update, and exit pattern. We have our data, the list of Mexico's states, and we will join this data to the path SVG element:

    mexico = map.selectAll('path').data(states.features);

    // Enter
    mexico.enter()
        .append('path')
        .attr('d', path);

The enter section and the corresponding path functions are executed on every data element in the array. As a refresher, each element in the array represents a state in Mexico. The path function has been set up to correctly draw the outline of each state as well as scale and translate it to fit in our SVG container. Congratulations! You have created your first map!
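Before experimenting, it may help to run the scale and translate math above with concrete numbers. The bounding box values below are illustrative placeholders, not Mexico's actual projected bounds:

    // Suppose path.bounds(states) returned this projected box for our 900x600 SVG:
    var b = [[100, 50], [500, 350]];           // [[left, bottom], [right, top]]
    var s = .95 / Math.max((500 - 100) / 900,  // width ratio  = 0.444...
                           (350 - 50) / 600);  // height ratio = 0.5 (the larger)
    // s = .95 / 0.5 = 1.9
    var t = [(900 - 1.9 * (500 + 100)) / 2,    // x translation = -120
             (600 - 1.9 * (350 + 50)) / 2];    // y translation = -80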
Experiment 1 – adjusting the bounding box

Now that we have our foundation, let's start with our first experiment. For this experiment, we will manually zoom in to a state of Mexico using what we learned in the previous section. We will modify one line of code:

    var b = path.bounds(states.features[5]);

Here, we are telling the calculation to create a boundary based on the sixth element of the features array instead of every state in the country of Mexico. The boundary data will now run through the rest of the scaling and translation algorithms to adjust the map. We have basically reduced the min/max of the bounding box to include the geographic coordinates of one state in Mexico, and D3 has scaled and translated this information for us automatically.

This can be very useful in situations where you might not have the data that you need in isolation from the surrounding areas. You can always zoom in to your geography of interest and isolate it from the rest.

Experiment 2 – creating choropleths

One of the most common uses of D3.js maps is to make choropleths. This visualization gives you the ability to discern between regions by giving them different colors. Normally, this color is associated with some other value, for instance, levels of influenza or a company's sales. Choropleths are very easy to make in D3.js. In this experiment, we will create a quick choropleth based on the index value of the state in the array of all the states. We will only need to modify two lines of code in the update section of our D3 code. Right after the enter section, add the following two lines:

    // Update
    var color = d3.scale.linear().domain([0, 33]).range(['red', 'yellow']);
    mexico.attr('fill', function(d, i) { return color(i); });

The color variable uses another valuable D3 function named scale. Scales are extremely powerful when creating visualizations in D3; much more detail on scales can be found at https://github.com/mbostock/d3/wiki/Scales.

For now, let's describe what this scale defines. Here, we created a new function called color. This color function looks for any number between 0 and 33 in an input domain. D3 linearly maps these input values to a color between red and yellow in the output range; D3 includes the capability to automatically map colors in a linear range to a gradient. This means that executing the new function color with 0 will return the color red, color(15) will return an orange color, and color(33) will return yellow.

Now, in the update section, we set the fill property of the path to the new color function. This provides a linear scale of colors and uses the index value i to determine which color should be returned. If the color were determined by a different value of the datum, for instance d.sales, then you would have a choropleth where the colors actually represent sales.

Experiment 3 – adding click events to our visualization

We've seen how to make a map and set different colors to the different regions of this map. Next, we will add a little bit of interactivity. This will illustrate a simple way to bind click events to maps. First, we need a quick reference to each state in the country.
To accomplish this, we will create a new function called geoID, right below the mexico variable:

    var height = 600;
    var width = 900;
    var projection = d3.geo.mercator();
    var mexico = void 0;

    var geoID = function(d) {
        return "c" + d.properties.ID_1;
    };

This function takes in a state data element and generates a new selectable ID based on the ID_1 property found in the data. The ID_1 property contains a unique numeric value for every state in the array. If we insert this as an id attribute into the DOM, we create a quick and easy way to select each state in the country.

Following the geoID function, create another function called click:

    var click = function(d) {
        mexico.attr('fill-opacity', 0.2); // Another update!
        d3.select('#' + geoID(d)).attr('fill-opacity', 1);
    };

This method makes it easy to separate what the click is doing. The click method receives the datum and changes the fill opacity of all the states to 0.2. This is done so that when you click on one state and then on another, the previous state does not keep the clicked style. Notice that this call iterates through all the elements of the DOM, using the D3 update pattern. After making all the states transparent, we set a fill-opacity of 1 for the clicked item. This removes all the transparent styling from the selected state. Notice that we are reusing the geoID function that we created earlier to quickly find the state element in the DOM.

Next, let's update the enter method to bind our new click method to every new DOM element that enter appends:

    // Enter
    mexico.enter()
        .append('path')
        .attr('d', path)
        .attr('id', geoID)
        .on("click", click);

We also added an attribute called id; this inserts the result of the geoID function into the id attribute. Again, this makes it very easy to find the clicked state. Check it out and make sure that you click on any of the states: you will see its color turn a little brighter than the surrounding states.

Summary

You learned how to build many different kinds of maps that cover different kinds of needs. Choropleths and data visualizations on maps are some of the most common geographic-based data representations that you will come across.

Resources for Article:

Further resources on this subject:

- Using Canvas and D3 [article]
- Interacting with your Visualization [article]
- Simple graphs with d3.js [article]

Heads up to MvvmCross

Packt
29 Dec 2014
33 min read
In this article, by Mark Reynolds, author of the book Xamarin Essentials, we will take the next step and look at how the use of design patterns and frameworks can increase the amount of code that can be reused. We will cover the following topics:

- An introduction to MvvmCross
- The MVVM design pattern
- Core concepts: Views, ViewModels, and commands
- Data binding
- Navigation (ViewModel to ViewModel)
- The project organization
- The startup process
- Creating NationalParks.MvvmCross

Our approach will be to introduce the core concepts at a high level and then dive in and create the national parks sample app using MvvmCross. This will give you a basic understanding of how to use the framework and the value associated with its use. With that in mind, let's get started.

Introducing MvvmCross

MvvmCross is an open source framework that was created by Stuart Lodge. It is based on the Model-View-ViewModel (MVVM) design pattern and is designed to enhance code reuse across numerous platforms, including Xamarin.Android, Xamarin.iOS, Windows Phone, Windows Store, WPF, and Mac OS X. The MvvmCross project is hosted on GitHub and can be accessed at https://github.com/MvvmCross/MvvmCross.

The MVVM pattern

MVVM is a variation of the Model-View-Controller pattern. It separates logic traditionally placed in a View object into two distinct objects, one called View and the other called ViewModel. The View is responsible for providing the user interface, and the ViewModel is responsible for the presentation logic. The presentation logic includes transforming data from the Model into a form that is suitable for the user interface to work with, and mapping user interaction with the View into requests sent back to the Model.

While MVVM presents a more complex implementation model, there are significant benefits to it, which are as follows:

- ViewModels and their interactions with Models can generally be tested using frameworks (such as NUnit) much more easily than applications that combine the user interface and presentation layers
- ViewModels can generally be reused across different user interface technologies and platforms

These factors make the MVVM approach both flexible and powerful.

Views

Views in an MvvmCross app are implemented using platform-specific constructs. For iOS apps, Views are generally implemented as ViewControllers and XIB files. MvvmCross provides a set of base classes, such as MvxViewController, that iOS ViewControllers inherit from. Storyboards can also be used in conjunction with a custom presenter to create Views; we will briefly discuss this option in the section titled Implementing the iOS user interface later in this article. For Android apps, Views are generally implemented as an MvxActivity or MvxFragment along with their associated layout files.

ViewModels

ViewModels are classes that provide data and presentation logic to Views in an app. Data is exposed to a View as properties on a ViewModel, and logic that can be invoked from a View is exposed as commands. ViewModels inherit from the MvxViewModel base class.

Commands

Commands are used in ViewModels to expose logic that can be invoked from the View in response to user interactions. The command architecture is based on the ICommand interface used in a number of Microsoft frameworks, such as Windows Presentation Foundation (WPF) and Silverlight.
MvvmCross provides IMvxCommand, which is an extension of ICommand, along with an implementation named MvxCommand. The commands are generally defined as properties on a ViewModel. For example:

public IMvxCommand ParkSelected { get; protected set; }

Each command has an action method defined, which implements the logic to be invoked:

protected void ParkSelectedExec(NationalPark park)
{
    // logic goes here
}

The commands must be initialized and the corresponding action method should be assigned:

ParkSelected =
    new MvxCommand<NationalPark> (ParkSelectedExec);

Data binding

Data binding facilitates communication between the View and the ViewModel by establishing a two-way link that allows data to be exchanged. The data binding capabilities provided by MvvmCross are based on capabilities found in a number of Microsoft XAML-based UI frameworks such as WPF and Silverlight. The basic idea is that you would like to bind a property in a UI control, such as the Text property of an EditText control in an Android app, to a property of a data object, such as the Description property of NationalPark. The following diagram depicts this scenario:

The binding modes

There are four different binding modes that can be used for data binding:

OneWay binding: This mode tells the data binding framework to transfer values from the ViewModel to the View and transfer any updates to properties on the ViewModel to their bound View property.
OneWayToSource binding: This mode tells the data binding framework to transfer values from the View to the ViewModel and transfer updates to View properties to their bound ViewModel property.
TwoWay binding: This mode tells the data binding framework to transfer values in both directions between the ViewModel and View, and updates on either object will cause the other to be updated. This binding mode is useful when values are being edited.
OneTime binding: This mode tells the data binding framework to transfer values from the ViewModel to the View when the binding is established; in this mode, updates to ViewModel properties are not monitored by the View.

The INotifyPropertyChanged interface

The INotifyPropertyChanged interface is an integral part of making data binding work effectively; it acts as a contract between the source object and the target object. As the name implies, it defines a contract that allows the source object to notify the target object when data has changed, thus allowing the target to take any necessary actions, such as refreshing its display. The interface consists of a single event, the PropertyChanged event, that the target object can subscribe to and that is triggered by the source if a property changes. The following sample demonstrates how to implement INotifyPropertyChanged:

public class NationalPark : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler
        PropertyChanged;

    string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (value.Equals (_name,
                StringComparison.Ordinal))
            {
                // Nothing to do - the value hasn't changed
                return;
            }
            _name = value;
            OnPropertyChanged();
        }
    }
    . . .
    void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }
}

Binding specifications

Bindings can be specified in a couple of ways.
For Android apps, bindings can be specified in layout files. The following example demonstrates how to bind the Text property of a TextView instance to the Description property in a NationalPark instance:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:id="@+id/descrTextView"
    local:MvxBind="Text Park.Description" />

For iOS, binding must be accomplished using the binding API. CreateBinding() is a method that can be found on MvxViewController. The following example demonstrates how to bind the Description property to a UILabel instance:

this.CreateBinding (this.descriptionLabel).
    To ((DetailViewModel vm) => vm.Park.Description).
    Apply ();

Navigating between ViewModels

Navigating between the various screens within an app is an important capability. Within an MvvmCross app, this is implemented at the ViewModel level so that navigation logic can be reused. MvvmCross supports navigation between ViewModels through the use of the ShowViewModel<T>() method inherited from MvxNavigatingObject, which is the base class for MvxViewModel. The following example demonstrates how to navigate to DetailViewModel:

ShowViewModel<DetailViewModel>();

Passing parameters

In many situations, there is a need to pass information to the destination ViewModel. MvvmCross provides a number of ways to accomplish this. The primary method is to create a class that contains simple public properties and pass an instance of the class into ShowViewModel<T>(). The following example demonstrates how to define and use a parameters class during navigation:

public class DetailParams
{
    public int ParkId { get; set; }
}

// using the parameters class
ShowViewModel<DetailViewModel>(
    new DetailParams() { ParkId = 0 });

To receive and use parameters, the destination ViewModel implements an Init() method that accepts an instance of the parameters class:

public class DetailViewModel : MvxViewModel
{
    . . .
    public void Init(DetailParams parameters)
    {
        // use the parameters here . . .
    }
}

Solution/project organization

Each MvvmCross solution will have a single core PCL project that houses the reusable code and a series of platform-specific projects that contain the various apps. The following diagram depicts the general structure:

The startup process

MvvmCross apps generally follow a standard startup sequence that is initiated by platform-specific code within each app. There are several classes that collaborate to accomplish the startup; some of these classes reside in the core project and some of them reside in the platform-specific projects. The following sections describe the responsibilities of each of the classes involved.

App.cs

The core project has an App class that inherits from MvxApplication. The App class contains an override of the Initialize() method so that, at a minimum, it can register the first ViewModel that should be presented when the app starts:

RegisterAppStart<ViewModels.MasterViewModel>();

Setup.cs

The Android and iOS projects have a Setup class that is responsible for creating the App object from the core project during the startup. This is accomplished by overriding the CreateApp() method:

protected override IMvxApplication CreateApp()
{
    return new Core.App();
}

For Android apps, Setup inherits from MvxAndroidSetup. For iOS apps, Setup inherits from MvxTouchSetup.

The Android startup

Android apps are kicked off using a special Activity splash screen that calls the Setup class and initiates the MvvmCross startup process.
This is all done automatically for you; all you need to do is include the splash screen definition and make sure it is marked as the launch activity. The definition is as follows:

[Activity(
    Label="NationalParks.Droid",
    MainLauncher = true,
    Icon="@drawable/icon",
    Theme="@style/Theme.Splash",
    NoHistory=true,
    ScreenOrientation = ScreenOrientation.Portrait)]
public class SplashScreen : MvxSplashScreenActivity
{
    public SplashScreen() : base(Resource.Layout.SplashScreen)
    {
    }
}

The iOS startup

The iOS app startup is slightly less automated and is initiated from within the FinishedLaunching() method of AppDelegate:

public override bool FinishedLaunching (
    UIApplication app, NSDictionary options)
{
    _window = new UIWindow (UIScreen.MainScreen.Bounds);

    var setup = new Setup(this, _window);
    setup.Initialize();

    var startup = Mvx.Resolve<IMvxAppStart>();
    startup.Start();

    _window.MakeKeyAndVisible ();

    return true;
}

Creating NationalParks.MvvmCross

Now that we have basic knowledge of the MvvmCross framework, let's put that knowledge to work and convert the NationalParks app to leverage the capabilities we just learned.

Creating the MvvmCross core project

We will start by creating the core project. This project will contain all the code that will be shared between the iOS and Android apps, primarily in the form of ViewModels. The core project will be built as a Portable Class Library. To create NationalParks.Core, perform the following steps:

From the main menu, navigate to File | New Solution. From the New Solution dialog box, navigate to C# | Portable Library, enter NationalParks.Core for the project Name field, enter NationalParks.MvvmCross for the Solution field, and click on OK.
Add the MvvmCross starter package to the project from NuGet. Select the NationalParks.Core project and navigate to Project | Add Packages from the main menu. Enter MvvmCross starter in the search field. Select the MvvmCross – Hot Tuna Starter Pack entry and click on Add Package.

A number of things were added to NationalParks.Core as a result of adding the package, and they are as follows:

A packages.config file, which contains a list of libraries (dlls) associated with the MvvmCross starter kit package. These entries are links to the actual libraries in the Packages folder of the overall solution.
A ViewModels folder with a sample ViewModel named FirstViewModel.
An App class in App.cs, which contains an Initialize() method that starts the MvvmCross app by calling RegisterAppStart() to start FirstViewModel. We will eventually be changing this to start the MasterViewModel class, which will be associated with a View that lists national parks.

Creating the MvvmCross Android app

The next step is to create an Android app project in the same solution. To create NationalParks.Droid, complete the following steps:

Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog box, navigate to C# | Android | Android Application, enter NationalParks.Droid for the Name field, and click on OK.
Add the MvvmCross starter kit package to the new project by selecting NationalParks.Droid and navigating to Project | Add Packages from the main menu.

A number of things were added to NationalParks.Droid as a result of adding the package, which are as follows:

packages.config: This file contains a list of libraries (dlls) associated with the MvvmCross starter kit package.
These entries are links to the actual libraries in the Packages folder of the overall solution, which contains the downloaded libraries.
FirstView: This class is present in the Views folder and corresponds to FirstViewModel, which was created in NationalParks.Core.
FirstView: This layout is present in Resources/layout and is used by the FirstView activity. This is a traditional Android layout file, with the exception that it contains binding declarations in the EditText and TextView elements.
Setup: This class inherits from MvxAndroidSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart().
SplashScreen: This class inherits from MvxSplashScreenActivity. The SplashScreen class is marked as the main launcher activity and thus initializes the MvvmCross app with a call to Setup.Initialize().

Next, perform the following steps:

Add a reference to NationalParks.Core by selecting the References folder, right-clicking on it, selecting Edit References, selecting the Projects tab, checking NationalParks.Core, and clicking on OK.
Remove MainActivity.cs, as it is no longer needed and will create a build error; both it and the new SplashScreen class are marked as the main launcher. Also, remove the corresponding Resources/layout/Main.axml layout file.
Run the app. The app will present FirstViewModel, which is linked to the corresponding FirstView instance; its EditText control and TextView present the same Hello MvvmCross text. As you edit the text in the EditText control, the TextView is automatically updated by means of data binding. The following screenshot depicts what you should see:

Reusing NationalParks.PortableData and NationalParks.IO

Before we start creating the Views and ViewModels for our app, we first need to bring in some code from our previous efforts that can be used to maintain parks. For this, we will simply reuse the NationalParksData singleton and the FileHandler classes that were created previously. To reuse the NationalParksData singleton and FileHandler classes, complete the following steps:

Copy NationalParks.PortableData and NationalParks.IO from the solution created in Chapter 6, The Sharing Game, in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials) to the NationalParks.MvvmCross solution folder.
Add a reference to NationalParks.PortableData in the NationalParks.Droid project.
Create a folder named NationalParks.IO in the NationalParks.Droid project and add a link to FileHandler.cs from the NationalParks.IO project. Recall that the FileHandler class cannot be contained in the Portable Class Library because it uses file IO APIs that cannot be referenced from a Portable Class Library.
Compile the project. The project should compile cleanly now.

Implementing the INotifyPropertyChanged interface

We will be using data binding to bind UI controls to the NationalPark object, and thus we need to implement the INotifyPropertyChanged interface. This ensures that changes made to properties of a park are reported to the appropriate UI controls. To implement INotifyPropertyChanged, complete the following steps:

Open NationalPark.cs in the NationalParks.PortableData project.
Specify that the NationalPark class implements the INotifyPropertyChanged interface. Select the INotifyPropertyChanged interface, right-click on it, navigate to Refactor | Implement interface, and press Enter.
Enter the following code snippet:

public class NationalPark : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler
        PropertyChanged;
    . . .
}

Add an OnPropertyChanged() method that can be called from each property setter method:

void OnPropertyChanged(
    [CallerMemberName] string propertyName = null)
{
    var handler = PropertyChanged;
    if (handler != null)
    {
        handler(this,
            new PropertyChangedEventArgs(propertyName));
    }
}

Update each property definition so that its setter calls OnPropertyChanged() in the same way as depicted for the Name property:

string _name;
public string Name
{
    get { return _name; }
    set
    {
        if (value.Equals (_name, StringComparison.Ordinal))
        {
            // Nothing to do - the value hasn't changed
            return;
        }
        _name = value;
        OnPropertyChanged();
    }
}

Compile the project. The project should compile cleanly. We are now ready to use the NationalParksData singleton in our new project, and it supports data binding.

Implementing the Android user interface

Now, we are ready to create the Views and ViewModels required for our app. The app we are creating will follow this flow:

A master list view to view national parks
A detail view to view the details of a specific park
An edit view to edit a new or previously existing park

The process for creating Views and ViewModels in an Android app generally consists of three different steps:

Create a ViewModel in the core project with the data and event handlers (commands) required to support the View.
Create an Android layout with visual elements and data binding specifications.
Create an Android activity, which corresponds to the ViewModel and displays the layout.

In our case, this process will be slightly different because we will reuse some of our previous work, specifically the layout files and the menu definitions. To reuse the layout files and menu definitions, perform the following steps:

Copy Master.axml, Detail.axml, and Edit.axml from the Resources/layout folder of the solution created in Chapter 5, Developing Your First Android App with Xamarin.Android, in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials) to the Resources/layout folder in the NationalParks.Droid project, and add them to the project by selecting the layout folder and navigating to Add | Add Files.
Copy MasterMenu.xml, DetailMenu.xml, and EditMenu.xml from the Resources/menu folder of the same solution to the Resources/menu folder in the NationalParks.Droid project, and add them to the project by selecting the menu folder and navigating to Add | Add Files.

Implementing the master list view

We are now ready to implement the first of our View/ViewModel combinations, which is the master list view.

Creating MasterViewModel

The first step is to create a ViewModel and add a property that will provide data to the list view that displays national parks, along with some initialization code. To create MasterViewModel, complete the following steps:

Select the ViewModels folder in NationalParks.Core, right-click on it, and navigate to Add | New File.
In the New File dialog box, navigate to General | Empty Class, enter MasterViewModel for the Name field, and click on New.
Modify the class definition so that MasterViewModel inherits from MvxViewModel; you will also need to add a few using directives:

. . .
using Cirrious.CrossCore.Platform;
using Cirrious.MvvmCross.ViewModels;
. . .
namespace NationalParks.Core.ViewModels
{
    public class MasterViewModel : MvxViewModel
    {
        . . .
    }
}

Add a property that is a list of NationalPark elements to MasterViewModel. This property will later be data-bound to a list view:

private List<NationalPark> _parks;
public List<NationalPark> Parks
{
    get { return _parks; }
    set
    {
        _parks = value;
        RaisePropertyChanged(() => Parks);
    }
}

Override the Start() method on MasterViewModel to load the _parks collection with data from the NationalParksData singleton. You will need to add a using directive for the NationalParks.PortableData namespace:

. . .
using NationalParks.PortableData;
. . .
public async override void Start ()
{
    base.Start ();
    await NationalParksData.Instance.Load ();
    Parks = new List<NationalPark> (
        NationalParksData.Instance.Parks);
}

We now need to modify the app startup sequence so that MasterViewModel is the first ViewModel that's started. Open App.cs in NationalParks.Core and change the call to RegisterAppStart() to reference MasterViewModel:

RegisterAppStart<ViewModels.MasterViewModel>();

Updating the Master.axml layout

Update Master.axml so that it can leverage the data binding capabilities provided by MvvmCross. To update Master.axml, complete the following steps:

Open Master.axml and add a namespace definition for the NationalParks.Droid namespace to the top of the XML. This namespace definition is required in order to allow Android to resolve the MvvmCross-specific elements that will be specified.
Change the ListView element to a Mvx.MvxListView element:

<Mvx.MvxListView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/parksListView" />

Add a data binding specification to the MvxListView element, binding the ItemsSource property of the list view to the Parks property of MasterViewModel, as follows:

    . . .
    android:id="@+id/parksListView"
    local:MvxBind="ItemsSource Parks" />

Add a list item template attribute to the element definition. This layout controls the content of each item that will be displayed in the list view:

local:MvxItemTemplate="@layout/nationalparkitem"

Create the NationalParkItem layout and provide TextView elements to display both the name and description of a park, as follows:

<LinearLayout
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">
    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:textSize="40sp" />
    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:textSize="20sp" />
</LinearLayout>

Add data binding specifications to each of the TextView elements:

. . .
        local:MvxBind="Text Name" />
. . .
        local:MvxBind="Text Description" />
. . .

Note that in this case, the context for data binding is an instance of an item in the collection that was bound to MvxListView, for this example, an instance of NationalPark.

Creating the MasterView activity

Next, create MasterView, which is an MvxActivity instance that corresponds with MasterViewModel.
To create MasterView, complete the following steps:

Select the Views folder in NationalParks.Droid, right-click on it, and navigate to Add | New File. In the New File dialog, navigate to Android | Activity, enter MasterView in the Name field, and select New.
Modify the class specification so that it inherits from MvxActivity; you will also need to add a few using directives, as follows:

using Cirrious.MvvmCross.Droid.Views;
using NationalParks.Core.ViewModels;
. . .
namespace NationalParks.Droid.Views
{
    [Activity(Label = "Parks")]
    public class MasterView : MvxActivity
    {
        . . .
    }
}

Open Setup.cs and add code to the CreateApp() method to initialize the file handler and path for the NationalParksData singleton, as follows:

protected override IMvxApplication CreateApp()
{
    NationalParksData.Instance.FileHandler =
        new FileHandler ();
    NationalParksData.Instance.DataDir =
        System.Environment.GetFolderPath(
            System.Environment.SpecialFolder.MyDocuments);
    return new Core.App();
}

Compile and run the app; you will need to copy the NationalParks.json file to the device or emulator using the Android Device Monitor. All the parks in NationalParks.json should be displayed.

Implementing the detail view

Now that we have the master list view displaying national parks, we can focus on creating the detail view. We will follow the same steps for the detail view as the ones we just completed for the master view.

Creating DetailViewModel

We start creating DetailViewModel by using the following steps:

Following the same procedure as the one that was used to create MasterViewModel, create a new ViewModel named DetailViewModel in the ViewModels folder of NationalParks.Core.
Add a NationalPark property to support data binding for the view controls, as follows:

protected NationalPark _park;
public NationalPark Park
{
    get { return _park; }
    set
    {
        _park = value;
        RaisePropertyChanged(() => Park);
    }
}

Create a Parameters class that can be used to pass the ID of the park that should be displayed. It's convenient to create this class within the class definition of the ViewModel that the parameters are for:

public class DetailViewModel : MvxViewModel
{
    public class Parameters
    {
        public string ParkId { get; set; }
    }
    . . .

Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParksData:

public void Init(Parameters parameters)
{
    Park = NationalParksData.Instance.Parks.
        FirstOrDefault(x => x.Id == parameters.ParkId);
}

Updating the Detail.axml layout

Next, we will update the layout file. The main changes that need to be made are to add data binding specifications to the layout file. To update the Detail.axml layout, perform the following steps:

Open Detail.axml and add the project namespace to the XML file.
Add data binding specifications to each of the TextView elements that correspond to a national park property, as demonstrated for the park name:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:id="@+id/nameTextView"
    local:MvxBind="Text Park.Name" />

Creating the DetailView activity

Now, create the MvxActivity instance that will work with DetailViewModel. To create DetailView, perform the following steps:

Following the same procedure as the one that was used to create MasterView, create a new view named DetailView in the Views folder of NationalParks.Droid; a rough sketch of what the resulting class might look like is shown below.
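The following is a minimal sketch of the resulting activity, assuming the copied Detail.axml layout is inflated from the OnViewModelSet() override (the Label text is an arbitrary choice, not something mandated by the framework):

using Android.App;
using Cirrious.MvvmCross.Droid.Views;

namespace NationalParks.Droid.Views
{
    [Activity(Label = "Park Detail")]
    public class DetailView : MvxActivity
    {
        // Called once the ViewModel has been located and assigned;
        // a convenient place to inflate the layout
        protected override void OnViewModelSet()
        {
            base.OnViewModelSet();
            SetContentView(Resource.Layout.Detail);
        }
    }
}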
Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that our menus will be accessible. Copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game, in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Comment out the section in OnOptionsItemSelected() related to the Edit action for now; we will fill that in once the edit view is completed.

Adding navigation

The last step is to add navigation so that when an item is clicked on in MvxListView on MasterView, the park is displayed in the detail view. We will accomplish this using a command property and data binding. To add navigation, perform the following steps:

Open MasterViewModel and add an IMvxCommand property; this will be used to handle a park being selected:

public IMvxCommand ParkSelected { get; protected set; }

Create an Action delegate that will be called when the ParkSelected command is executed, as follows:

protected void ParkSelectedExec(NationalPark park)
{
    ShowViewModel<DetailViewModel> (
        new DetailViewModel.Parameters ()
            { ParkId = park.Id });
}

Initialize the command property in the constructor of MasterViewModel:

ParkSelected =
    new MvxCommand<NationalPark> (ParkSelectedExec);

Now, for the last step, add a data binding specification to MvxListView in Master.axml to bind the ItemClick event to the ParkSelected command on MasterViewModel, which we just created:

local:MvxBind="ItemsSource Parks; ItemClick ParkSelected"

Compile and run the app. Clicking on a park in the list view should now navigate to the detail view, displaying the selected park.

Implementing the edit view

We are now almost experts at implementing new Views and ViewModels. One last View to go: the edit view.

Creating EditViewModel

Like we did previously, we start with the ViewModel. To create EditViewModel, complete the following steps:

Following the same process that was used to create DetailViewModel earlier in this article, create EditViewModel, add a data binding property, and create a Parameters class for navigation.
Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParksData in the case of editing an existing park, or create a new instance if the user has chosen the New action. Inspect the parameters passed in to determine what the intent is:

public void Init(Parameters parameters)
{
    if (string.IsNullOrEmpty (parameters.ParkId))
        Park = new NationalPark ();
    else
        Park =
            NationalParksData.Instance.
            Parks.FirstOrDefault(
            x => x.Id == parameters.ParkId);
}

Updating the Edit.axml layout

Update Edit.axml to provide data binding specifications. To update the Edit.axml layout, you first need to open Edit.axml and add the project namespace to the XML file. Then, add data binding specifications to each of the EditText elements that correspond to a national park property.

Creating the EditView activity

Create a new MvxActivity instance named EditView that will work with EditViewModel. To create EditView, perform the following steps:

Following the same procedure as the one that was used to create DetailView, create a new View named EditView in the Views folder of NationalParks.Droid.
Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that the Done action will be accessible from the ActionBar; a hedged sketch of what these overrides might look like follows.
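This is a minimal sketch, assuming the copied EditMenu.xml resource and a menu item with the ID actionDone (both names are assumptions; adjust them to match the actual menu resource):

public override bool OnCreateOptionsMenu (IMenu menu)
{
    // Inflate the menu copied from the earlier chapter into the ActionBar
    MenuInflater.Inflate (Resource.Menu.EditMenu, menu);
    return base.OnCreateOptionsMenu (menu);
}

public override bool OnOptionsItemSelected (IMenuItem item)
{
    switch (item.ItemId)
    {
        case Resource.Id.actionDone:
            // Delegate to the Done command on EditViewModel
            ((EditViewModel)ViewModel).Done.Execute ();
            return true;
    }
    return base.OnOptionsItemSelected (item);
}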
You can copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game, in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Change the implementation of Done to call the Done command on EditViewModel.

Adding navigation

Add navigation in two places: when New (+) is clicked from MasterView and when Edit is clicked in DetailView. Let's start with MasterView. To add navigation from MasterViewModel, complete the following steps:

Open MasterViewModel.cs and add a NewParkClicked command property along with the handler for the command. Be sure to initialize the command in the constructor, as follows:

public IMvxCommand NewParkClicked { get; protected set; }

protected void NewParkClickedExec()
{
    ShowViewModel<EditViewModel> ();
}

Note that we do not pass a parameters class into ShowViewModel(). This will cause a default instance to be created and passed in, which means that ParkId will be null. We will use this as a way to determine whether a new park should be created.

Now, it's time to hook the NewParkClicked command up to the actionNew menu item. We do not have a way to accomplish this using data binding, so we will resort to a more traditional approach: the OnOptionsItemSelected() method. Add logic to invoke the Execute() method on NewParkClicked, as follows:

case Resource.Id.actionNew:
    ((MasterViewModel)ViewModel).
        NewParkClicked.Execute ();
    return true;

To add navigation from DetailViewModel, complete the following steps:

Open DetailViewModel.cs and add an EditPark command property along with the handler for the command, as shown in the following code snippet:

public IMvxCommand EditPark { get; protected set; }

protected void EditParkHandler()
{
    ShowViewModel<EditViewModel> (
        new EditViewModel.Parameters ()
            { ParkId = _park.Id });
}

Note that an instance of the Parameters class is created, initialized, and passed into the ShowViewModel() method. This instance will in turn be passed into the Init() method on EditViewModel.

Initialize the command property in the constructor of DetailViewModel, as follows:

EditPark =
    new MvxCommand (EditParkHandler);

Now, update the OnOptionsItemSelected() method in DetailView to invoke the EditPark command when the Edit action is selected:

case Resource.Id.actionEdit:
    ((DetailViewModel)ViewModel).EditPark.Execute ();
    return true;

Compile and run NationalParks.Droid. You should now have a fully functional app that has the ability to create new parks and edit existing parks. Changes made in EditView should automatically be reflected in MasterView and DetailView.

Creating the MvvmCross iOS app

The process of creating the Android app with MvvmCross provided a solid understanding of how the overall architecture works. Creating the iOS solution should be much easier, for two reasons: first, we understand how to interact with MvvmCross, and second, all the logic we placed in NationalParks.Core is reusable, so we just need to create the View portion of the app and the startup code. To create NationalParks.iOS, complete the following steps:

Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog, navigate to C# | iOS | iPhone | Single View Application, enter NationalParks.iOS in the Name field, and click on OK.
Add the MvvmCross starter kit package to the new project by selecting NationalParks.iOS and navigating to Project | Add Packages from the main menu.

A number of things were added to NationalParks.iOS as a result of adding the package. They are as follows:

packages.config: This file contains a list of libraries associated with the MvvmCross starter kit package. These entries are links to the actual libraries in the Packages folder of the overall solution, which contains the downloaded libraries.
FirstView: This class is placed in the Views folder and corresponds to the FirstViewModel instance created in NationalParks.Core.
Setup: This class inherits from MvxTouchSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart().
AppDelegate.cs.txt: This file contains the sample startup code, which should be placed in the actual AppDelegate.cs file.

Implementing the iOS user interface

We are now ready to create the user interface for the iOS app. The good news is that we already have all the ViewModels implemented, so we can simply reuse them. The bad news is that we cannot easily reuse the storyboards from our previous work; MvvmCross apps generally use XIB files. One of the reasons for this is that storyboards are intended to provide navigation capabilities, and an MvvmCross app delegates that responsibility to the ViewModels and the presenter. It is possible to use storyboards in combination with a custom presenter, but the remainder of this article will focus on using XIB files, as this is the more common use. The screen layouts are depicted in the following screenshot:

We are now ready to get started.

Implementing the master view

The first view we will work on is the master view. To implement the master view, complete the following steps:

Create a new ViewController class named MasterView by right-clicking on the Views folder of NationalParks.iOS and navigating to Add | New File | iOS | iPhone View Controller.
Open MasterView.xib and arrange the controls as seen in the screen layouts.
Add outlets for the controls.
Open MasterView.cs and add the following boilerplate logic to deal with layout constraints on iOS 7, as follows:

// ios7 layout
if (RespondsToSelector(new
    Selector("edgesForExtendedLayout")))
    EdgesForExtendedLayout = UIRectEdge.None;

Within the ViewDidLoad() method, add logic to create an MvxStandardTableViewSource for parksTableView:

MvxStandardTableViewSource _source;
. . .
_source = new MvxStandardTableViewSource(
    parksTableView,
    UITableViewCellStyle.Subtitle,
    new NSString("cell"),
    "TitleText Name; DetailText Description",
    0);
parksTableView.Source = _source;

Note that the example uses the Subtitle cell style and binds the national park name and description to the title and subtitle.
Add the binding logic to the ViewDidLoad() method as well. In the previous step, we bound properties of UITableViewCell to properties in the binding context. In this step, we need to set the binding context for the Parks property on MasterViewModel:

var set = this.CreateBindingSet<MasterView,
    MasterViewModel>();
set.Bind (_source).To (vm => vm.Parks);
set.Apply();

Compile and run the app. All the parks in NationalParks.json should be displayed.

Implementing the detail view

Now, implement the detail view using the following steps:

Create a new ViewController instance named DetailView.
Open DetailView.xib and arrange the controls as seen in the screen layouts. Add outlets for each of the controls.
Open DetailView.cs and add the binding logic to the ViewDidLoad() method:

this.CreateBinding (this.nameLabel).
    To ((DetailViewModel vm) => vm.Park.Name).Apply ();
this.CreateBinding (this.descriptionLabel).
    To ((DetailViewModel vm) => vm.Park.Description).
        Apply ();
this.CreateBinding (this.stateLabel).
    To ((DetailViewModel vm) => vm.Park.State).Apply ();
this.CreateBinding (this.countryLabel).
    To ((DetailViewModel vm) => vm.Park.Country).
        Apply ();
this.CreateBinding (this.latLabel).
    To ((DetailViewModel vm) => vm.Park.Latitude).
        Apply ();
this.CreateBinding (this.lonLabel).
    To ((DetailViewModel vm) => vm.Park.Longitude).
        Apply ();

Adding navigation

Add navigation from the master view so that when a park is selected, the detail view is displayed, showing the park. To add navigation, complete the following steps:

Open MasterView.cs, create an event handler named ParkSelected, and assign it to the SelectedItemChanged event on the MvxStandardTableViewSource instance that was created in the ViewDidLoad() method:

. . .
_source.SelectedItemChanged += ParkSelected;
. . .
protected void ParkSelected(object sender, EventArgs e)
{
    . . .
}

Within the event handler, invoke the ParkSelected command on MasterViewModel, passing in the selected park:

((MasterViewModel)ViewModel).ParkSelected.Execute (
    (NationalPark)_source.SelectedItem);

Compile and run NationalParks.iOS. Selecting a park in the list view should now navigate you to the detail view, displaying the selected park.

Implementing the edit view

We now need to implement the last of the Views for the iOS app, which is the edit view. To implement the edit view, complete the following steps:

Create a new ViewController instance named EditView.
Open EditView.xib and arrange the controls as in the layout screenshots. Add outlets for each of the edit controls.
Open EditView.cs and add the data binding logic to the ViewDidLoad() method. You should use the same approach to data binding as the approach used for the detail view.
Add an event handler named DoneClicked, and within the event handler, invoke the Done command on EditViewModel:

protected void DoneClicked (object sender, EventArgs e)
{
    ((EditViewModel)ViewModel).Done.Execute();
}

In ViewDidLoad(), add a UIBarButtonItem to NavigationItem for EditView, and assign the DoneClicked event handler to it, as follows:

NavigationItem.SetRightBarButtonItem(
    new UIBarButtonItem(UIBarButtonSystemItem.Done,
        DoneClicked), true);

Adding navigation

Add navigation in two places: when New (+) is clicked from the master view and when Edit is clicked in the detail view. Let's start with the master view. To add navigation to the master view, perform the following steps:

Open MasterView.cs and add an event handler named NewParkClicked. In the event handler, invoke the NewParkClicked command on MasterViewModel:

protected void NewParkClicked(object sender,
    EventArgs e)
{
    ((MasterViewModel)ViewModel).
        NewParkClicked.Execute ();
}

In ViewDidLoad(), add a UIBarButtonItem to NavigationItem for MasterView and assign the NewParkClicked event handler to it:

NavigationItem.SetRightBarButtonItem(
    new UIBarButtonItem(UIBarButtonSystemItem.Add,
        NewParkClicked), true);

To add navigation to the detail view, perform the following steps:

Open DetailView.cs and add an event handler named EditParkClicked.
In the event handler, invoke the EditPark command on DetailViewModel:

protected void EditParkClicked (object sender,
    EventArgs e)
{
    ((DetailViewModel)ViewModel).EditPark.Execute ();
}

In ViewDidLoad(), add a UIBarButtonItem to NavigationItem for DetailView, and assign the EditParkClicked event handler to it:

NavigationItem.SetRightBarButtonItem(
    new UIBarButtonItem(UIBarButtonSystemItem.Edit,
        EditParkClicked), true);

Refreshing the master view list

One last detail that needs to be taken care of is to refresh the UITableView control on MasterView when items have been changed on EditView. To refresh the master view list, perform the following steps:

Open MasterView.cs and call ReloadData() on parksTableView within the ViewDidAppear() method of MasterView:

public override void ViewDidAppear (bool animated)
{
    base.ViewDidAppear (animated);
    parksTableView.ReloadData();
}

Compile and run NationalParks.iOS. You should now have a fully functional app that has the ability to create new parks and edit existing parks. Changes made in EditView should automatically be reflected in MasterView and DetailView.

Considering the pros and cons

After completing our work, we now have the basis to make some fundamental observations. Let's start with the pros:

MvvmCross definitely increases the amount of code that can be reused across platforms. The ViewModels house the data required by the View, the logic required to obtain and transform the data in preparation for viewing, and the logic triggered by user interactions in the form of commands. In our sample app, the ViewModels were somewhat simple; however, the more complex the app, the more reuse will likely be gained.
As MvvmCross relies on the use of each platform's native UI frameworks, each app has a native look and feel, and we have a natural layer for implementing platform-specific logic when required.
The data binding capabilities of MvvmCross also eliminate a great deal of tedious code that would otherwise have to be written.

All of these positives are not necessarily free; let's look at some cons:

The first con is complexity; you have to learn another framework on top of Xamarin, Android, and iOS.
In some ways, MvvmCross forces you to align the way your apps work across platforms to achieve the most reuse. As the presentation logic is contained in the ViewModels, the Views are coerced into aligning with them. The more your UI deviates across platforms, the less likely it is that you can actually reuse ViewModels.

With these things in mind, I would definitely consider using MvvmCross for a cross-platform mobile project. Yes, you need to learn an additional framework, and yes, you will likely have to align the way some of the apps are laid out, but I think MvvmCross provides enough value and flexibility to make these issues workable. I'm a big fan of reuse, and MvvmCross definitely pushes reuse to the next level.

Summary

In this article, we reviewed the high-level concepts of MvvmCross and worked through a practical exercise in order to convert the national parks apps to use the MvvmCross framework and increase code reuse.

Resources for Article:

Further resources on this subject:
Kendo UI DataViz – Advance Charting [article]
The Kendo MVVM Framework [article]
Sharing with MvvmCross [article]


Your 3D World

Packt
26 Dec 2014
16 min read
In this article by Ciro Cardoso, author of the book Mastering Lumion 3D, we will cover the following topics:

The 3D models available
Placing content
Selecting different objects

This article is the intermediate point of our project because we will cover everything we need to know to fully master Lumion's models. The bullet points provide a reasonable idea of what we will see, and by the end of this article, you will be able to think ahead and improve your workflow with what Lumion provides.

(For more resources related to this topic, see here.)

Lumion models – a quick overview

You have to keep in mind that different versions of Lumion dictate which models are accessible. There is a substantial difference between Lumion and Lumion Pro, but even if you don't have Lumion Pro, there are some places where we can get free and paid 3D models.

Different categories and what we can find

Let's have a look at what is available, and for this, we have to open the Objects menu, which will give us access to eight libraries, but we only need five of them, as shown in the following screenshot:

Each button represents a library where we can find different categories. The following list gives an overview of what each library contains:

The Nature library: Inside this library, we can find several species of trees from Africa and Europe, as well as tropical trees, grass, plants, flowers, cacti, and rocks.
The Transport library: Here, we can find all forms of transport, from public transport to air balloons.
The Indoor library: This is an important library that should be checked before modeling anything for interiors. We have assorted objects, decoration items, electronics, appliances, food and drink, kitchen tools, interior lighting, taps, chairs and sofas, cabinets, tables, and utilities.
The People and animals library: Here, we have people from different ethnic groups, 2D people, and animals. Please keep in mind that in this library, we have five types of objects: idle, walking, static, 2D cutout, and silhouettes.
The Outdoor library: Here, we have elements to populate exterior scenes with objects found in normal daily life. Some of them can actually add interesting details to a scene, making it look more believable.

These are the libraries that we will use to start populating the scene, but we still have to point out some differences between them, because not every model is 3D and not every model is static or idle. This knowledge helps us understand the types on offer and choose the model that is appropriate for your scene.

Idle, animated, and other 3D models

As mentioned earlier, there are different types of models available in Lumion. We have 3D and 2D models and silhouettes, but the best way to comprehend the difference is by placing them in your scene and seeing how they behave. Let's start by clicking on the People and Animals button to activate this library, and then we have to select the Change object button to open the library, as shown in the following screenshot:

This button opens the library, and we can start by selecting the Men-3D tab and then selecting the first 3D model, called Man_African_0001_Idle. Select this model or any 3D model with the idle suffix, and we are back in the Build mode, where we have to click on the left mouse button to place the 3D model.
Repeat the same step for:

Man_African_0001_Walk under the Men-3D tab
Any model from the People – 2D – High Detail tab
Any model from the People – 3D – Silhouettes tab
Any model from the People – 2D – Silhouettes tab

The idea is to have something like this in your scene:

After placing these models, it is easy to understand the difference between each one. Perhaps the most significant aspect we have to point out is the different results we will get by using an idle or a walk model. Both of these 3D models are animated, as you can confirm while placing them, but a walk 3D model can later be animated to walk around the scene. On the other hand, the idle 3D model is static, but we still have some loop animations that give life to the 3D model. An additional aspect is that the 2D models permanently face the camera, and unfortunately, there is no way to switch off this behavior, but we can change the color of the 2D model if necessary.

So, now that we know what is available, what is the next step? Start placing models and populate the entire scene with life, but first let's have a look at some key points that will help you improve the workflow.

Placing and controlling 3D models in Lumion

Where do we start? Well, this is something entirely personal, although it is a good idea to start working with the bigger 3D models and then gradually move down in scale until the final 3D models are just minor details and touches that transform the scene into a professional project. If we focus our attention on only one section, the problem may be a lack of time to add the same quality to the other areas in the scene.

Placing a 3D model from Lumion's library

The process of placing a 3D model is simple, as we can see from the following composition of images:

We will start by clicking on the Objects menu and choosing the correct library. As shown in the previous screenshot, we selected the Nature library and, after that, clicked on the Change object button to open the Nature library. Once in the library, we have to navigate to the desired tab and click on the thumbnail to select the 3D model. Back in the Build mode, we have to click on the left mouse button to place the 3D model.

When placing a 3D model, Lumion recognizes surfaces and avoids any intersection between the 3D model and the surface. Sometimes, this feature can get in our way and make it difficult to place a 3D model. To bypass this problem, press and hold the G key, and then click on the left mouse button to place the 3D model on the terrain.

Great! We placed the first Lumion model in the scene. One model is placed, but how many more do we need to place? It depends on the project you have, and if it is something small like the example shown in this book, placing 3D models is not a big issue. However, when working on large projects, placing the 3D models can be a massive and repetitive task, but don't forget that Lumion is a user-friendly application and provides tools that help with repetitive tasks.

Placing multiple copies with one click

Some 3D models may require several copies to create a more believable look, such as trees, bushes, flowers, and other elements. Imagine that you had to place tons of copies of the same model one by one. As mentioned, Lumion has some shortcuts that will help you save time and thereby not lose patience. What do we have to do?
Before placing a 3D model, press and hold the Ctrl key, and then click on the left mouse button to place 10 copies of the selected 3D model, as shown in the following screenshot:

However, there is a slight downside to this technique, as you probably noticed from the previous screenshot. The disadvantage is that we don't have control over the area where the 3D models are scattered or the distance between them; some of the 3D models may intersect with each other. With smaller 3D models, this technique is useful because the 3D models tend not to be far from each other, and with bigger 3D models, we can use the Ctrl key to place the 10 copies and then adjust them accordingly.

Another shortcut that is worth keeping in mind is the Z key. If you press and hold the Z key while placing 3D models, each click places a 3D model with a different size. Consequently, a powerful combination is Ctrl + Z + the left mouse button to place 10 copies with different sizes, as shown in the following screenshot:

Why is this useful? This is a fair question. The best answer is to take some time to look away from the screen, look outside, and observe that we are surrounded by randomness. We could hardly find two trees with the same size and shape, even within the same species. Our eyes are a perfect mechanism to spot things that look repetitive. We don't have the time or the means to make each tree different from the others, but we can cheat. Cheating is OK in 3D because it helps us gain time to concentrate our attention on other areas that are equally important to accomplish the perfect still image or movie. One way of cheating is by using the same tree, but changing the rotation, scale, and color. With the combination of Ctrl + Z + the left mouse button, we have the opportunity to at least change the scale of each copy placed in the scene. As the previous screenshot shows, plants with varied scales, once arranged and placed in the correct locations, will look much more natural than copies that all share the same scale. However, how can we manipulate and control the 3D models placed in the scene?

Tweaking the 3D models

To place any model from Lumion's library, we have to use the left mouse button, and if, instead of releasing it, we hold the left mouse button and drag, it is possible to change the location where the 3D model is going to be placed, as shown in the following screenshot:

However, we need more control than this, and the previous screenshot also shows where we can find the tools to move, scale, rotate, and change the height of the 3D model. The tools and shortcuts we use for imported 3D models are precisely the same for Lumion's native 3D models. As a quick reminder, here is the list of shortcuts to tweak the position, scale, and rotation of the 3D model:

M: Move the 3D model
L: Scale the 3D model
R: Rotate the 3D model heading
P: Rotate the 3D model pitch
B: Rotate the 3D model bank
H: Change the height

However, there is an aspect that needs to be kept in mind at all times when tweaking and controlling the 3D models in a project. This is something crucial, and if you are new to Lumion, it is perfectly normal that on the first few tries, you will get frustrated because you cannot select the 3D model. This may sound annoying, but in truth, this is how Lumion helps us avoid becoming overwhelmed and confused when trying to select a 3D model, by providing narrow control over what is selectable.
With this in mind, the next section will show a few tricks and techniques that are useful with Lumion's models. This will help to improve the way we work with the 3D models and fully master this stage of the production.

The remarkable Context menu

We can call it remarkable because this menu gives us full control and shortcuts to rearrange the 3D models present in the scene. This menu is divided into two very distinct sections: the Selection and Transformation submenus, as shown in the following screenshot:

How can these menus be useful to your project? Let's start with the Selection submenu; when we select it, the following options appear:

However, just looking at these options doesn't help you understand how they can be so useful and powerful. Let's check how they work.

Selection – library

Working with the Nature library has some challenges, and one of them is trying to identify a tree or other plant that we have already placed in the scene. This can be really difficult, in particular when there are other models very similar to the one we are looking for. The Library... option found under the Selection submenu can help with this task and will make your life easier.

Locate the 3D model you want and, using the Context menu, click on the Selection submenu. Select the Library… option, and another two options appear. We need to choose the Select in library option, and the Change object button automatically changes to the 3D model, as shown in the following screenshot:

Sometimes, when we click on the Change object button to access the library, we have to check each tab to find where the 3D model is, but once we have the correct tab, it is easy to recognize the 3D model because of the halo around the thumbnail.

However, there is another option, called Replace with library selection. Now, this is when you start to see the full potential of Lumion and how these options will greatly improve the speed of your workflow. Taking the example in the previous screenshot, we can see that the plant used is called FicusElastica_001. Suppose we then realize that this is the wrong 3D model, but, on the other hand, the location is correct and we don't want to change the location even by a few millimeters. The Replace with library selection option is our salvation, so let's see how we can use it.

The first thing to do is to open the Nature library and select the correct 3D model, which in this case will be FicusElastica_003. After selecting this 3D model, we are back in the Build mode, but instead of placing the 3D model, we will open the Context menu and pick the 3D model that needs to be replaced. Then, click on the Selection submenu, next the Library... option, and then click on the Replace with library selection button, as exemplified in the next screenshot:

In addition to this fantastic feature, Lumion will keep not only the location, but also the rotation and scale of the previous 3D model. Can you imagine how easy it is to replace a species of tree or another model in the scene if the client doesn't like it? What about the other selection options? How can we use them?

Selection – all the Selection options

The options available under the Selection submenu are simple, particularly the ones related to deselecting a 3D model. The Selection option is another way to select a 3D model, but using the Ctrl key is much faster. However, there are two selection options that can be used either to select a wide range of 3D models or to have narrower control over what we select.
Let's say that we totally forgot to use a layer for the 3D models we placed; again, we can pick the example that we have been using with the FicusElastica_001 model. We need to place all the FicusElastica_001 models inside a layer, but we have several models scattered around the scene. One way to tackle this issue is by selecting each model individually and then moving them to a new layer. The smart way is by using the Select All Similar option, because when we use this option, all the FicusElastica_001 models are selected, as shown in the following screenshot:

In this case, we only have two copies of the FicusElastica_001 model, but it is easy to understand how powerful this option can be and how easy it is to select a large number of 3D models in seconds.

To show that a 3D model is selected, a blue wire box is drawn around the 3D model.

Eventually, we will realize that we need to copy every single tree, plant, flower, and rock from one point to another. This is a massive task to accomplish using only the normal selection technique. If you think about it for a second, all of these models are part of the same library: the Nature library. So, instead of using the Select All Similar option, we will use the Select All Similar Category option, as shown in the following screenshot:

As you can see in the previous screenshot, every single 3D model from the Nature library was selected and is ready to be controlled in the way we need and want. To deselect the 3D models, we can make use of the Deselect All option, but a quicker way is by pressing and holding the Ctrl key and clicking wherever you want. However, what if we need to select models from different categories?

Selecting different categories

Selecting different categories is not something that we can find in the Context menu, but since we are talking about selecting 3D models, you may find this trick useful. Have a closer look at the following screenshot:

In the previous screenshot, we can see models from three different categories: Indoor, People and Animals, and Outdoor. How is that possible? It is easier than you think. Start by selecting the models found in the Indoor category. The next step is selecting the People and Animals category and, by pressing and holding the Ctrl key, selecting the 3D models you want in this category. Repeat the same step for as many categories as you like, but remember that you need to select individual models and not draw a selection rectangle. This principle also works if we select the models with the Context menu.

Another option to quickly select and transform a 3D model is by pressing the F12 key. When we do this, every 3D model becomes available to be selected, and we can change the location, rotation, and height of the 3D model. Now that we have the 3D models selected, what can we do with them? Let's explore the marvelous Transformation submenu.

Summary

In this article, we saw how to place and select 3D objects using different keys. We also saw how to create your own 3D world using different tools and techniques.

Resources for Article:

Further resources on this subject:
Integrating Direct3D with XAML and Windows 8.1 [article]
Diving Straight into Photographic Rendering [article]
What is Lumion? [article]

Using PhpStorm in a Team

Packt
26 Dec 2014
11 min read
In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:

Getting a VCS server
Creating a VCS repository
Connecting PhpStorm to a VCS repository
Storing a PhpStorm project in a VCS repository

(For more resources related to this topic, see here.)

Getting a VCS server

The first action that you have to undertake is to decide which VCS you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN). Subversion is free and open source software that you can download and install on your development server. There is another, older system named Concurrent Versions System (CVS). Both are meant to provide a code versioning service to you. SVN is the newer and supposedly faster system, and in order to provide you with information on the latest matters, this text will concentrate on the features of Subversion only.

Getting ready

So, finally that moment has arrived when you will start off working in a team by getting a VCS system for you and your team. The installation of SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration because it is meant for developers who want to contribute to the Subversion system itself. Since you are dealing with PhpStorm, you need to remember the easier way because you have a lot more to do.

How to do it...

The installation step is very easy. There is the aptitude utility available with Debian-based systems, and there is the Yum utility available with Red Hat-based systems. Perform the following steps:

1. You just need to issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
2. To check whether the installation was successful, you need to issue the command whereis svn. If there is a message, it means that you installed Subversion successfully.

If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers. But that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not let go easily.

How it works...

When you install the version control system, you actually install a server that provides the version control service to a version control client. The Subversion service listens for incoming connections from remote clients on port number 3690 by default.

There's more...

If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:

1. You need to download the archive for the CVS server software.
2. You need to unpack it from the archive using your favorite unpacking software.
3. You can move it to another convenient location since you will not need to disturb this folder in the future.
4. You then need to move into the directory, and there your compilation process will start. You need to run ./configure to create the make targets.
5. Having made the targets, you need to enter make install to complete the installation procedure.

Due to CVS being older software, you might have to compile it from the source code as the only alternative.
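Put together, the commands from this recipe amount to just a handful of lines at the shell. The following is a minimal sketch; the cvs-source directory name is only a placeholder for wherever you unpacked the CVS source archive:

# Install Subversion on a Debian-based system and verify it
apt-get install subversion
whereis svn

# The older CVS alternative, built from source
cd cvs-source      # placeholder for the unpacked source directory
./configure        # creates the make targets
make install       # completes the installation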
Creating a VCS repository

More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time or adding/removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.

Getting ready

You will have to create a new repository on the development server so that PhpStorm can act as a client and get connected. Here, it is important to note the difference between an SVN client and an SVN server: an SVN client can be a standalone client or an embedded client such as an IDE. The SVN server, on the other hand, is a single item. It is a continuously running process on a server of your choice.

How to do it...

You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:

1. There is a command, svnadmin, that you need to know. Using this command, you can create a new directory on the server that will contain the code base. You should be careful when selecting a directory on the server, as it will appear in your SVN URLs from then on. The command should be executed as:

svnadmin create /path/to/your/repo/

2. Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ with the name svnserve.conf. Inside the file, you need to make three changes. You need to add these lines at the bottom of the file:

anon-access = none
auth-access = write
password-db = passwd

3. There has to be a password file to authorize a list of users who will be allowed to use the repository. The password file in this case will be named passwd (the default filename). The contents of the file will be a number of lines, each containing a username and the corresponding password in the form username = password. Since these files are scanned by the server according to a particular algorithm, you don't have the freedom to leave deliberate spaces in the file; error messages will be displayed in those cases.

4. Having made the appropriate settings, you can now make the SVN service run so that an SVN client can access it. You need to issue the command svnserve -d to do that.

5. It is always good practice to keep checking whether what you do is correct. To validate proper installation, you need to issue the command svn ls svn://user@host/path/to/subversion/repo/. The output will be as shown in the following screenshot:

How it works...

The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default at the time of server installation. The contents that are added to the file are actually configuration directives that control the behavior of the Subversion server. Thus, the settings mentioned prevent anonymous access and restrict the write operations to certain users whose access details are mentioned in a file. The command svnserve is again a command that needs to be run on the server side, and it starts the instance of the server. The -d switch mentions that the server should be run as a daemon (system process).
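As a concrete sketch, the relevant sections of conf/svnserve.conf and the passwd file next to it could look like the following. The user alice and her password are placeholder values, and note that in a stock Subversion setup the passwd file opens with a [users] section header, which the recipe text leaves implicit:

# conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# conf/passwd
[users]
alice = s3cret

With these two files in place, svnserve -d starts the daemon, and svn ls svn://alice@localhost/path/to/your/repo/ should list the (still empty) repository root without errors.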
This also means that your server will continue running until you manually stop it or the entire system goes down. Again, you can skip this section if you have opted for a third-party version control service provider.

Connecting PhpStorm to a VCS repository

The real utility of software is when you use it. So, having installed the version control system, you need to be prepared to use it.

Getting ready

With SVN being client-server software, having installed the server, you now need a client. Again, you will have difficulty searching for a good SVN client. Don't worry; the client has been factory-provided to you inside PhpStorm. The PhpStorm SVN client provides you with features that accelerate your development tasks by providing you with detailed information about the changes made to the code. So, go ahead and connect PhpStorm to the Subversion repository you created.

How to do it...

In order to connect PhpStorm to the Subversion repository, you need to activate the Subversion view. It is available at View | Tool Windows | Svn Repositories. Perform the following steps:

1. Having activated the Subversion view, you now need to add the repository location to PhpStorm. To do that, you need to use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
2. Upon selecting the Add option, PhpStorm asks you for the location of the repository. You need to provide the full location of the repository.
3. Once you provide the location, you will be able to see the repository in the same Subversion view in which you pressed the Add button.

Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine:

If you used the default installation by installing from the installer utility (apt-get or aptitude), you need to specify svn://.
If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://.
If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://.
If you configured SVN with Apache over the secure protocol, you need to specify https://.

Storing a PhpStorm project in a VCS repository

Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what advantage does it serve? What is the purpose served by merely connecting to the version control repository? Correct. The actual thing is the code that you work on. It is the code that earns you your bread.

Getting ready

You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control. It is not that you need to start a new project from scratch to add to the repository. Any project, any work that you have done and wish to have the team work on, can now be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.

How to do it...

In order to add a project to the repository, perform the following steps:

1. You need to use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
2. Select the correct hierarchy to define the share target, that is, the correct location where your project will be saved.
3. If you wish to create the tags and branches in the code base, you need to select the checkbox for the same. It is good practice to provide comments for the commits that you make. The reason behind this becomes apparent when you sit down to create a release document. It also makes the changes more understandable to the other team members.
4. PhpStorm then asks you which format you want the working copy to be in. This is related to the version of the version control software. You just need to smile, select the latest version number, and proceed, as shown in the following screenshot:
5. Having done that, PhpStorm will now ask you to enter your credentials. You need to enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:

How it works...

Here, it is worth understanding what is going on behind the curtains. When you do any Subversion-related task in PhpStorm, there is an inbuilt SVN client that executes the commands for you. Thus, when you add a project to version control, the code is given a version number. This makes the version system remember the state of the code base. In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in the future for as long as the code base is under the same version control system. Interesting phenomenon, isn't it?

There's more...

If you have installed the version control software yourself and you did not make the setting to store the password in encrypted text, PhpStorm will give you a warning about it, as shown in the following screenshot:

Summary

We got to know about version control systems, the step-by-step process of creating a VCS repository, and connecting PhpStorm to a VCS repository.

Resources for Article:

Further resources on this subject:

FuelPHP [article]
A look into the high-level programming operations for the PHP language [article]
PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 1 [article]

Downloading and Understanding Construct 2

Packt
26 Dec 2014
19 min read
In this article by Aryadi Subagio, the author of Learning Construct 2, we introduce you to Construct 2, make you familiar with the interface and the terms that Construct 2 uses, and give you a quick overview of the event system.

(For more resources related to this topic, see here.)

About Construct 2

Construct 2 is an authoring tool that makes the process of game development really easy. It can be used by a variety of people, from complete beginners in game development to experts who want to make a prototype quickly or even use Construct 2 to make games faster than ever. It is created by Scirra Ltd, a company based in London, and right now it can run on the Windows desktop platform, although you can export your games to multiple platforms. Construct 2 is an HTML5-based game editor with more than enough features for people new to game development to make their first 2D game. Some of them are:

Multiple platforms to target: You can publish your game to desktop computers (PC, Mac, or Linux), to many mobile platforms (Android, iOS, Blackberry, Windows Phone 8.0, Tizen, and many more), and also on websites via HTML5. Also, if you have a developer's license, you can publish it on Nintendo's Wii U.

No programming language required: Construct 2 doesn't use any programming language that is difficult to understand; instead, it relies on its event system, which is really easy for anyone, even without coding experience, to jump into.

Built-in physics: Using Construct 2 means you don't need to worry about complicated physics functions; it's all built into Construct 2 and is easy to use!

Can be extended (extensible): Many plugins have been written by third-party developers to add new functionality to Construct 2. Note that writing plugins is outside the scope of this book. If you have a JavaScript background and want to try your hand at writing plugins, you can access the JavaScript SDK and documentation at https://www.scirra.com/manual/15/sdk.

Special effects: There are a lot of built-in effects to make your game prettier!

You can use Construct 2 to create virtually all kinds of 2D games: platformer, endless runner, tower defense, casual, top-down shooter, and many more.

Downloading Construct 2

Construct 2 can be downloaded from Scirra's website (https://www.scirra.com/), which only requires you to click on the download button in order to get started. At the time of writing this book, the latest stable version is r184, and this tutorial is written using this version. Another great thing about Construct 2 is that it is actively developed, and the developer frequently releases beta features to gather feedback and perform bug testing.

There are two different builds of Construct 2: the beta build and the stable build. Choosing which one to download depends on your preference when using Construct 2. If you like to get your hands on the latest features, then you should choose the beta build; just remember that beta builds often have bugs. If you want a bug-proof version, then choose the stable build, but you won't be the first one to use the new features.

The installation process is really straightforward. You're free to skip this section if you like, because all you need to do is open the file and follow the instructions there. If you're installing a newer version of Construct 2, it will uninstall the older version automatically for you!

Navigating through Construct 2

Now that we have downloaded and installed Construct 2, we can start getting our hands dirty and make games with it!
Not so fast, though. As Construct 2's interface is different compared to other game-making tools, we need to know how to use it. When you open Construct 2, you will see a start page as follows:

This start page is basically here to make it easier for you to return to your most recent projects, so if you just opened Construct 2, this will be empty. What you need to pay attention to is the new project link on the left-hand side; click on it, and we'll start making games. Alternatively, you can click on File in the upper-left corner and then click on New. You'll see a selection of templates to start with, so understandably, this can be confusing if you don't know which one to pick. For now, just click on New empty project and then click on Open. Starting an empty project is good when you want to prototype your game.

What you see on the screen now is an empty layout, which is the place where we'll make our games. This also represents how your game will look. It might be confusing to navigate the first time you see this, but don't worry; I'll explain everything you need to know for now by describing it piece by piece.

The white part in the middle is the layout, because Construct 2 is a what-you-see-is-what-you-get kind of tool. This part represents how your game will look in the end. The layout is like your canvas; it's your workspace; it is where you design your levels, add your enemies, and place your floating coins. It is where you make your game. The take-home point here is that the layout size is not the same as the window size! The layout size can be bigger than the window size, but it can't be smaller than the window size. This is because the window size represents the actual game window.

The dotted line is the border of the window size, so if you put a game object outside it, it won't be initially visible in the game, unless you scroll towards it. In the preceding screenshot, only the red plane is visible to the player. Players don't see the green spaceship because it's outside the game window.

On the right-hand side, we have the Projects bar and the Objects bar. The Objects bar shows you all the objects that are used in the active layout. Note that the active layout is the one you are focused on right now; this means that, at this very instant, we only have one layout. The Objects bar is empty because we haven't added any objects.

The Projects bar helps in structuring your project, and it is organized as follows:

All layouts are stored in the Layouts folder.
Event sheets are stored in the Event sheets folder.
All objects that are used in the project are stored in the Object types folder.
All created families are in the Families folder. A family is a feature of Construct 2.
The Sounds folder contains sound effects and audio files.
The Music folder contains long background music. The difference between the Sounds folder and the Music folder is that the contents of the Music folder are streamed, while the files inside the Sounds folder are downloaded completely before they are played. This means that if you put a long music track in the Sounds folder, it will take a few minutes for it to be played, whereas from the Music folder it is streamed immediately. However, this doesn't mean that the music will be played immediately; it might need to buffer before playing.
The Files folder contains other files that don't fit into the folders mentioned earlier. One example here is Icons.

Although you can't rename or delete these folders, you can add subfolders inside them if you want.
On the left-hand side, we have the Properties bar. There are three kinds of properties: layout properties, project properties, and object properties. The information shown in the Properties bar depends on what you clicked last. There is a lot of information here, so I think it's best to explain it as we go ahead and make our game; for now, you can click on any part of the Properties bar and look at the bottom part of it for help. I'll just explain a bit about some basic things in the project properties:

Name: This is your project's name; it doesn't have to be the same as the saved file's name. So, you can have the saved file as project_wing.capx and the project's name as Shadow wing.
Version: This is your game's version number; if you plan on releasing beta versions, make sure to change this first.
Description: This is your game's short description; some application stores require you to fill this out before submitting.
ID: This is your game's unique identification. It comes in the com.companyname.gamename format, so your ID would be something like com.redstudio.shadowwing.

Creating game objects

To put it simply, everything in Construct 2 is a game object. This can range from the things that are visible on screen, for example, sprites, text, particles, and sprite fonts, to the things that are not visible but are still used in the game, such as an array, dictionary, keyboard, mouse, gamepad, and many more.

To create a new game object, you can either double-click anywhere on a layout (not on another object already present), or you can right-click your mouse and select Insert new object. Doing either one of these will open an Insert New Object dialog, where you can select the object to be inserted. You can click on the Insert button or double-click on the object icon to insert it.

There are two kinds of objects here: the objects that are inserted into the active layout and the objects that are inserted into the entire project. Objects that are visible on the screen are inserted into the active layout, and objects that are not visible on the screen are inserted into the entire project.

If you look closely, the objects are separated into a few categories, such as Data & Storage, Form controls, General, and so on. I just want to say that you should pay special attention to the objects in the Form controls category. As the technology behind Construct 2 is HTML5 and a Construct 2 game is basically a game made in JavaScript, objects such as the ones you see on web pages can be inserted into a Construct 2 game. These objects are the objects in the Form controls category. A special rule applies to these objects: we can't alter their layer order. This means that these objects are always on top of any other objects in the game. We also can't export them to platforms other than web platforms. So, if you want to make a cross-platform game, it is advised not to use the Form controls objects.

For now, insert a sprite object by following these steps:

1. After clicking on the Insert button, you will notice that your mouse cursor becomes a crosshair, and there's a floating label with the Layer 0 text. This is just a way for Construct 2 to tell you which layer you're adding your object to.
2. Click your mouse to finally insert the object. Even if you add your object to the wrong layer, you can always move it later.

When adding any object with a visual representation on screen, such as a sprite or a tiled background, Construct 2 automatically opens up its image-editing window.
You can draw an image here or simply load it from a file created using other software. Click on X in the top-right corner of the window to close it when you have finished drawing. Don't worry; this won't delete your object or image.

Adding layers

Layers are a great way to manage your objects' visual hierarchy. You can also add some visual effects to your game using layers. By default, your Layers bar is located in the same place as the Projects bar. You'll see two tabs here: Projects and Layers. Click on the Layers tab to open the Layers bar.

From here, you can add new layers and rename, delete, and even reorganize them to your liking. You can do this by clicking on the + icon a few times to add new layers; after this, you can reorganize them by dragging a layer up or down. Just as with Adobe products, you can also toggle the visibility of all objects in the same layer to make things easier while you're developing games. If you don't want to change or edit all the objects in a particular layer, which might be a background layer for instance, you can lock that layer. Take a look at the following screenshot:

There are two ways of referring to a layer: using its name (Layer 0, Layer 1, Layer 2, Layer 3) or its index (0, 1, 2, 3). As you can see from the previous screenshot, the index of a layer changes as you move a layer up or down the layer hierarchy (the layer created first isn't always the one with index number 0). The layer with index 0 will always be at the bottom, and the one with the highest index will always be at the top, so remember this because it will come in handy when you make your games.

The eye icon determines the visibility of the layer. Alternatively, you can also check the checkbox beside each layer's name. Objects from an invisible layer won't be visible in Construct 2 but will still be visible when you play the game. The lock icon, beside the layer's name at the top, toggles whether a layer is locked or not; objects from locked layers can't be edited, moved, or selected.

What is an event system?

Construct 2 doesn't use a traditional programming language. Instead, it uses a unique style of programming called an event system. However, much like traditional programming languages, it works as follows:

It executes commands from top to bottom
It executes commands at every tick
It has variables (global and local variables)
It has a feature called functions, which work in the same way as functions in a traditional programming language, without having to go into the code

An event system is used to control the objects in a layout. It can also be used to control the layout itself. An event system can be found inside an event sheet; you can access it by clicking on the event sheet tab at the top of the layout.

Reading an event system

I hope I haven't scared you all with the explanations of the event system. Please don't worry, because it's really easy! There are two components to an event system: an event and an action. Events are things that occur in the game, and actions are the things that happen when there is an event. For a clearer understanding, take a look at the following screenshot, where the event is taken from one of my game projects:

The first event, the one with number 12, is a bullet on collision with an enemy, which means that when any bullet collides with any enemy, the actions on its right-hand side will be executed.
In this case, it will subtract the enemy's health, destroy the bullet object, and create a new object for a damage effect. The next event, number 13, is what happens when an enemy's health drops below zero; the actions will destroy the enemy and add points to the score variable. Easy, right?

Take a look at how we created the redDamage object; it says on layer "Game". Every time we create a new object through an action, we also need to specify on which layer it is created. As mentioned earlier, we can refer to a layer by its name or by its index number, so either way is fine. However, I usually use a layer's name, just in case I need to rearrange the layer hierarchy later. If we use the layer's index (for example, index 1), we might later rearrange the layers so that index 1 is a different layer; this means that we would end up creating objects in the wrong layer.

Earlier, I said that an event system executes commands from top to bottom. This is true except for one kind of event: a trigger. A trigger is an event that, instead of executing at every tick, waits for something to happen before it is executed. Triggers are events with a green arrow beside them (like the bullet on collision with enemy event shown earlier). As a result of this, unlike the usual events, it doesn't matter where the triggers are placed in the event system.

Writing events

Events are written on event sheets. When you create a new layout, you can choose to add a new event sheet to this new layout. If you choose to add an event sheet, you can give it the same name as the layout or a different one. However, it is advised to name event sheets exactly the same as their layouts to make it clear which event sheet is associated with which layout. We can only link one event sheet to a layout from its properties, so if we want to add more event sheets to a layout, we must include them in that event sheet.

To write an event, just perform the following steps:

1. Click on the event sheet tab above the layout.
2. You'll see an empty event sheet; to add events, simply click on the Add event link or right-click and select Add event. Note that from now on, I will refer to the action of adding a new event with words such as add event, add new event, or something similar.
3. You'll see a new window with objects to create an event from; every time you add an event (or action), Construct 2 always shows you the objects you can add an event (or action) from. This prevents you from doing something impossible, for example, trying to modify the value of a local variable outside of its scope. I will explain local variables shortly.
4. Whether or not you have added an object, there will always be a System object to create an event from. This contains a list of events that you create directly from the game instead of from an object. Double-click on it, and you'll see a list of events you can create with the System object. There are a lot of events, and explaining them would take a long time. For now, if you're curious, there is an explanation of each event in the upper part of the window.
5. Next, scroll down and look for the Every x seconds event. Double-click on it, enter 1.0 second, and click on Done. You should have the following event:
To add an action to an event, just perform the following steps:

1. Click on the Add action link beside an event.
2. Click on an object you want to create an action from; for now, double-click on the System object.
3. Double-click on the Set layer background color action under the Layers & Layout category.
4. Change the three numbers inside the brackets to 100, 200, and 50.
5. Click on the Done button.

You should have the following event:

This action will change the background color of layer 0 to the one we set in the parameter, which is green. Also, because adding a screenshot for every code example would be troublesome, I will write my code examples as follows:

System every 1.0 seconds | System Restart layout

The left-hand side of the code is the event, and the right-hand side of the code is the action. I think this is pretty clear.

Creating a variable

I said that I'm going to explain variables, and you might have noticed a global and local variables category when you added an action. A variable is like a glass or cup, but instead of water, it holds values. These values can be of one of three types: Text, Number, or Boolean.

Text: This type holds a value of letters, words, or a sentence. This can include numbers as well, but the numbers will be treated like part of the word.
Number: This type holds numerical values and can't store any alphabetical value. The numbers are treated as numbers, which means that mathematical operations can be performed on them.
Boolean: This type only holds one of two values: True or False. This is used to check whether a certain state of an object is true or false.

To create a global variable, just right-click in an event sheet and select Add global variable. After that, you'll see a new window for adding a global variable. Here's how to fill in each field:

Name: This is the name of the variable; no two variables can have the same name, and this name is case sensitive, which means exampleText is different from ExampleText.
Type: This tells whether the variable is Text, Number, or Boolean. Only instance variables can have a Boolean type.
Initial value: This is the variable's value when first created. A text type's value must be surrounded with quotes (" ").
Description: This is an optional field; just in case the name isn't descriptive enough, an additional explanation can be written here.

After clicking on the OK button, you have created your new variable! This variable has global scope, which means that it can be accessed from anywhere within the project, while a local variable only has a limited scope and can be accessed from a certain place in the event sheet. I will cover local variables in depth later in the book.

You might have noticed that in the previous screenshot, the Static checkbox cannot be checked. This is because only local variables can be marked as static. One difference between global and local variables is that a local variable's value reverts to its initial value the next time the code is executed, while a global variable's value doesn't change until there's code that changes it. A static local variable retains its value just like a global variable. All variables' values, both global and local, can be changed from events, except the ones that are constant. Constant variables will always retain their initial value; they can never be changed. A constant variable can be used for a value that you don't want to accidentally overwrite later.

Summary

In this article, we learned about the features of Construct 2, its ease of use, and why it's perfect for people with no programming background. We learned about Construct 2's interface and how to create new layers in it. We know what objects are and how to create them. This article also introduced you to the event system and showed you how to write code in it.
Now, you are ready to start making games with Construct 2!

Resources for Article:

Further resources on this subject:

Building Mobile Apps [article]
Introducing variables [article]
HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article]

Getting Started with XenServer®

Packt
26 Dec 2014
11 min read
This article is written by Martez Reed, the author of Mastering Citrix® XenServer®. One of the most important technologies in the information technology field today is virtualization. Virtualization is beginning to span every area of IT, including but not limited to servers, desktops, applications, networks, and more. Our primary focus is server virtualization, specifically with Citrix XenServer 6.2. There are three major platforms in the server virtualization market: VMware's vSphere, Microsoft's Hyper-V, and Citrix's XenServer.

In this article, we will cover the following topics:

XenServer's overview
XenServer's features
What's new in Citrix XenServer 6.2
Planning and installing Citrix XenServer

(For more resources related to this topic, see here.)

Citrix® XenServer®

Citrix XenServer is a type 1, or bare metal, hypervisor. A bare metal hypervisor does not require an underlying host operating system. Type 1 hypervisors have direct access to the underlying hardware, which provides improved performance and guest compatibility. Citrix XenServer is based on the open source Xen hypervisor, which is widely deployed in various industries and has a proven record of stability and performance.

Citrix® XenCenter®

Citrix XenCenter is a Windows-based application that provides a graphical user interface for managing Citrix XenServer hosts from a single management interface.

Features of Citrix® XenServer®

The following section covers the features offered by Citrix XenServer:

XenMotion/Live VM Migration: The XenMotion feature allows running virtual machines to be migrated from one host to another without any downtime. XenMotion relocates the processor and memory instances of the virtual machine from one host to another, while the actual data and settings reside on shared storage. This feature is pivotal in providing maximum uptime when performing maintenance or upgrades. It requires shared storage among the hosts.

Storage XenMotion / Live Storage Migration: The Storage XenMotion feature provides functionality similar to that of XenMotion, but it is used to move a virtual machine's virtual disk from one storage repository to another without powering off the virtual machine.

High Availability: High Availability automatically restarts virtual machines on another host in the event of a host failure. This feature requires shared storage among the hosts.

Resource pools: Resource pools are a collection of Citrix XenServer hosts grouped together to form a single pool of compute, memory, network, and storage resources that can be managed as a single entity. A resource pool allows virtual machines to be started on any of the hosts and seamlessly moved between them.

Active Directory integration: Citrix XenServer can be joined to a Windows Active Directory domain to provide centralized authentication for XenServer administrators. This eliminates the need for multiple independent administrator accounts on each XenServer host in a XenServer environment.

Role-based access control (RBAC): RBAC is a feature that takes advantage of the Active Directory integration and allows administrators to define roles that have specific privileges associated with them. This allows administrative permissions to be segregated among different administrators.

Open vSwitch: The default network backend for the Citrix XenServer 6.2 hypervisor is Open vSwitch.
Open vSwitch is an open source multilayer virtual switch that brings advanced network functionality to the XenServer platform, such as NetFlow, SPAN, OpenFlow, and enhanced Quality of Service (QoS). The Open vSwitch backend is also an integral component of the platform's support for software-defined networking (SDN).

Dynamic Memory Control: Dynamic Memory Control allows XenServer to maximize physical memory utilization by sharing unused physical memory among the guest virtual machines. If a virtual machine has been allocated 4 GB of memory and is only using 2 GB, the remaining memory can be shared with the other guest virtual machines. This feature provides a mechanism for memory oversubscription.

IntelliCache: IntelliCache is a feature aimed at improving the performance of Citrix XenDesktop virtual desktops. IntelliCache creates a cache on a XenServer local storage repository, and as the virtual desktops perform read operations, the parent VM's virtual disk is copied to the cache. Write operations are also written to the local cache when nonpersistent or shared desktops are used. This mechanism reduces the load on the storage array by retrieving data from a local source for reads instead of the array. This is particularly beneficial when multiple desktops share the same parent image. This feature is only available with Citrix XenDesktop.

Disaster Recovery: The XenServer Disaster Recovery feature provides a mechanism to recover the virtual machines and vApps in the event of the failure of an entire pool or site.

Distributed Virtual Switch Controller (DVSC): DVSC provides centralized management and visibility of the networking in XenServer.

Thin provisioning: Thin provisioning allows a given amount of disk space to be allocated to virtual machines while only consuming the amount that is actually being used by the guest operating system. This feature provides more efficient use of the underlying storage due to on-demand consumption.

What's new in Citrix® XenServer® 6.2

Citrix has added a number of new and exciting features in the latest version of XenServer:

Open source
New licensing model
Improved guest support

Open source

Starting with version 6.2, the Citrix XenServer hypervisor is open source, but it continues to be managed by Citrix Systems. The move to an open source model was the result of Citrix Systems' desire to further collaborate and integrate the XenServer product with its partners and the open source community.

New licensing model

The licensing model has been changed in version 6.2; the free version of the XenServer platform now provides full functionality, and the previous advanced, enterprise, and platinum editions have been eliminated. Citrix offers paid support for the free version of the XenServer hypervisor that includes the ability to install patches/updates using the XenCenter GUI, in addition to Citrix technical support.
Improved guest support

Version 6.2 has added official support for the following guest operating systems:

Microsoft Windows 8 (full support)
Microsoft Windows Server 2012
SUSE Linux Enterprise Server (SLES) 11 SP2 (32/64-bit)
Red Hat Enterprise Linux (RHEL) (32/64-bit) 5.8, 5.9, 6.3, and 6.4
Oracle Enterprise Linux (OEL) (32/64-bit) 5.8, 5.9, 6.3, and 6.4
CentOS (32/64-bit) 5.8, 5.9, 6.3, and 6.4
Debian Wheezy (32/64-bit)

VSS support for Windows Server 2008 R2 has been improved and reintroduced. Citrix XenServer 6.2 Service Pack 1 adds support for the following operating systems:

Microsoft Windows 8.1
Microsoft Windows Server 2012 R2

Retired features

The following features have been removed from version 6.2 of Citrix XenServer:

Workload Balancing (WLB)
SCOM integration
Virtual Machine Protection Recovery (VMPR)
Web Self Service
XenConvert (this has been replaced by XenServer Conversion Manager)

Deprecated features

The following features will be removed from future releases of Citrix XenServer. Citrix has reviewed the XenServer market and has determined that there are third-party products that are able to provide the product functionality more effectively:

Microsoft System Center Virtual Machine Manager (SCVMM) support
Integrated StorageLink

Planning and installing Citrix® XenServer®

Installing Citrix XenServer is generally a simple and straightforward process that can be completed in 10 to 15 minutes. While the actual install is simple, there are several major decisions that need to be made prior to installing Citrix XenServer in order to ensure a successful deployment.

Selecting the server hardware

Typically, the first step is to select the server hardware that will be used. While the thought might be to just pick a server that fits our needs, we should also ensure that the hardware meets the documented system requirements. Checking the hardware against the Hardware Compatibility List (HCL) provided by Citrix Systems is advised to ensure that the system qualifies for Citrix support and that it will properly run Citrix XenServer. The HCL provides a list of server models that have been verified to work with Citrix XenServer. It can be found online at http://www.citrix.com/xenserver/hcl.

Meeting the system requirements

The following sections cover the minimum system requirements for Citrix XenServer 6.2.

Processor requirements

The following list covers the minimum requirements for the processor(s) to install Citrix XenServer 6.2:

One or more 64-bit x86 CPU(s), 1.5 GHz minimum; a 2 GHz or faster multicore CPU is recommended
To support VMs running Windows, an Intel VT or AMD-V 64-bit x86-based system with one or more CPU(s) is required
Virtualization technology needs to be enabled in the BIOS

Virtualization technology is disabled by default on many server platforms and needs to be manually enabled.

Memory requirements

The minimum memory requirement for installing Citrix XenServer 6.2 is 2 GB, with a recommendation of 4 GB or more for production workloads. In addition to the memory usage of the guest virtual machines, the Xen hypervisor on the Control Domain (dom0) consumes memory resources. The amount of resources consumed by the Control Domain (dom0) is based on the amount of physical memory in the host.
Hard disk requirements

The following are the minimum requirements for the hard disk(s) to install Citrix XenServer 6.2:

16 GB of free disk space minimum; 60 GB of free disk space is recommended
Direct attached storage in the form of SATA, SAS, SCSI, or PATA interfaces is supported
XenServer can be installed on a LUN presented from a storage area network (SAN) via a host bus adapter (HBA) in the XenServer host

A physical HBA is required to boot XenServer from a SAN.

Network card requirements

A 100 Mbps or faster NIC is required for installing Citrix XenServer. One or more gigabit NICs are recommended for faster P2V and export/import data transfers, and for VM live migrations.

Installing Citrix® XenServer® 6.2

The following sections cover the installation of Citrix XenServer 6.2.

Installation methods

The Citrix XenServer 6.2 installer can be launched via two methods, as listed:

CD/DVD
PXE or network boot

Installation source

There are several options for where the Citrix XenServer installation files can be stored, and depending on the scenario, one would be preferred over another. Typically, the HTTP, FTP, or NFS option would be used when the installer is booted over the network via PXE or when a scripted installation is being performed. The installation sources are as follows:

Local media (CD/DVD)
HTTP or FTP
NFS

Supplemental packs

Supplemental packs provide additional functionality to the XenServer platform through features such as enhanced hardware monitoring and third-party management software integration. The supplemental packs are typically downloaded from the vendor's website and are installed when prompted during the XenServer installation.

XenServer® installation

The following steps cover installing Citrix XenServer 6.2 from a CD:

1. Boot the server from the Citrix XenServer 6.2 installation media and press Enter when prompted to start the Citrix XenServer 6.2 installer.
2. Select the desired key mapping and select Ok to proceed.
3. Press F9 if additional drivers need to be installed, or select Ok to continue.
4. Accept the EULA.
5. Select the hard drive for the Citrix XenServer installation and choose Ok to proceed.
6. Select the hard drive(s) to be used for storing the guest virtual machines and choose Ok to continue. You need to select the Enable thin provisioning (Optimized storage for XenDesktop) option to make use of the IntelliCache feature.
7. Select the installation media source and select Ok to continue.
8. Install supplemental packs if necessary, then choose No to proceed.
9. Select Verify installation source and select Ok to begin the verification. The installation media should be verified at least once to ensure that none of the installation files are corrupt.
10. Choose Ok to continue after the verification has successfully completed.
11. Provide and confirm a password for the root account and select Ok to proceed.
12. Select the network interface to be used as the primary management interface and choose Ok to continue.
13. Select the Static configuration option and provide the requested information. Choose Ok to continue.
14. Enter the desired hostname and DNS server information. Select Ok to proceed.
15. Select the appropriate geographical area to configure the time zone and select Ok to continue.
16. Select the appropriate city or area to configure the time zone and select Ok to proceed.
17. Select Using NTP or Manual time entry for the server to determine the local time and choose Ok to continue.
Using NTP to synchronize the time of XenServer hosts in a pool is recommended to ensure that the time on all the hosts in the pool is synchronized.

18. Enter the IP address or hostname of the desired NTP server(s) and select Ok to proceed.
19. Select Install XenServer to start the installation.
20. Click on Ok to restart the server after the installation has completed. The following screen should be presented after the reboot:

Summary

In this article, we covered an overview of Citrix XenServer along with the features that are available. We also looked at the new features that were added in XenServer 6.2 and then examined installing XenServer.

Resources for Article:

Further resources on this subject:

Understanding Citrix® Provisioning Services 7.0 [article]
Designing a XenDesktop® Site [article]
Installation and Deployment of Citrix Systems®' CPSM [article]

Metrics in vRealize Operations

Packt
26 Dec 2014
25 min read
In this article by Iwan 'e1' Rahabok, author of VMware vRealize Operations Performance and Capacity Management, we will learn that vSphere 5.5 comes with many counters, many more than what a physical server provides. There are new counters that do not have a physical equivalent, such as memory ballooning, CPU latency, and vSphere replication. In addition, some counters have the same name as their physical world counterparts but behave differently in vSphere. Memory usage is a common one, resulting in confusion among system administrators. For those counters that are similar to their physical world counterparts, vSphere may use different units, such as milliseconds.

(For more resources related to this topic, see here.)

As a result, experienced IT administrators find it hard to master vSphere counters by building on their existing knowledge. Instead of trying to relate each counter to its physical equivalent, I find it useful to group them according to their purpose.

Virtualization formalizes the relationship between the infrastructure team and the application team. The infrastructure team changes from system builder to service provider. The application team no longer owns the physical infrastructure. The application team becomes a consumer of a shared service: the virtual platform. Depending on the Service Level Agreement (SLA), the application team can be served as if they have dedicated access to the infrastructure, or they can take a performance hit in exchange for a lower price. For SLAs where performance matters, a VM running in the cluster should not be impacted by any other VMs. Its performance must be as good as if it were the only VM running on the ESXi host.

Because there are two different counter users, there are two different purposes. The application team (developers and the VM owner) only cares about their own VM. The infrastructure team has to care about both the VM and the infrastructure, especially when they need to show that the shared infrastructure is not a bottleneck. One set of counters is used to monitor the VM; the other set is used to monitor the infrastructure. The following diagram shows the two different purposes and what we should check for each. By knowing what matters on each layer, we can better manage the virtual environment.

The two-tier IT organization

At the VM layer, we care whether the VM is being served well by the platform. Other VMs are irrelevant from the VM owner's point of view. A VM owner only wants to make sure his or her VM is not contending for a resource. So the key counter here is contention. Only when we are satisfied that there is no contention can we proceed to check whether the VM is sized correctly or not. Most people check for utilization first because that is what they are used to monitoring in the physical infrastructure. In a virtual environment, we should check for contention first.

At the infrastructure layer, we care whether it serves everyone well. Make sure that there is no contention for resources among all the VMs on the platform. Only when the infrastructure is clear of contention can we troubleshoot a particular VM. If the infrastructure is having a hard time serving the majority of the VMs, there is no point troubleshooting a particular VM.

This two-layer concept is also implemented by vSphere in its compute and storage architectures. For example, there are two distinct layers of memory in vSphere. There is the individual VM memory provided by the hypervisor, and there is the physical memory at the host level.
For an individual VM, we care whether the VM is getting enough memory. At the host level, we care whether the host has enough memory for everyone. Because of the difference in goals, we look at a different set of counters.

In the previous diagram, there are two numbers shown in a large font, indicating that there are two main steps in monitoring. Each step applies to each layer (the VM layer and the infrastructure layer), so there are two numbers for each step. Step 1 is used for performance management. It is useful during troubleshooting or when checking whether we are meeting performance SLAs or not. Step 2 is used for capacity management. It is useful as part of long-term capacity planning. The time period for step 2 is typically 3 months, as we are checking for overall utilization and not a one-off spike.

With the preceding concept in mind, we are ready to dive into more detail. Let's cover compute, network, and storage.

Compute

The following diagram shows how a VM gets its resources from ESXi. It is a pretty complex diagram, so let me walk you through it.

The tall rectangular area represents a VM. Say this VM is given 8 GB of virtual RAM. The bottom line represents 0 GB and the top line represents 8 GB. The VM is configured with 8 GB RAM. We call this Provisioned. This is what the Guest OS sees, so if it is running Windows, you will see 8 GB RAM when you log in to Windows.

Unlike a physical server, you can configure a Limit and a Reservation. This is done outside the Guest OS, so Windows or Linux does not know about it. You should minimize the use of Limit and Reservation, as they make operations more complex.

Entitlement means what the VM is entitled to. In this example, the hypervisor entitles the VM to a certain amount of memory. I did not show a solid line and used an italic font style to mark that Entitlement is not a fixed value, but a dynamic value determined by the hypervisor. It varies every minute, determined by the Limit, Shares, and Reservation of the VM itself and any shared allocation with other VMs running on the same host.

Obviously, a VM can only use what it is entitled to at any given point of time, so the Usage counter does not go higher than the Entitlement counter. The green line shows that Usage ranges from 0 to the Entitlement value. In a healthy environment, the ESXi host has enough resources to meet the demands of all the VMs on it with sufficient overhead. In this case, you will see that the Entitlement, Usage, and Demand counters are similar to one another when the VM is highly utilized. This is shown by the green line, where Demand stops at Usage, and Usage stops at Entitlement. The numerical values may not be identical, because vCenter reports Usage in percentage as an average value over the sample period, while it reports Entitlement in MHz, taking the latest value in the sample period, and Demand in MHz as an average value over the sample period. This also explains why you may see Usage slightly higher than Entitlement on a highly utilized vCPU. If the VM has low utilization, you will see that the Entitlement counter is much higher than Usage.

An environment in which the ESXi host is resource constrained is unhealthy. It cannot give every VM the resources they ask for. The VMs demand more than they are entitled to use, so the Usage and Entitlement counters will be lower than the Demand counter. The Demand counter can go higher than Limit naturally. For example, if a VM is limited to 2 GB of RAM and it wants to use 14 GB, then Demand will exceed Limit.
Obviously, Demand cannot exceed Provisioned. This is why the red line stops at Provisioned, because that is as high as it can go.

The difference between what the VM demands and what it gets to use is the Contention counter. Contention is Demand minus Usage. So if the Contention is 0, the VM can use everything it demands. This is the ultimate goal, as performance will match the physical world. For example, if a VM demands 4 GHz of CPU but is only able to use 3 GHz, its Contention is 1 GHz. This Contention value is useful to demonstrate that the infrastructure provides a good service to the application team. If a VM owner comes to see you and says that your shared infrastructure is unable to serve his or her VM well, both of you can check the Contention counter.

The Contention counter should become a part of your SLA or Key Performance Indicator (KPI). It is not sufficient to track utilization alone. When there is contention, it is possible that both your VM and the ESXi host have low utilization, and yet your customers (the VMs running on that host) perform poorly. This typically happens when the VMs are relatively large compared to the ESXi host. Let me give you a simple example to illustrate this. The ESXi host has two sockets and 20 cores. Hyper-threading is not enabled to keep this example simple. You run just 2 VMs, but each VM has 11 vCPUs. As a result, they will not be able to run concurrently. The hypervisor will schedule them sequentially, as there are only 20 physical cores to serve 22 vCPUs. Here, both VMs will experience high contention.

Hold on! You might say, "There is no Contention counter in vSphere, and no memory Demand counter either." This is where vRealize Operations comes in. It does not just regurgitate the values in vCenter. It has implicit knowledge of vSphere and a set of derived counters with formulae that leverage that knowledge.

You need to have an understanding of how the vSphere CPU scheduler works. The following diagram shows the various states that a VM can be in:

The preceding diagram is taken from The CPU Scheduler in VMware vSphere® 5.1: Performance Study (you can find it at http://www.vmware.com/resources/techresources/10345). This is a whitepaper that documents the CPU scheduler in a good amount of depth for VMware administrators. I highly recommend you read this paper, as it will help you explain to your customers (the application team) how your shared infrastructure juggles all those VMs at the same time. It will also help you pick the right counters when you create your custom dashboards in vRealize Operations.

Storage

If you look at the ESXi and VM metric groups for storage in the vCenter performance chart, it is not clear at first glance how they relate to one another. You have the storage network, storage adapter, storage path, datastore, and disk metric groups that you need to check. How do they impact one another? I have created the following diagram to explain the relationship. The beige boxes are what you are likely to be familiar with. You have your ESXi host, and it can have NFS Datastore, VMFS Datastore, or RDM objects. The blue colored boxes represent the metric groups.

From ESXi to disk

NFS and VMFS datastores differ drastically in terms of counters, as NFS is file-based while VMFS is block-based. NFS uses the vmnic, so the adapter type (FC, FCoE, or iSCSI) is not applicable. Multipathing is handled by the network, so you don't see it in the storage layer. For VMFS or RDM, you have more detailed visibility of the storage. To start off, each ESXi adapter is visible, and you can check the counters for each of them.
In terms of relationships, one adapter can have many devices (disk or CD-ROM). One device is typically accessed via two storage adapters (for availability and load balancing), and it is also accessed via two paths per adapter, with the paths diverging at the storage switch. A single path belongs to a specific adapter and naturally connects that one adapter to one device. The following diagram shows the four paths:

Paths from ESXi to storage

A storage path takes data from ESXi to the LUN (the term used by vSphere is Disk), not to the datastore. So if the datastore has multiple extents, there are four paths per extent. This is one reason why I did not use more than one extent, as each extent adds four paths. If you are not familiar with extents, Cormac Hogan explains them well in this blog post: http://blogs.vmware.com/vsphere/2012/02/vmfs-extents-are-they-bad-or-simply-misunderstood.html

For VMFS, you can see the same counters at both the Datastore level and the Disk level. Their values will be identical if you follow the recommended configuration of creating a 1:1 relationship between a datastore and a LUN. This means you present an entire LUN to a datastore (and use all of its capacity).

The following screenshot shows how we manage ESXi storage. Click on the ESXi host you need to manage, select the Manage tab, and then the Storage subtab. In this subtab, we can see the adapters, devices, and the host cache. The screen shows an ESXi host with the list of its adapters. I have selected vmhba2, which is an FC HBA. Notice that it is connected to 5 devices. Each device has 4 paths, so I have 20 paths in total.

ESXi adapter

Let's move on to the Storage Devices tab. The following screenshot shows the list of devices. Because NFS is not a disk, it does not appear here. I have selected one of the devices to show its properties.

ESXi device

If you click on the Paths tab, you will be presented with the information shown in the next screenshot, including whether a path is active. Note that not all paths carry I/O; it depends on your configuration and multipathing software. Because each LUN typically has four paths, path management can be complicated if you have many LUNs.

ESXi paths

The story is quite different at the VM layer. A VM does not see the underlying shared storage; it sees local disks only. So regardless of whether the underlying storage is NFS, VMFS, or RDM, it sees all of them as virtual disks. You lose visibility into the physical adapter (for example, you cannot tell how many IOPS on vmhba2 are coming from a particular VM) and the physical paths (for example, how many disk commands travelling on a given path are coming from a particular VM). You can, however, see the impact at the Datastore level and the physical Disk level. The Datastore counter is especially useful. For example, if you notice that your IOPS is higher at the Datastore level than at the virtual Disk level, it means you have a snapshot. The snapshot I/O is not visible at the virtual Disk level, as the snapshot is stored on a different virtual disk.

From VM to disk

Counters in vCenter and vRealize Operations

We compared the metric groups between vCenter and vRealize Operations. We know that vRealize Operations provides a lot more detail, especially for larger objects such as vCenter, data center, and cluster. It also provides information about the distributed switch, which is not displayed in vCenter at all. This makes it useful for big-picture analysis. We will now look at individual counters.
To give us a two-dimensional analysis, I will not approach this from the vSphere objects' point of view. Instead, we will examine the four key types of metrics (CPU, RAM, network, and storage). For each type, I will provide my personal take on what I think is good guidance for their values. For example, I will give guidance on a good value for CPU contention based on what I have seen in the field. This is not an official VMware recommendation; I will state the official or popular recommendation where I am aware of it.

You should spend time understanding vCenter counters and esxtop counters. This section of the article is not meant to replace the manual. I would encourage you to read the vSphere documentation on this topic, as it gives you the required foundation for working with vRealize Operations. The following are the relevant links:

- The link for vSphere 5.5 is http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.monitoring.doc/GUID-12B1493A-5657-4BB3-8935-44B6B8E8B67C.html. If this link does not work, visit https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html and then navigate to ESXi and vCenter Server 5.5 Documentation | vSphere Monitoring and Performance | Monitoring Inventory Objects with Performance Charts.
- The counters are documented in the vSphere API. You can find them at http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.PerformanceManager.html. If this link has changed and no longer works, open the vSphere online documentation and navigate to vSphere API/SDK Documentation | vSphere Management SDK | vSphere Web Services SDK Documentation | VMware vSphere API Reference | Managed Object Types | P. Here, choose Performance Manager from the list under the letter P.
- The esxtop manual provides good information on the counters. You can find it at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.

You should also be familiar with the architecture of ESXi, especially how the scheduler works.

vCenter has a different collection interval (sampling period) depending on the timeline you are looking at. Most of the time, you will be looking at the real-time statistics (chart), as the other timelines do not have enough counters. You will notice right away that most of the counters become unavailable once you choose another timeline. In the real-time chart, each data point holds 20 seconds' worth of data. That is as accurate as it gets in vCenter. Because all other performance management tools (including vRealize Operations) get their data from vCenter, they do not get anything more granular than this. As mentioned previously, esxtop allows you to sample down to a minimum of 2 seconds.

Speaking of esxtop, you should be aware that not all counters are exposed in vCenter. For example, if you turn on 3D graphics, a separate SVGA thread is created for that VM. This thread can consume CPU, and it will not show up in vCenter. The Mouse, Keyboard, Screen (MKS) threads, which give you the console, also do not show up in vCenter.

The next screenshot shows how you lose most of your counters if you choose a timespan other than real time. In the case of CPU, you are basically left with two counters, as Usage and Usage in MHz cover the same thing. You also lose the ability to monitor per core, as the target objects now list only the host and not the individual cores.

Counters are lost beyond 1 hour

Because the real-time timespan only lasts for 1 hour, performance troubleshooting has to be done at the present moment.
If the performance issue cannot be recreated, there is no way to troubleshoot it in vCenter. This is where vRealize Operations comes in, as it keeps your data for a much longer period. I was able to perform troubleshooting for a client on a problem that had occurred more than a month earlier!

vRealize Operations takes data every 5 minutes. This means it is not suitable for troubleshooting performance problems that do not last for 5 minutes. In fact, if a performance issue lasts for only 5 minutes, you may not get any alert, because the collection may happen exactly in the middle of those 5 minutes. For example, let's assume the CPU is idle from 08:00:00 to 08:02:30, spikes from 08:02:30 to 08:07:30, and then is idle again from 08:07:30 to 08:10:00. If vRealize Operations collects at exactly 08:00, 08:05, and 08:10, you will not see the spike, as it is spread over two data points. This means that for vRealize Operations to pick up a spike in its entirety, without any idle data, the spike has to last for 10 minutes or more.

In some metrics, the underlying sample is actually 20 seconds: vRealize Operations averages a set of 20-second data points into a single 5-minute data point.

The Rollups column is important. Average means the average of 5 minutes in the case of vRealize Operations. The Summation value is actually an average for those counters where accumulation makes more sense. An example is CPU Ready time: it gets accumulated over the sampling period. Over a period of 20 seconds, a VM may accumulate 200 milliseconds of CPU Ready time. This translates into 1 percent, which is why I said it is similar to an average, as you lose the peak. Latest, on the other hand, is different: it takes the last value of the sampling period. For example, in a 20-second sample, it takes the value between the 19th and 20th seconds. This value can be lower or higher than the average of the entire 20-second period. So what is missing here is the peak of the sampling period. For the 5-minute period, vRealize Operations does not collect low, average, and high values from vCenter; it takes the average only.

Let's talk about the Units column now. Some common units are milliseconds, MHz, percent, KBps, and KB. Some counters are shown in MHz, which means you need to know your ESXi physical CPU frequency. This can be difficult due to CPU power-saving features, which lower the CPU frequency when demand is low. In large environments, this can be operationally difficult, as you have ESXi hosts from different generations (and hence likely to sport different GHz values). This is also the reason why I state that the cluster is the smallest logical building block. If your cluster has ESXi hosts with different frequencies, these MHz-based counters can be difficult to use, as the VMs get vMotion-ed by DRS.

vRealize Operations versus vCenter

I mentioned earlier that vRealize Operations does not simply regurgitate what vCenter has. Some vSphere-specific characteristics are not properly understood by traditional management tools, and partial understanding can lead to misunderstanding. vRealize Operations starts by fully understanding the unique behavior of vSphere, then simplifies it by consolidating and standardizing the counters. For example, vRealize Operations creates derived counters such as Contention and Workload, and then applies them to CPU, RAM, disk, and network.

Let's take a look at one example of how partial information can be misleading in a troubleshooting scenario. It is common for customers to invest in an ESXi host with plenty of RAM.
I've seen hosts with 256 to 512 GB of RAM. One reason behind this is the way vCenter displays information. In the following screenshot, vCenter is giving me an alert: the host is running on high memory utilization. I'm not showing the other host, but you can see that it has a warning, as its utilization is high too. The screenshots are all from vCenter 5.0 and vCenter Operations 5.7, but the behavior is still the same in vCenter 5.5 Update 2 and vRealize Operations 6.0.

vSphere 5.0 – Memory alarm

I'm using vSphere 5.0 and vCenter Operations 5.x for the screenshots as I want to provide an example of the point I stated earlier, which is the rapid change of vCloud Suite.

The first step is to check whether someone has modified the alarm by reducing the threshold. The next screenshot shows that utilization above 95 percent will trigger an alert, while utilization above 90 percent will trigger a warning. The threshold has to be breached for at least 5 minutes. The alarm is set to a suitably high configuration, so we will assume the alert genuinely indicates high utilization on the host.

vSphere 5.0 – Alarm settings

Let's verify the memory utilization. I'm checking both hosts, as there are two of them in the cluster. Both are indeed high. The utilization for vmsgesxi006 has gone down in the time taken to review the Alarm Settings tab and move to this view, so both hosts are now in the Warning status.

vSphere 5.0 – Hosts tab

Now we will look at the vmsgesxi006 specification. From the following screenshot, we can see that it has 32 GB of physical RAM, and RAM usage is 30747 MB. It is at 93.8 percent utilization.

vSphere – Host's summary page

Since all the numbers shown in the preceding screenshot are refreshed within minutes, we need to check a longer timeline to make sure this is not a one-time spike. So let's check the last 24 hours. The next screenshot shows that the utilization was indeed consistently high: for the entire 24-hour period, it was consistently above 92.5 percent, and it hit 95 percent several times. So this ESXi host did indeed appear to be in need of more RAM.

Deciding whether to add more RAM is complex; there are many factors to be considered. There will be downtime on the host, and you need to do it for every host in the cluster, since you need to maintain a consistent build cluster-wide. Because the ESXi host is highly utilized, I should increase the RAM significantly so that I can support more VMs or larger VMs. Buying bigger DIMMs may mean throwing away the existing DIMMs, as there are rules restricting the mixing of DIMMs, and mixing DIMMs also increases management complexity. A new DIMM may require a BIOS update, which may trigger a change request. Alternatively, the larger DIMM may not be compatible with the existing host, in which case I have to buy a new box. So a RAM upgrade may trigger a host upgrade, which is a larger project.

Before jumping into a procurement cycle to buy more RAM, let's double-check our findings. It is important to ask what the host is used for and who is using it. In this example scenario, we examined a lab environment, the VMware ASEAN lab. Let's check the memory utilization again, this time with that context in mind. The preceding graph shows high memory utilization over a 24-hour period, yet no one was using the lab in the early hours of the morning! I am aware of this as I am the lab administrator.

We will now turn to vCenter Operations for an alternative view. The following screenshot from vCenter Operations 5 tells a different story.
CPU, RAM, disk, and network are all in the healthy range. Specifically for RAM, the host shows 97 percent utilization but only 32 percent demand. Note that the Memory chart is divided into two parts. The upper one is at the ESXi level, while the lower one shows the individual VMs on that host. The upper part is in turn split into two: a green rectangle (Demand) sits on top of a grey rectangle (Usage). The green rectangle shows a healthy figure at around 10 GB. The grey rectangle is much longer, almost filling the entire area. The lower part shows the hypervisor's and the VMs' memory utilization; each little green box represents one VM.

On the bottom left, note the KEY METRICS section. vCenter Operations 5 shows that Memory | Contention is 0 percent. This means none of the VMs running on the host is contending for memory. They are all being served well!

vCenter Operations 5 – Host's details page

I shared earlier that the behavior remains the same in vCenter 5.5, so let's take a look at how memory utilization is shown there. The next screenshot shows the counters provided by vCenter 5.5. This is from a different ESXi host, as I want to provide you with a second example. Notice that ballooning is 0, so there is no memory pressure on this host. The host has 48 GB of RAM. About 26 GB has been mapped to VMs or the VMkernel, which is shown by the Consumed counter (the highest line in the chart; notice that its value is almost constant). The Usage counter shows 52 percent because it takes its value from Consumed. The active memory is a lot lower, as you can see from the line at the bottom; notice that this line is not a simple straight line, as its value goes up and down, while Usage stays almost flat like Consumed. This shows that the Usage counter is actually based on the Consumed counter.

vCenter 5.5 Update 1 memory counters

At this point, some readers might wonder whether this is a bug in vCenter. No, it is not. There are situations in which you want to use consumed memory rather than active memory; in fact, some applications may not run properly if you size based on active memory. Also, technically it is not a bug, as the data it gives is correct. It is just that additional data would give a more complete picture, since we are at the ESXi level and not at the VM level. vRealize Operations distinguishes between active memory and consumed memory and provides both types of data.

vCenter uses the Consumed counter for the utilization of the ESXi host. As you will see later in this article, vCenter uses the Active counter for the utilization of a VM. So the Usage counter has a different formula in vCenter depending on the object. This makes sense, as they are at different levels. vRealize Operations uses the Active counter for utilization. Just because a physical DIMM on the motherboard is mapped to a virtual DIMM in the VM, it does not mean it is actively used (read or written). You can use that memory for other VMs and you will not incur (for practical purposes) any performance degradation. It is common for Microsoft Windows to initialize pages with zeroes upon boot, but never use them subsequently.

For further information on this topic, I would recommend reviewing Kit Colbert's presentation on memory in vSphere at VMworld 2012. The content is still relevant for vSphere 5.x. The title is Understanding Virtualized Memory Performance Management and the session ID is INF-VSP1729. You can find it at http://www.vmworld.com/docs/DOC-6292. If the link has changed, the link to the full list of VMworld 2012 sessions is http://www.vmworld.com/community/sessions/2012/.
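To make the difference concrete, here is a quick back-of-the-envelope sketch of the same arithmetic. The 26 GB Consumed figure comes from the 48 GB host above; the Active figure is an assumed value, used purely for illustration:

// Sketch: host memory utilization computed two ways.
// consumedGB is taken from the 48 GB host example above; activeGB is
// an assumed value for illustration, not a figure from the screenshots.
var physicalRamGB = 48;
var consumedGB = 26;  // memory mapped to VMs or the VMkernel
var activeGB = 6;     // assumption: memory recently read or written

var usageFromConsumed = (consumedGB / physicalRamGB) * 100; // ~54 percent
var usageFromActive = (activeGB / physicalRamGB) * 100;     // ~12.5 percent

console.log("Usage based on Consumed: " + usageFromConsumed.toFixed(1) + "%");
console.log("Usage based on Active:   " + usageFromActive.toFixed(1) + "%");

The same host can therefore look nearly full or mostly idle depending on which counter a tool picks.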
Not all performance management tools understand this vCenter-specific characteristic. They would have given you a recommendation to buy more RAM.

Summary

In this article, we covered the world of counters in vCenter and vRealize Operations. The counters were analyzed based on their four main groupings (CPU, RAM, disk, and network). We also covered each of the metric groups, which map to the corresponding objects in vCenter. For the counters, we also shared how they are related and how they differ.

Resources for Article:

Further resources on this subject:
- Backups in the VMware View Infrastructure [Article]
- VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
- An Introduction to VMware Horizon Mirage [Article]


Application Connectivity and Network Events

Packt
26 Dec 2014
10 min read
In this article by Kerri Shotts, author of PhoneGap for Enterprise, we will see how an app reacts to network changes and activities. In an increasingly connected world, mobile devices aren't always connected to the network. As such, an app needs to be sensitive to changes in the device's network connectivity. It also needs to be sensitive to the type of network (for example, cellular versus wired), not to mention being sensitive to the device the app itself is running on. Given all this, we will cover the following topics:

- Determining network connectivity
- Getting the current network type
- Detecting changes in connectivity
- Handling connectivity issues

(For more resources related to this topic, see here.)

Determining network connectivity

In a perfect world, we'd never have to worry whether the device was connected to the network or whether our backend was reachable. Of course, we don't live in that world, so we need to respond appropriately when the device's network connectivity changes. What's critical to remember is that having a network connection in no way determines the reachability of a host. That is to say, it's entirely possible for a device to be connected to a Wi-Fi network or a mobile hotspot and yet be unable to contact your servers. This can happen for several reasons (any of which can prevent proper communication with your backend). In short, determining the network status and being sensitive to changes in that status really tells you only one thing: whether or not it is futile to attempt communication. After all, if the device isn't connected to any network, there's no reason to attempt communication over a nonexistent network. On the other hand, if a network is available, the only way to determine whether your hosts are reachable is to try to contact them.

The ability to determine the device's network connectivity and respond to changes in its status is not available in Cordova/PhoneGap by default. You'll need to add a plugin before you can use this particular feature. You can install the plugin as follows:

cordova plugin add org.apache.cordova.network-information

The plugin's complete documentation is available at https://github.com/apache/cordova-plugin-network-information/blob/master/doc/index.md.

Getting the current network type

Anytime after the deviceready event fires, you can query the plugin for the status of the current network connection by checking navigator.connection.type:

var networkType = navigator.connection.type;
switch (networkType) {
    case Connection.UNKNOWN:
        console.log("Unknown connection.");
        break;
    case Connection.ETHERNET:
        console.log("Ethernet connection.");
        break;
    case Connection.WIFI:
        console.log("Wi-Fi connection.");
        break;
    case Connection.CELL_2G:
        console.log("Cellular (2G) connection.");
        break;
    case Connection.CELL_3G:
        console.log("Cellular (3G) connection.");
        break;
    case Connection.CELL_4G:
        console.log("Cellular (4G) connection.");
        break;
    case Connection.CELL:
        console.log("Cellular connection.");
        break;
    case Connection.NONE:
        console.log("No network connection.");
        break;
}

If you executed the preceding code on a typical mobile device, you'd probably see some variation of the Cellular connection or Wi-Fi connection message. If your device was on Wi-Fi and you proceeded to disable it and rerun the app, the Wi-Fi notice would be replaced with the Cellular connection notice. Now, if you put the device into airplane mode and rerun the app, you should see No network connection.
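In practice, you may want to wrap this check in a couple of small helpers rather than repeating the switch statement. The following is a minimal sketch; the helper names are our own, not part of the plugin:

// Sketch: convenience helpers around navigator.connection.type.
// hasNetwork() and isCellular() are our own names, not plugin APIs.
function hasNetwork() {
    return navigator.connection.type !== Connection.NONE;
}

function isCellular() {
    var t = navigator.connection.type;
    return t === Connection.CELL ||
           t === Connection.CELL_2G ||
           t === Connection.CELL_3G ||
           t === Connection.CELL_4G;
}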
Based on the available network type constants, it's clear that we can use this information in various ways:

- We can tell whether it makes sense to attempt a network request: if the type is Connection.NONE, there's no point in trying, as there's no network to service the request.
- We can tell whether we are on a wired network, a Wi-Fi network, or a cellular network. Consider a streaming video app: this app can permit full-quality video on a wired/Wi-Fi network, but use a lower-quality video stream when running on a cellular connection.

Although tempting, there's one thing the earlier code does not tell us: the speed of the network. That is, we can't use the type of the network as a proxy for the available bandwidth, even though it feels like we can. After all, aren't Ethernet connections typically faster than Wi-Fi connections? And isn't a 4G cellular connection faster than a 2G connection? In ideal circumstances, you'd be right. Unfortunately, it's possible for a fast 4G cellular network to be very congested, resulting in poor throughput. Likewise, it is possible for an Ethernet connection to run over a noisy wire or through a heavily congested network, which can also slow throughput.

It's also important to recognize that although you can learn something about the network the device is connected to, you can't learn anything about the network conditions beyond that network. The device might indicate that it is attached to a Wi-Fi network, but this Wi-Fi network might actually be a mobile hotspot; it could be connected to a satellite link with high latency, or to a blazing-fast fiber network. As such, the only two things we can know for sure are whether it makes sense to attempt a request, and whether we should limit bandwidth use when the device knows it is on a cellular connection. That's it. Any other use of this information is an abuse of the plugin and is likely to cause undesirable behavior.

Detecting changes in connectivity

Determining the type of network connection once does little good, as the device can lose the connection or join a new network at any time. This means that we need to respond properly to these events in order to provide a good user experience.

Do not rely on the following events being fired when your app starts up for the first time. On some devices, it might take several seconds for the first event to fire, and in some cases (specifically, when testing in a simulator), the events might never fire.

There are two events our app needs to listen for: the online event and the offline event. Their names are indicative of their function, so chances are good you already know what they do. The online event is fired when the device connects to a network, assuming it wasn't connected to a network before. The offline event does the opposite: it is fired when the device loses its connection to a network, but only if the device was previously connected to one. This means that you can't depend on these events to detect changes in the type of the network: a move from a Wi-Fi network to a cellular network might not trigger any events at all. In order to listen for these events, you can use the following code:

document.addEventListener("online", handleOnlineEvent, false);
document.addEventListener("offline", handleOfflineEvent, false);

The event listener doesn't receive any information, so you'll almost certainly want to check the network type when handling an online event.
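As a rough sketch of what such a handler might look like (the handler names match the listeners registered above; the low-bandwidth behavior is purely illustrative):

// Sketch: check the connection type once the online event fires.
// The lower-quality media behavior here is illustrative only.
function handleOnlineEvent() {
    var t = navigator.connection.type;
    if (t === Connection.CELL || t === Connection.CELL_2G ||
        t === Connection.CELL_3G || t === Connection.CELL_4G) {
        // for example, switch to lower-quality media streams
    }
    // this is also a reasonable place to retry any queued requests
}

function handleOfflineEvent() {
    // hold outgoing requests until the online event fires
}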
The offline event will always correspond to a Connection.NONE network type.

Having the ability to detect changes in the connectivity status means that our app can be more intelligent about how it handles network requests, but it doesn't tell us whether a request is guaranteed to succeed.

Handling connectivity issues

As the only way to know whether a network request might succeed is to actually attempt the request, we need to know how to properly handle the errors that might arise from such an attempt. Between the mobile and the middle tier, the following are the possible errors that you might encounter while connecting over the network:

- TimeoutError: This error is thrown when the XHR times out. (The default is 30 seconds for our wrapper; if the XHR's timeout isn't otherwise set, it will attempt to wait forever.)
- HTTPError: This error is thrown when the XHR completes and receives a response other than 200 OK. This can indicate any number of problems, but it does not indicate a network connectivity issue.
- JSONParseError: This error is thrown when the XHR completes, but the JSON response from the server cannot be parsed. Something is clearly wrong on the server, of course, but this does not indicate a connectivity issue.
- XHRError: This error is thrown when an error occurs while executing the XHR. This is definitely indicative of something going very wrong (not necessarily a connectivity issue, but there's a good chance).
- MaxRetryAttemptsReached: This error is thrown when the XHR wrapper has given up retrying the request. The wrapper automatically retries in the case of TimeoutError and XHRError.

In all the earlier cases, the catch method in the promise chain is called. At this point, you can attempt to determine the type of error in order to decide what to do next:

function sendFailRequest() {
    XHR.send("GET", "http://www.really-bad-host-name.com/this/will/fail")
    .then(function(response) {
        console.log(response);
    })
    .catch(function(err) {
        if (err instanceof XHR.XHRError ||
            err instanceof XHR.TimeoutError ||
            err instanceof XHR.MaxRetryAttemptsReached) {
            if (navigator.connection.type === Connection.NONE) {
                // we could try again once we have a network connection
                var retryRequest = function() {
                    sendFailRequest();
                    APP.removeGlobalEventListener("networkOnline", retryRequest);
                };
                // wait for the network to come online - we'll cover
                // this method in a moment
                APP.addGlobalEventListener("networkOnline", retryRequest);
            } else {
                // we have a connection, but can't get through;
                // something's going on that we can't fix.
                alert("Notice: can't connect to the server.");
            }
        }
        if (err instanceof XHR.HTTPError) {
            switch (err.HTTPStatus) {
                case 401: // unauthorized, log the user back in
                    break;
                case 403: // forbidden, user doesn't have access
                    break;
                case 404: // not found
                    break;
                case 500: // internal server error
                    break;
                default:
                    console.log("unhandled error: ", err.HTTPStatus);
            }
        }
        if (err instanceof XHR.JSONParseError) {
            console.log("Issue parsing XHR response from server.");
        }
    }).done();
}
sendFailRequest();

Once a connection error is encountered, it's largely up to you and the type of app you are building to determine what to do next, but there are several options to consider as your next course of action:

- Fail loudly and let the user know that their last action failed.
  It might not be terribly great for the user experience, but it might be the only sensible thing to do.
- Check whether a network connection is present; if not, hold on to the request until an online event is received and then send the request again. This makes sense only if the request you are sending is a request for data, not a request that changes data, as the data might have changed in the interim.

Summary

In this article, you learned how an app built using PhoneGap/Cordova reacts to changing network conditions, and how to handle the connectivity issues that you might encounter.

Resources for Article:

Further resources on this subject:
- Configuring the ChildBrowser plugin [article]
- Using Location Data with PhoneGap [article]
- Working with the sharing plugin [article]

Integrating with the Inventory and Ledger Modules

Packt
26 Dec 2014
17 min read
If you have come this far, you are most likely ready to figure out how to change or add functionality in standard AX. This article, written by Mohammed Rasheed, the author of Learning MS Dynamics AX 2012 Programming, takes you through some of the challenges you will face when trying to modify how some of the standard modules in AX behave. We will look at some typical problems that you might face when integrating with standard AX in the main modules of Dynamics AX:

- Inventory
- Ledger

(For more resources related to this topic, see here.)

The inventory module

The inventory module in AX contains the storage and setup of items and describes how they are stored in the physical inventory. It is also heavily integrated with most of the other modules: the production module has to know which items to produce and which items the production consists of; to create a sales order in the Accounts Receivable module, you need to know which items to sell; and so on.

The InventTable entity schema

The main entity of the inventory, as you have probably figured out, is the Product Information Management module. There are two main concepts that you need to understand: product and item. A product can be considered a definition (dare I call it a template) of an item. The product table is a shared table (which means it is not company specific) and it holds basic details about the product. A product is then released into a company, where it becomes an item (also referred to as a released product). The following two diagrams published by Microsoft illustrate the product data structure and key relations:

Not all fields have been captured in these diagrams. The following list gives a brief description of each of the tables shown in the preceding InventTable entity schema:

- EcoResProduct: This is a base table for the inherited tables that follow next.
- EcoResProductMaster: This table holds information on products that can have variants (color, size, and so on). Within the application, these products are referred to as product masters.
- EcoResDistinctProduct: A distinct product is effectively a product that does not have variants (it is not associated with the inventory dimensions color, size, and so on).
- InventTable: This table contains information about the released items.
- InventItemGroup: This table contains information about item groups.
- InventDimGroup: This table contains information about a dimension group.
- InventTableModule: This table contains information about purchase-, sales-, and inventory-specific settings for items.
- InventModelGroup: This table contains information about inventory model groups.
- InventItemSalesSetup: This table contains default settings for items, such as site and warehouse. The values are related to sales settings.
- InventItemPurchSetup: This table contains default settings for items, such as site and warehouse. The values are related to purchase settings.
- InventItemInventSetup: This table contains default settings for items, such as site and warehouse. The values are related to inventory settings.
- InventItemLocation: This table contains information about items and their related warehouse and counting settings. The settings can be made specific based on the item's configuration and can vary from warehouse to warehouse.
- InventDim: This table contains values for inventory dimensions.

The table descriptions below each entity schema in this article are taken from the Dynamics AX SDK, also known as the Developer Help.
As you can see, this diagram is very simplified, as I have left out most of the fields in the tables to make it more readable. The purpose of the diagram is to give you an idea of the relations between these tables. The diagram shows that items have tables linked to them that describe their behavior in purchase, inventory, and sales, and how the items should be treated with regard to financial posting and inventory posting.

The InventTrans entity schema

The InventTrans schema is another important one in the inventory module. Again, this is very simplified, with only the main entities and the fields that make up the primary and foreign keys, as shown in the diagram from a Microsoft whitepaper.

The following list gives a brief description of each of the tables shown in the preceding InventTrans entity schema:

- InventTransOrigin: This is a new table added in AX 2012 and is effectively used to join the InventTrans table to the originating transaction line (sales line, purchase line, inventory adjustment, and so on) via the InventTransOrigin tables.
- InventTransOrigin tables: There are multiple tables, such as InventTransOriginSalesline and InventTransOriginPurchLine, and they are fundamentally used to link the transaction's origin document to an inventory transaction record.
- InventTrans: This table contains information about inventory transactions. When order lines, such as sales order lines or purchase order lines, are created, they generate related records in this table. These records represent the flow of material that goes in and out of the inventory.
- InventTable: This table contains information about inventory items.
- InventSum: This table contains information about the present and expected on-hand stock of items. Expected on-hand stock is calculated by looking at the present on-hand values and adding whatever is on order (has been purchased but has not arrived yet).
- InventDim: This table contains the values of inventory dimensions.
- VendTable: This table contains information on vendors for Accounts Payable.
- CustTable: This table contains the list of customers for Accounts Receivable and customer relationship management.

Understanding the main class hierarchies

In addition to the entity schemas, it is important to know a little about the main class hierarchies within the inventory module. We use the classes that make up these hierarchies to validate, prepare, create, and change transactions.

The InventMovement classes

The InventMovement classes are used to validate and prepare the data that will be used to generate inventory transactions. The superclass in the hierarchy is the abstract class called InventMovement. All of the other classes in the hierarchy are prefixed with InventMov_. For example, InventMov_Sales is used to validate and prepare inventory transactions for sales lines, and InventMov_Transfer is used when dealing with inventory transfer journals. The following figure is taken from the type hierarchy browser of the InventMovement class and shows the whole class hierarchy under InventMovement:

The InventUpdate classes

The InventUpdate classes are used to insert and update inventory transactions. Whenever a transaction should be posted, the updateNow() method in the correct InventUpdate subclass will execute. The superclass in the hierarchy is the InventUpdate class, and the other classes in the hierarchy are prefixed with InventUpd_.
For example, InventUpd_Estimated is used whenever an item line is entered into the system that will most likely generate a physical transaction in the future. This can be, for example, a sales order line that has the on order sales status. When a line like this is entered or generated in AX, the InventUpd_Estimated class is triggered so that the inventory transactions reflect that the item is on order. The following figure shows the type hierarchy browser of the InventUpdate class and its subclasses:

The InventAdj classes

Whenever an adjustment to an inventory transaction takes place, the InventAdj classes are used. Adjustments typically occur when you are closing the inventory. The following figure shows the application hierarchy tree of the InventAdj class and its subclasses:

The InventSum classes

The InventSum classes are used to find on-hand information about a certain item at a certain date. The InventOnhand class is used to find the current on-hand information. The InventSum classes are not structured in a hierarchy like the previously mentioned classes.

Now that you have seen how some of the main tables in the inventory module are related to each other, we will discuss how to write code that integrates with the inventory module.

Working with inventory dimensions

One thing that you will have to learn sooner rather than later is how the inventory dimensions work. Basically, the InventDim table holds information about all the different dimensions that are related to an item. These dimensions can be divided into three types: item, storage, and tracking dimensions. By default, AX comes with the following dimensions:

- Four item dimensions: Color, Size, Style, and Configuration
- Six storage dimensions: Site, Warehouse, Location, Pallet, Inventory status, and License plate
- Five tracking dimensions: Batch, Serial, Owner, Inventory Profile, and GTD number

The number of dimensions visible within the application will vary based on the configuration keys enabled.

Finding an inventory dimension

When a combination of these dimensions is needed in a journal or a transaction, we always check to see whether the combination already exists. If it does, we link to that record in the InventDim table. If it doesn't exist, a new record is created in the InventDim table with the different dimensions filled out; then we can link to the new record. This is done by using a method called findOrCreate on the InventDim table. You can see how to use it in the following code:

static void findingDimension(Args _args)
{
    InventDim inventDim;

    // Set the values that you need for
    // the journal or transaction
    inventDim.InventLocationId = "01";
    inventDim.InventColorId = "02";

    // Use findOrCreate with the inventDim
    // variable that you have set values on
    inventDim = InventDim::findOrCreate(inventDim);

    // Use the inventDimId to link to the
    // InventDim record that was either
    // found or created
    info(inventDim.inventDimId);
}

Finding the current on-hand information

Another thing you should know is how to find how many items are available within a certain InventDim scope. You can do this by filling the dimension fields of an InventDim buffer and then specifying which dimension fields, or combination of dimension fields, you would like to get on-hand information on.
In the following example, we search for the on-hand information for an item in a specific color:

static void findingOnHandInfo(Args _args)
{
    ItemId          itemId;
    InventDim       inventDimCriteria;
    InventDimParm   inventDimParm;
    InventOnhand    inventOnhand;

    // Specify the item to get on-hand info on
    itemId = "1001";

    // Initialize inventOnhand
    inventOnhand = InventOnhand::newItemId(itemId);

    // Specify the dimensions you want
    // to filter the on-hand info on
    inventDimCriteria.InventColorId = "02";

    // Set the parameter flags active
    // according to which of the dimensions
    // in inventDimCriteria are set
    inventDimParm.initFromInventDim(inventDimCriteria);

    // Pass in the inventDim,
    // inventDimParm, and itemId
    inventOnhand.parmInventDim(inventDimCriteria);
    inventOnhand.parmInventDimParm(inventDimParm);
    inventOnhand.parmItemId(itemId);

    // Retrieve the on-hand info
    info(strfmt("Available Physical: %1", inventOnhand.availPhysical()));
    info(strfmt("On order: %1", inventOnhand.onOrder()));
}

You could easily narrow the information down to a specific warehouse by setting a warehouse (InventLocationId) on the inventDimCriteria buffer, as we did for the InventColorId value.

Finding on-hand information by a specific date

The following example will let you find the on-hand quantity of a specific item with a specific color (inventory dimension) at a specific date. The code is as follows:

static void findingOnHandByDate(Args _args)
{
    ItemId              itemId;
    InventDim           inventDimCriteria;
    InventDimParm       inventDimParm;
    InventSumDateDim    inventSumDateDim;

    // Specify the item to get on-hand info on
    itemId = "1001";

    // Specify the dimensions you want
    // to filter the on-hand info on
    inventDimCriteria.InventColorId = "02";

    // Set the parameter flags active
    // according to which of the dimensions
    // in inventDimCriteria are set
    inventDimParm.initFromInventDim(inventDimCriteria);

    // Pass the transaction date, inventDimCriteria,
    // inventDimParm, and itemId to receive a new object
    // of InventSumDateDim
    inventSumDateDim = InventSumDateDim::newParameters(mkdate(01,01,2014),
        itemId, inventDimCriteria, inventDimParm);

    // Retrieve the on-hand info using the methods
    // of InventSumDateDim
    info(strfmt("PostedQty: %1", inventSumDateDim.postedQty()));
    info(strfmt("DeductedQty: %1", inventSumDateDim.deductedQty()));
    info(strfmt("ReceivedQty: %1", inventSumDateDim.receivedQty()));
}

Entering and posting an inventory journal from code

One of the things you will need to know when dealing with the inventory module is how to automate journal entry and posting. This example will show you one way of doing this. This method is typically used when performing data migration.
The code is as follows:

static void enterPostInventJournal(Args _args)
{
    InventJournalName       inventJournalName;
    InventJournalTable      inventJournalTable;
    InventJournalTrans      inventJournalTrans;

    InventJournalTableData  inventJournalTableData;
    InventJournalTransData  inventJournalTransData;

    InventTable             inventTable;
    InventDim               inventDim;

    select firstonly inventTable; // find an item

    // find a movement journal name
    select firstonly inventJournalName
        where inventJournalName.JournalType == InventJournalType::Movement;

    // Initialize the values in the inventJournalTable
    // from the inventJournalName and insert the record
    inventJournalTable.clear();
    inventJournalTable.initValue();
    inventJournalTable.initFromInventJournalName(inventJournalName);
    inventJournalTable.insert();

    inventJournalTableData = JournalTableData::newTable(inventJournalTable);

    // Insert records into the inventJournalTrans table
    inventJournalTrans.clear();
    inventJournalTrans.initFromInventJournalTable(inventJournalTable);
    inventJournalTrans.TransDate = systemdateget();
    inventJournalTrans.initFromInventTable(inventTable);
    inventJournalTrans.Qty = 3;

    // Find the default dimension
    inventDim.initFromInventTable(
        inventJournalTrans.inventMovement().inventTable(),
        InventItemOrderSetupType::Invent, inventDim);

    // Set additional mandatory dimensions, based on the item selected
    //inventDim.InventColorId = "02";
    //inventDim.InventSizeId  = "50";
    //inventDim.configId      = "HD";

    // See if an inventDim with the selected values
    // already exists in the InventDim table. If not, it's
    // created automatically
    inventDim = InventDim::findOrCreate(inventDim);
    inventJournalTrans.InventDimId = inventDim.inventDimId;

    inventJournalTransData = inventJournalTableData.journalStatic()
        .newJournalTransData(inventJournalTrans, inventJournalTableData);
    inventJournalTransData.create();

    // Use the InventJournalCheckPost class to
    // post the journal
    if (InventJournalCheckPost::newPostJournal(inventJournalTable).validate())
        InventJournalCheckPost::newPostJournal(inventJournalTable).run();
}

The Ledger module

The Ledger module (also known as the general ledger) is where all financial transactions in AX are controlled and stored. All of the other modules are connected to the Ledger module in some way, as it is the money central of AX. The main entity of the Ledger module is the Ledger table, which consists of the chart of accounts. In addition, there are transaction tables and other tables related to the Ledger table and, hopefully, you will understand how some of these tables relate to each other by taking a look at the following entity schema from Microsoft:

The following list gives a brief description of each of the tables shown in the preceding Ledger entity schema:

- Ledger: This table contains the definitions of the general ledger accounts.
- GeneralJournalAccountEntry: This table contains information about the general ledger entry.
- GeneralJournalEntry: This table contains transaction metadata such as the posting layer, transaction date, and so on.
- SubledgerVoucherGeneralJournalEntry: This table links the subledger voucher to the general ledger entry.
- LedgerEntry: This table contains general ledger bank transaction information.

Posting ledger transactions

There are two ways of posting ledger transactions:

- Ledger journal: Enter data into a journal and post the journal using the LedgerJournalCheckPost class.
- Ledger voucher: Create a list of vouchers (LedgerVoucherObject) and post them using the LedgerVoucher class.

Entering and posting a LedgerJournal class

The easiest way to post a ledger transaction is by using the LedgerJournal classes. This is similar to the example in the previous section of this article, where you learned how to fill the inventory journal with data and then post the journal from code. In the following example, we will post a ledger transaction by filling LedgerJournalTable and LedgerJournalTrans with data and posting the journal using the LedgerJournalCheckPost class:

static void enterPostLedgerJournal(Args _args)
{
    LedgerJournalName       ledgerJournalName;
    LedgerJournalTable      ledgerJournalTable;
    LedgerJournalTrans      ledgerJournalTrans;

    LedgerJournalCheckPost  ledgerJournalCheckPost;
    container               ledgerDimensions, offsetDimensions;
    Voucher                 voucher;

    // Set ledger account information
    ledgerDimensions = ["22322", "Chicago"];
    offsetDimensions = ["22321", "London"];

    // You MUST have tts around the code, or else
    // the numberSeq will generate an error while
    // trying to find the next available voucher
    ttsbegin;

    // Specify which ledger journal to use
    ledgerJournalName = LedgerJournalName::find("GenJrn");

    // Initialize the values in the ledgerJournalTable
    // from the ledgerJournalName and insert the record
    ledgerJournalTable.initFromLedgerJournalName(ledgerJournalName.JournalName);
    ledgerJournalTable.insert();

    // Find the next available voucher number
    voucher = new JournalVoucherNum(
        JournalTableData::newTable(ledgerJournalTable)).getNew(false);

    ledgerJournalTrans.voucher = voucher;
    ledgerJournalTrans.initValue();

    // Set the necessary fields in the ledgerJournalTrans
    ledgerJournalTrans.JournalNum      = ledgerJournalTable.JournalNum;
    ledgerJournalTrans.currencyCode    = 'USD';
    ledgerJournalTrans.ExchRate        = Currency::exchRate(ledgerJournalTrans.currencyCode);
    ledgerJournalTrans.LedgerDimension = AxdDimensionUtil::getLedgerAccountId(ledgerDimensions);
    ledgerJournalTrans.AccountType     = LedgerJournalACType::Ledger;
    ledgerJournalTrans.AmountCurDebit  = 1220.00;
    ledgerJournalTrans.TransDate       = systemdateget();
    ledgerJournalTrans.Txt             = 'Transfer to UK';
    ledgerJournalTrans.OffsetDefaultDimension = AxdDimensionUtil::getLedgerAccountId(offsetDimensions);

    // Insert the record into the ledgerJournalTrans table
    ledgerJournalTrans.insert();

    // Use the LedgerJournalCheckPost class to
    // post the journal
    ledgerJournalCheckPost = LedgerJournalCheckPost::construct(LedgerJournalType::Daily);

    // Set the JournalNum of the journal table
    // and specify that the journal should be posted (not only checked)
    ledgerJournalCheckPost.parmJournalNum(ledgerJournalTable.JournalNum);
    ledgerJournalCheckPost.parmPost(NoYes::Yes);
    ledgerJournalCheckPost.run();

    ttscommit;
}

You can also have AX check and validate the content of the journal before posting it. This can be a good idea, especially when the data originates from a different solution—for instance, when migrating data from the previous system into AX.

Entering and posting a LedgerVoucher class

Using the LedgerVoucher classes to post ledger transactions means that you first create vouchers and then post them. This is more controlled and similar to the traditional way of dealing with the posting of ledger transactions. The method is based on having a LedgerVoucher object to which you can add multiple vouchers (LedgerVoucherObject).
For each voucher, you typically have two transactions (LedgerVoucherTransObject), one for debit and one for credit. You can of course have more transactions in a voucher, but they have to balance: if you add the amounts of all the credit transactions and all the debit transactions, the result has to be 0. When all transactions have been added to a voucher, and all vouchers have been added to the LedgerVoucher object, you can simply call the ledgerVoucher.end() method in order to validate and post the voucher. The code is as follows:

static void createAndPostLedgerVoucher(Args _args)
{
    AxLedgerJournalTable    ledgerJournalTable;
    AxLedgerJournalTrans    ledgerJournalTrans;

    container ledgerDimensions, offsetDimensions;

    // Set ledger account information
    ledgerDimensions = ["22322", "Chicago"];
    offsetDimensions = ["22321", "London"];

    ledgerJournalTable = new AxLedgerJournalTable();
    ledgerJournalTrans = new AxLedgerJournalTrans();

    ledgerJournalTable.parmJournalName("Gen");
    ledgerJournalTable.save();

    ledgerJournalTrans.parmAccountType(LedgerJournalACType::Ledger);
    ledgerJournalTrans.parmJournalNum(
        ledgerJournalTable.ledgerJournalTable().JournalNum);

    ledgerJournalTrans.parmLedgerDimension(
        AxdDimensionUtil::getLedgerAccountId(ledgerDimensions));
    ledgerJournalTrans.parmAmountCurDebit(500);

    ledgerJournalTrans.parmOffsetLedgerDimension(
        AxdDimensionUtil::getLedgerAccountId(offsetDimensions));

    ledgerJournalTrans.end();
}

Summary

In this article, you have seen how to trigger from code standard functionality in AX that is normally invoked manually. Being able to automate processes and extend the standard functionality will most likely be something that you do over and over again if you work with customer development. You should now know how the main tables and the transactional tables in the inventory and Ledger modules are related to each other. In the inventory section, you learned how to find an inventory dimension, how to find the current inventory stock for an item, how to find the inventory stock for an item at a specific date, and how to enter and post an inventory journal. In the The Ledger module section, you learned how to use financial dimensions in code and how to enter and post a ledger journal.

Resources for Article:

Further resources on this subject:
- Getting Started with Microsoft Dynamics CRM 2013 Marketing [article]
- Consuming Web Services using Microsoft Dynamics AX [article]
- Microsoft DAC 2012 [article]


Introduction to Veeam® ONE™ Business View

Packt
24 Dec 2014
3 min read
In this article, by Kevin L. Sapp, author of the book Managing Virtual Infrastructure with Veeam® ONE™, we will have a look at how Veeam® ONE™ Business View allows you to group and manage your virtual infrastructure in business containers. This is helpful for splitting machines by function, priority, or any other descriptive category you would like. Veeam® ONE™ Business View displays categorized information about VMs, clusters, and hosts in business terms. This perspective allows you to plan, control, and analyze changes in the virtual environment. We will also have a look at data collection.

(For more resources related to this topic, see here.)

Data collection

The data required to create the business topology is periodically collected from the connected virtual infrastructure servers. The data collection usually runs at a scheduled interval; however, you can also run it manually. By default, after a virtual infrastructure server is connected to Veeam® ONE™, the collection is scheduled to run on weekdays at 2 a.m. If required, you can adjust the data collection schedule or switch to the manual collection mode to start each data collection session manually.

Scheduling the data collection

The best way to automate the collection of data is by creating a schedule for a specific VM server. To change the collection mode to Scheduled and to specify the time settings, use the following steps:

1. Open the Veeam® ONE™ Business View web application by either double-clicking on the desktop icon or connecting to the Veeam® ONE™ server using a browser (the URL is http://servername:1340 by default).
2. Click on the Configuration link located in the upper-right corner of the screen.
3. Click on the VI Management Servers menu option located on the left-hand side of the screen.
4. Select the Run mode option for the server whose schedule you would like to change.
5. While scheduling the data collection for the VM server, select the Periodically every option if you plan to run the data collection at a desired interval, or select the Daily at this time option if you plan to run the data collection at a specific time of the day or week.
6. Once the schedule has been created, click on OK.

Collecting data manually

The following steps are needed to perform a manual collection of the virtual environment data. Use this procedure to collect data manually:

1. Click on the Session History menu item on the left-hand side of the screen.
2. Click on the Run Now button for the server on which you wish to run the data collection manually. The data collection normally takes a few minutes to run; however, this can vary based on the size and complexity of your infrastructure.
3. View the details of the session data by clicking on the server in the list shown in Session History.

Summary

In this article, we explained Veeam® ONE™ Business View. We discussed the steps needed to plan, control, and analyze changes in the virtual environment.

Resources for Article:

Further resources on this subject:
- Configuring vShield App [article]
- Backups in the VMware View Infrastructure [article]
- Introduction to Veeam® Backup & Replication for VMware [article]


Using Frameworks

Packt
24 Dec 2014
24 min read
In this article by Alex Libby, author of the book Responsive Media in HTML5, we will cover the following topics:

- Adding responsive media to a CMS
- Implementing responsive media in frameworks such as Twitter Bootstrap
- Using the Less CSS preprocessor to create CSS media queries

Ready? Let's make a start!

(For more resources related to this topic, see here.)

Introducing our three examples

Throughout this article, we've covered a number of simple, practical techniques to make media responsive within our sites—these are good, but nothing beats seeing these principles used in a real-world context, right? Absolutely! To prove this, we're going to look at three examples throughout this article, using technologies that you are likely to be familiar with: WordPress, Bootstrap, and Less CSS. Each demo will assume a certain level of prior knowledge, so it may be worth reading up a little first. In all three cases, we should see that, with little effort, we can easily add responsive media to each of these technologies. Let's kick off with a look at working with WordPress.

Adding responsive media to a CMS

We will begin the first of our three examples with a look at using the ever-popular WordPress system. Created back in 2003, WordPress has been used to host sites by small independent traders all the way up to Fortune 500 companies—this includes some of the biggest names in business, such as eBay, UPS, and Ford. WordPress comes in two flavors; the one we're interested in is the self-install version available at http://www.wordpress.org.

This example assumes you have a local installation of WordPress installed and working; if not, then head over to http://codex.wordpress.org/Installing_WordPress and follow the tutorial to get started. We will also need a DOM inspector such as Firebug installed, if you don't already have one; it can be downloaded from http://www.getfirebug.com. If you only have access to WordPress.com (the other flavor of WordPress), then some of the tips in this section may not work, due to limitations in that version of WordPress.

Okay, assuming we have WordPress set up and running, let's make a start on making uploaded media responsive.

Adding responsive media manually

It's at this point that you're probably thinking we have to do something complex when working in WordPress, right? Wrong! As long as you use the Twenty Fourteen core theme, the work has already been done for you. For this exercise, and the following sections, I will assume you have installed and/or activated WordPress' Twenty Fourteen theme.

Don't believe me? It's easy to verify: try uploading an image to a post or page in WordPress. Resize the browser—you should see the image shrink or grow in size as the browser window changes size. If we take a look at the code elsewhere using Firebug, we can also see height: auto set against a number of the img tags; this is frequently done for responsive images to ensure they maintain the correct proportions.

The responsive style seems to work well in the Twenty Fourteen theme; if you are using an older theme, we can easily apply the same style rule to images stored in WordPress when using that theme.

Fixing a responsive issue

So far, so good. Now, we have the Twenty Fourteen theme in place, we've uploaded images of various sizes, and we try resizing the browser window... only to find that the images don't seem to grow in size above a certain point. At least not well—what gives?
Fixing a responsive issue

So far, so good. Now, we have the Twenty Fourteen theme in place, we've uploaded images of various sizes, and we try resizing the browser window... only to find that the images don't seem to grow in size above a certain point. At least not well—what gives?

Well, it's a classic trap: we've talked about using percentage values to dynamically resize images, only to find that we've shot ourselves in the foot (proverbially speaking, of course!). The reason? Let's dive in and find out using the following steps:

1. Browse to your WordPress installation and activate Firebug using F12.
2. Switch to the HTML tab and select your preferred image.
3. In Firebug, look for the <header class="entry-header"> line, then look for the following line in the rendered styles on the right-hand side of the window:

    .site-content .entry-header, .site-content .entry-content,
    .site-content .entry-summary, .site-content .entry-meta,
    .page-content {
      margin: 0 auto;
      max-width: 474px;
    }

4. The keen-eyed amongst you should hopefully spot the issue straightaway—we're using percentages to make the sizes dynamic for each image, yet we're constraining its parent container! To fix this, change the highlighted line as indicated:

    .site-content .entry-header, .site-content .entry-content,
    .site-content .entry-summary, .site-content .entry-meta,
    .page-content {
      margin: 0 auto;
      max-width: 100%;
    }

5. To balance the content, we need to make the same change to the comments area. So go ahead and change max-width to 100% as follows:

    .comments-area {
      margin: 48px auto;
      max-width: 100%;
      padding: 0 10px;
    }

If we try resizing the browser window now, we should see the image size adjust automatically. At this stage, the change is not permanent. To fix this, log in to WordPress' admin area, go to Appearance | Editor, and add the adjusted styles at the foot of the Stylesheet (style.css) file. Let's move on. Did anyone notice two rather critical issues with the approach used here? Hopefully, you spotted these:

- If a large image is used and then resized to a smaller size, we're still working with large files.
- The alteration we're making has a big impact on the theme, even though it is only a small change. Even though it proves that we can make images truly responsive, it is the kind of change that we would not necessarily want to make without careful consideration and plenty of testing.

We can improve on this. Making changes directly to the CSS style sheet is not ideal, as they could be lost when upgrading to a newer version of the theme. A better option is either to use a custom CSS plugin to manage these changes, or (better) to use a plugin that tells WordPress to swap an existing image for a smaller one automatically if we resize the window to a smaller size.
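Before reaching for a plugin, it's worth seeing what this swap looks like in raw CSS. The following is a minimal, hand-rolled sketch; the breakpoint, class name, and image paths are purely illustrative:

    /* Serve a smaller banner below a chosen breakpoint;
       a plugin automates this decision for every image */
    .hero-banner {
      background-image: url('img/banner-large.jpg');
    }

    @media screen and (max-width: 480px) {
      .hero-banner {
        background-image: url('img/banner-small.jpg');
      }
    }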
Using plugins to add responsive images

A drawback, though, of using a theme such as Twenty Fourteen is the resizing of images. While we can grow or shrink an image when resizing the browser window, we are still technically altering the size of what could potentially be an unnecessarily large image! This is considered bad practice (and also bad manners!); browsing on a desktop with a fast Internet connection might not have too much of an impact, but the same cannot be said for mobile devices, where we have less choice. To overcome this, we need to take a different approach: get WordPress to automatically swap in smaller images when we reach a particular size or breakpoint. Instead of doing this manually using code, we can take advantage of one of the many plugins available that offer responsive capabilities in some format. I feel a demo coming on. Now's a good time to take a look at one such plugin in action:

1. Let's start by downloading our plugin. For this exercise, we'll use the PictureFill.WP plugin by Kyle Ricks, which is available at https://wordpress.org/plugins/picturefillwp/. We're going to use the version that uses Picturefill.js version 2. This is available to download from https://github.com/kylereicks/picturefill.js.wp/tree/master; click on Download ZIP to get the latest version.
2. Log in to the admin area of your WordPress installation and click on Settings and then Media. Make sure your image settings for Thumbnail, Medium, and Large sizes are set to values that work with useful breakpoints in your design.
3. We then need to install the plugin. In the admin area, go to Plugins | Add New to install the plugin and activate it in the normal manner. At this point, we will have installed responsive capabilities in WordPress—everything is managed automatically by the plugin; there is no need to change any settings (except maybe the image sizes we talked about in step 2).
4. Switch back to your WordPress frontend and try resizing the screen to a smaller size.
5. Press F12 to activate Firebug and switch to the HTML tab. Press Ctrl + Shift + C (or Cmd + Shift + C for Mac users) to toggle the element inspector, then move your mouse over your resized image. If we've set the right image sizes in WordPress' admin area and the window is resized correctly, the inspector will show that a smaller image has been swapped in.
6. To confirm we are indeed using a smaller image, right-click on the image and select View Image Info to check its dimensions and file size.

We should now have a fully functioning plugin within our WordPress installation. A good tip is to test this thoroughly, if only to ensure we've set the right sizes for our breakpoints in WordPress!

What happens if WordPress doesn't refresh my thumbnail images properly? This can happen. If you find it happening, install the Regenerate Thumbnails plugin to resolve the issue; it's available at https://wordpress.org/plugins/regenerate-thumbnails/.

Adding responsive videos using plugins

Now that we can add responsive images to WordPress, let's turn our attention to videos. The process of adding them is a little more complex; we need to use code to achieve the best effect. Let's examine our options.

If you are hosting your own videos, the simplest way is to add some additional CSS style rules. Although this method removes any reliance on JavaScript or jQuery, the result isn't perfect and will need additional styles to handle the repositioning of the play button overlay. Although we are working locally, we should remember the note from earlier in this article: changes to the CSS style sheet may be lost when upgrading, so a custom CSS plugin should be used, if possible, to retain any changes.

A CSS-only solution only requires a couple of steps:

1. Browse to your WordPress theme folder and open a copy of styles.css in your text editor of choice.
2. Add the following lines at the end of the file and save it:

    video { width: 100%; height: 100%; max-width: 100%; }
    .wp-video { width: 100% !important; }
    .wp-video-shortcode { width: 100% !important; }

3. Close the file. You now have the basics in place for responsive videos.

At this stage, you're probably thinking, "great, my videos are now responsive. I can handle the repositioning of the play button overlay myself, no problem"; sounds about right? Thought so, and therein lies the main drawback of this method! Repositioning the overlay shouldn't be too difficult.
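As a rough illustration, recentering the play button could be as simple as the following sketch. This assumes WordPress's default MediaElement.js player, and the class name may vary between versions, so treat it as a starting point only:

    /* Keep the play button centered as the video scales */
    .mejs-overlay-button {
      position: absolute;
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%);
    }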
The real problem is the high cost of the hardware and bandwidth needed to host videos of any reasonable quality; even if we were to spend time repositioning the overlay, those costs would outweigh any benefit of using a CSS-only solution. A far better option is to let a service such as YouTube do all the hard work for you and to simply embed your chosen video directly from YouTube into your pages. The main benefit of this is that YouTube's servers do all the hard work for you. You can take advantage of an increased audience, and YouTube will automatically optimize the video for the best resolution possible for the Internet connections being used by your visitors.

Although aimed at beginners, wpbeginner.com has a useful article, located at http://www.wpbeginner.com/beginners-guide/why-you-should-never-upload-a-video-to-wordpress/, on the pros and cons of self-hosting videos and why using an external service is preferable.

Using plugins to embed videos

Embedding videos from an external service into WordPress is ironically far simpler than using the CSS method. There are dozens of plugins available to achieve this, but one of the simplest to use (and my personal favorite) is FluidVids, by Todd Motto, available at http://github.com/toddmotto/fluidvids/. To get it working in WordPress, we need to follow these steps, using a video from YouTube as the basis for our example:

1. Browse to your WordPress theme folder and open a copy of functions.php in your usual text editor.
2. At the bottom, add the following lines:

    add_action( 'wp_enqueue_scripts', 'add_fluidvid' );

    function add_fluidvid() {
      wp_enqueue_script( 'fluidvids',
        get_stylesheet_directory_uri() . '/lib/js/fluidvids.js',
        array(), false, true );
    }

3. Save the file, then log in to the admin area of your WordPress installation.
4. Navigate to Posts | Add New to add a post, switch to the Text tab of your Post Editor, then add http://www.youtube.com/watch?v=Vpg9yizPP_g&hd=1 to the editor on the page.
5. Click on Update to save your post, then click on View post to see the video in action.

There is no need to further configure WordPress—any video added from services such as YouTube or Vimeo will be automatically set as responsive by the FluidVids plugin. At this point, try resizing the browser window. If all is well, we should see the video shrink or grow in size, depending on how the browser window has been resized. To prove that the code is working, we can take a peek at the compiled results within Firebug, where the markup injected by FluidVids will be visible around the iframe.

For those of us who are not feeling quite so brave (!), there is fortunately a WordPress plugin available that will achieve the same results, without configuration. It's available at https://wordpress.org/plugins/fluidvids/ and can be downloaded and installed using the normal process for WordPress plugins.
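Incidentally, if you ever use FluidVids outside WordPress, it needs a small initialization call once the script has loaded. At the time of writing, the version 2 syntax looks something like the following; check the project's README for the current options:

    // Tell FluidVids which elements and providers to make responsive
    fluidvids.init({
      selector: ['iframe', 'object'],
      players: ['www.youtube.com', 'player.vimeo.com']
    });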
Let's change track and move on to our next demo. I feel a need to get stuck into some coding, so let's take a look at how we can implement responsive images in frameworks such as Bootstrap.

Implementing responsive media in Bootstrap

A question—as developers, hands up if you have not heard of Bootstrap? Good—not too many hands going up! Why have I asked this question, I hear you say? Easy—it's to illustrate that in popular frameworks (such as Bootstrap), it is easy to add basic responsive capabilities to media, such as images or video. The exact process may differ from framework to framework, but the result is likely to be very similar.

To see what I mean, let's take a look at using Bootstrap for our second demo, where we'll see just how easy it is to add images and video to our Bootstrap-enabled site. If you would like to explore some of the free Bootstrap templates that are available, then http://www.startbootstrap.com/ is well worth a visit!

Using Bootstrap's CSS classes

Making images and videos responsive in Bootstrap uses a slightly different approach to what we've examined so far; we don't have to define each style property explicitly, but instead simply add the appropriate class to the media HTML for it to render responsively. For the purposes of this demo, we'll use an edited version of the Blog Page example, available at http://www.getbootstrap.com/getting-started/#examples; a copy of the edited version is available in the code download that accompanies this article.

Before we begin, go ahead and download a copy of the Bootstrap Example folder from the code download. Inside, you'll find the CSS, image, and JavaScript files needed, along with our HTML markup file. Let's make a start on our example using the following steps:

1. Open up bootstrap.html and look for the following lines (around lines 34 to 35):

    <p class="blog-post-meta">January 1, 2014 by <a href="#">Mark</a></p>
    <p>This blog post shows a few different types of content that's supported and styled with Bootstrap. Basic typography, images, and code are all supported.</p>

2. Immediately below, add the following code—this contains the markup for our embedded video, using Bootstrap's responsive CSS styling:

    <div class="bs-example">
      <div class="embed-responsive embed-responsive-16by9">
        <iframe allowfullscreen="" src="http://www.youtube.com/embed/zpOULjyy-n8?rel=0" class="embed-responsive-item"></iframe>
      </div>
    </div>

3. With the video now styled, let's go ahead and add an image—this will go in the About section on the right. Look for these lines, on or around lines 74 and 75:

    <h4>About</h4>
    <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet fermentum. Aenean lacinia bibendum nulla sed consectetur.</p>

4. Immediately below, add the following markup for our image:

    <a href="#" class="thumbnail">
      <img src="http://placehold.it/350x150" class="img-responsive">
    </a>

5. Save the file and preview the results in a browser.

If all is well, we can see our video and image appear. At this point, try resizing the browser—you should see the video and placeholder image shrink or grow as the window is resized. The great thing about Bootstrap is that the right styles have already been set for each class. All we need to do is apply the correct class to the appropriate media file—.embed-responsive embed-responsive-16by9 for videos or .img-responsive for images—for that image or video to behave responsively within our site. In this example, we used Bootstrap's .img-responsive class in the code; if we have a lot of images, we could consider using img { max-width: 100%; height: auto; } instead.
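It's worth knowing that Bootstrap 3 (from version 3.2) also ships with a 4:3 embed helper for older content; usage is identical to the 16:9 version, so a sketch like the following should work as-is:

    <!-- A 4:3 responsive embed, for older video content -->
    <div class="embed-responsive embed-responsive-4by3">
      <iframe class="embed-responsive-item" src="http://www.youtube.com/embed/zpOULjyy-n8?rel=0"></iframe>
    </div>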
So far, we've worked with two popular examples of frameworks in the form of WordPress and Bootstrap. This is great, but it can mean getting stuck into a lot of CSS styling, particularly if we're working with media queries, as we saw earlier in the article! Can we do anything about this? Absolutely! It's time for a brief look at CSS preprocessing and how this can help with adding responsive media to our pages.

Using Less CSS to create responsive content

Working with frameworks often means getting stuck into a lot of CSS styling; this can become awkward to manage if we're not careful! To help with this, and for our third scenario, we're going back to basics to work on an alternative way of rendering CSS using the Less CSS preprocessing language. Why? Well, as a superset (or extension) of CSS, Less allows us to write our styles more efficiently; it then compiles them into valid CSS. The aim of this example is to show that if you're already using Less, we can still apply the same principles that we've covered throughout this article to make our content responsive. It should be noted that this exercise does assume a certain level of prior experience using Less; if this is your first time, you may like to peruse my article, Learning Less, by Packt Publishing.

There are a few steps involved in making the changes, and once we've finished, the page will look no different from its plain CSS counterpart. If you're thinking the end result is identical to a page styled with ordinary CSS, you would be right: if we play our cards right, there should indeed be no change in appearance, because working with Less is all about writing CSS more efficiently. Let's see what is involved:

1. Start by extracting a copy of the Less CSS example from the code download that accompanies this article; inside it, we'll find our HTML markup, reset style sheet, images, and the video needed for our demo. Save the folder locally to your PC.
2. Next, add the following styles in a new file, saving it as responsive.less in the css subfolder—we'll start with some of the styling for the base elements, such as the video and banner:

    #wrapper {
      width: 96%; max-width: 45rem; margin: auto; padding: 2%;
    }
    #main {
      width: 60%; margin-right: 5%; float: left;
    }
    #video-wrapper video {
      max-width: 100%;
    }
    #banner {
      background-image: url('../img/abstract-banner-large.jpg');
      height: 15.31rem; width: 45.5rem; max-width: 100%;
      float: left; margin-bottom: 15px;
    }
    #skipTo {
      display: none;
      li { background: #197a8a; }
    }
    p { font-family: "Droid Sans", sans-serif; }
    aside { width: 35%; float: right; }
    footer {
      border-top: 1px solid #ccc; clear: both;
      height: 30px; padding-top: 5px;
    }

3. We need to add some basic formatting styles for images and links, so go ahead and add the following, immediately below the #skipTo rule:

    a { text-decoration: none; text-transform: uppercase; }
    a, img {
      border: medium none; color: #000;
      font-weight: bold; outline: medium none;
    }

4. Next up comes the navigation for our page. These styles control the main navigation and the Skip To… link that appears when viewed on smaller devices. Go ahead and add these style rules immediately below the rules for a and img:

    header {
      font-family: 'Droid Sans', sans-serif;
      h1 {
        height: 70px; float: left; display: block;
        font-weight: 700; font-size: 2rem;
      }
      nav {
        float: right; margin-top: 40px; height: 22px;
        border-radius: 4px;
        li { display: inline; margin-left: 15px; }
        ul { font-weight: 400; font-size: 1.1rem; }
        a {
          padding: 5px;
          &:hover {
            background-color: #27a7bd; color: #fff;
            border-radius: 4px;
          }
        }
      }
    }

5. We need to add the media query that controls the display for smaller devices, so go ahead and add the following to a new file and save it as media.less in the css subfolder.
We'll start by setting the screen size for our media query:

    @smallscreen: ~"screen and (max-width: 30rem)";

    @media @smallscreen {
      p { font-family: "Droid Sans", sans-serif; }

      #main, aside { margin: 0 0 10px; width: 100%; }

      #banner {
        margin-top: 150px; height: 4.85rem;
        max-width: 100%; width: 45.5rem;
        background-image: url('../img/abstract-banner-medium.jpg');
      }

6. Next up comes the media query rule that will handle the Skip To… link at the top of our resized window:

      #skipTo {
        display: block; height: 18px;
        a {
          display: block; text-align: center;
          color: #fff; font-size: 0.8rem;
          &:hover {
            background-color: #27a7bd;
            border-radius: 0; height: 20px;
          }
        }
      }

7. We can't forget the main navigation, so go ahead and add the following code immediately below the block for #skipTo (the final closing brace ends the @media block):

      header {
        h1 { margin-top: 20px; }
        nav {
          float: left; clear: left;
          margin: 0 0 10px; width: 100%;
          li {
            margin: 0; background: #efefef;
            display: block; margin-bottom: 3px; height: 40px;
          }
          a {
            display: block; padding: 10px;
            text-align: center; color: #000;
            &:hover {
              background-color: #27a7bd; border-radius: 0;
              padding: 10px; height: 20px;
            }
          }
        }
      }
    }

At this point, we should compile the Less style sheet before previewing the results of our work. If we launch responsive.html in a browser, we'll see our mocked-up portfolio page appear as we saw at the beginning of the exercise. If we resize the screen to its minimum width, the responsive design kicks in to reorder and resize elements on screen, as we would expect to see. Okay, so we now have a responsive page that uses Less CSS for styling; it still seems like a lot of code, right?

Working through the code in detail

Although this seems like a lot of code for a simple page, the principles we've used are in fact very simple and are the ones we already used earlier in the article. Not convinced? Well, let's look at it in more detail—the focus of this article is on responsive images and video, so we'll start with video. Open the responsive.less style sheet and look for the #video-wrapper video rule:

    #video-wrapper video { max-width: 100%; }

Notice how it's set to a max-width value of 100%? Granted, we don't want to resize a large video to a really small size—we would use a media query to replace it with a smaller version. But, for most purposes, max-width should be sufficient.

Now, for the image, this is a little more complicated. Let's start with the code from responsive.less:

    #banner {
      background-image: url('../img/abstract-banner-large.jpg');
      height: 15.31rem; width: 45.5rem; max-width: 100%;
      float: left; margin-bottom: 15px;
    }

Here, we used the max-width value again. In both instances, we can style the element directly, unlike videos, where we have to add a container in order to style them. The theme continues in the media query set up in media.less:

    @smallscreen: ~"screen and (max-width: 30rem)";

    @media @smallscreen {
      ...
      #banner {
        margin-top: 150px;
        background-image: url('../img/abstract-banner-medium.jpg');
        height: 4.85rem; width: 45.5rem; max-width: 100%;
      }
      ...
    }

In this instance, we're styling the element to cover the width of the viewport.
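If you're curious about what the preprocessor actually produces here, it helps to compare one nested block against its compiled output. The following is a sketch of what the #skipTo rules from media.less should compile to: Less simply expands the nesting, resolves the & parent selector, and substitutes the escaped @smallscreen variable:

    /* Compiled CSS (abridged) */
    @media screen and (max-width: 30rem) {
      #skipTo { display: block; height: 18px; }
      #skipTo a {
        display: block; text-align: center;
        color: #fff; font-size: 0.8rem;
      }
      #skipTo a:hover {
        background-color: #27a7bd;
        border-radius: 0; height: 20px;
      }
    }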
A small point of note: you might ask why we are using rem values instead of percentage values when styling our image. This is a good question—the key to it is that pixel values do not scale well in responsive designs, whereas rem values scale beautifully. We could use percentage values if we're so inclined, although they are best suited to instances where we need to fill a container that only covers part of the screen (as we did with the video for this demo). An interesting article extolling the virtues of rem units is available at http://techtime.getharvest.com/blog/in-defense-of-rem-units - it's worth a read. Of particular note is a known bug with using rem values in Mobile Safari, which should be considered when developing for mobile platforms; with all of the iPhones available, Mobile Safari's usage could be said to be higher than Firefox's! For more details, head over to http://wtfhtmlcss.com/#rems-mobile-safari.

Transferring to production use

Throughout this exercise, we used Less to compile our styles on the fly each time. This is okay for development purposes, but is not recommended for production use. Once we've worked out the requisite styles needed for our site, we should always look to precompile them into valid CSS before uploading the results to our site. There are a number of options available for this purpose; two of my personal favorites are Crunch!, available at http://www.crunchapp.net, and the Less2CSS plugin for Sublime Text, available at https://github.com/timdouglas/sublime-less2css. You can learn more about precompiling Less code from my new article, Learning Less.js, by Packt Publishing.

Summary

Wow! We've certainly covered a lot; it shows that adding basic responsive capabilities to media need not be difficult. Let's take a moment to recap what you learned. We kicked off this article with an introduction to three real-world scenarios that we would then cover. Our first scenario looked at using WordPress. We covered how, although we can add simple CSS styling to make images and videos responsive, the preferred method is to use one of the several plugins available to achieve the same result. Our next scenario visited the all too familiar framework known as Twitter Bootstrap. In comparison, we saw that this is a much easier framework to work with, in that styles have been predefined and all we needed to do was add the right class to the right selector. Our third and final scenario went completely the opposite way, with a look at using the Less CSS preprocessor to handle the styles that we would otherwise have created manually. We saw how easy it was to rework the styles we originally created earlier in the article to produce a more concise and efficient version that compiled into valid CSS with no apparent change in design.

We've now reached the end of the book; all good things must come to an end at some point! Nonetheless, I hope you've enjoyed reading it as much as I have enjoyed writing it. Hopefully, I've shown that adding responsive media to your sites need not be as complicated as it might first look, and that this gives you a good grounding to develop something more complex using responsive media.

Resources for Article:

Further resources on this subject:
- Styling the Forms [article]
- CSS3 Animation [article]
- Responsive image sliders [article]
Cassandra High Availability: Replication

Packt
24 Dec 2014
5 min read
This article by Robbie Strickland, the author of Cassandra High Availability, describes the data replication architecture used in Cassandra. Replication is perhaps the most critical feature of a distributed data store, as it would otherwise be impossible to make any sort of availability guarantee in the face of a node failure. As you already know, Cassandra employs a sophisticated replication system that allows fine-grained control over replica placement and consistency guarantees. In this article, we'll explore Cassandra's replication mechanism in depth. Let's start with the basics: how Cassandra determines the number of replicas to be created and where to locate them in the cluster. We'll begin the discussion with a feature that you'll encounter the very first time you create a keyspace: the replication factor. (For more resources related to this topic, see here.)

The replication factor

On the surface, setting the replication factor seems to be a fundamentally straightforward idea. You configure Cassandra with the number of replicas you want to maintain (during keyspace creation), and the system dutifully performs the replication for you, thus protecting you when something goes wrong. So, by defining a replication factor of three, you will end up with a total of three copies of the data. There are, however, a number of variables in this equation. Let's start with the basic mechanics of setting the replication factor.

Replication strategies

One thing you'll quickly notice is that the semantics used to set the replication factor depend on the replication strategy you choose. The replication strategy tells Cassandra exactly how you want replicas to be placed in the cluster. There are two strategies available:

- SimpleStrategy: This strategy is used for single data center deployments. It is fine to use this for testing, development, or simple clusters, but it is discouraged if you ever intend to expand to multiple data centers (including virtual data centers such as those used to separate analysis workloads).
- NetworkTopologyStrategy: This strategy is used when you have multiple data centers, or if you think you might have multiple data centers in the future. In other words, you should use this strategy for your production cluster.

SimpleStrategy

As a way of introducing this concept, we'll start with an example using SimpleStrategy. The following Cassandra Query Language (CQL) block will allow us to create a keyspace called AddressBook with three replicas:

    CREATE KEYSPACE AddressBook
    WITH REPLICATION = {
      'class' : 'SimpleStrategy',
      'replication_factor' : 3
    };

The data is assigned to a node via a hash algorithm, resulting in each node owning a range of data. Let's take another look at the placement of our example data on the cluster. Remember that the keys are first names, and we determined the hash using the Murmur3 hash algorithm. The primary replica for each key is assigned to a node based on its hashed value, and each node is responsible for the region of the ring between itself (inclusive) and its predecessor (exclusive). While using SimpleStrategy, Cassandra will locate the first replica on the owner node (the one determined by the hash algorithm), then walk the ring in a clockwise direction to place each additional replica in the adjacent nodes. In other words, when using manually assigned tokens, the primary replica lives on the owner node, with subsequent replicas placed in adjacent nodes, moving clockwise from the primary.
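If you want to see this placement for yourself on a running cluster, nodetool can report which nodes hold the replicas for a given key. A rough sketch follows; the table name and key here are illustrative, and note that unquoted keyspace names are lowercased by CQL:

    # List the nodes holding replicas for the key 'Alice'
    # in the addressbook keyspace's contacts table
    nodetool getendpoints addressbook contacts Alice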
Although each node owns a set of keys based on its token range(s), there is no concept of a master replica. In Cassandra, unlike many other database designs, every replica is equal. This means reads and writes can be made to any node that holds a replica of the requested key.

If you have a small cluster where all nodes reside in a single rack inside one data center, SimpleStrategy will do the job. This makes it the right choice for local installations, development clusters, and other similar simple environments where expansion is unlikely, because there is no need to configure a snitch (which will be covered later in this section). For production clusters, however, it is highly recommended that you use NetworkTopologyStrategy instead. This strategy provides a number of important features for more complex installations where availability and performance are paramount.

NetworkTopologyStrategy

When it's time to deploy your live cluster, NetworkTopologyStrategy offers two additional properties that make it more suitable for this purpose:

- Rack awareness: Unlike SimpleStrategy, which places replicas naively, this feature attempts to ensure that replicas are placed in different racks, thus preventing service interruption or data loss due to failures of switches, power, cooling, and other similar events that tend to affect single racks of machines.
- Configurable snitches: A snitch helps Cassandra to understand the topology of the cluster. There are a number of snitch options for any type of network configuration.

Here's a basic example of a keyspace using NetworkTopologyStrategy:

    CREATE KEYSPACE AddressBook
    WITH REPLICATION = {
      'class' : 'NetworkTopologyStrategy',
      'dc1' : 3,
      'dc2' : 2
    };

In this example, we're telling Cassandra to place three replicas in a data center called dc1 and two replicas in a second data center called dc2.
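Note that replication settings aren't set in stone: if your topology changes later, the keyspace can be altered. A minimal sketch of such a change, assuming a second data center has just been added, might look like the following (afterwards, run a repair, for example with nodetool repair, so existing data is actually streamed to the new replicas):

    ALTER KEYSPACE AddressBook
    WITH REPLICATION = {
      'class' : 'NetworkTopologyStrategy',
      'dc1' : 3,
      'dc2' : 2
    };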
Summary

In this article, we introduced the foundational concepts of replication and consistency. In our discussion, we outlined the importance of the relationship between the replication factor and the consistency level, and their impact on performance, data consistency, and availability. By now, you should be able to make sound decisions specific to your use cases. This article might serve as a handy reference in the future, as it can be challenging to keep all these details in mind.

Resources for Article:

Further resources on this subject:
- An overview of architecture and modeling in Cassandra [Article]
- Basic Concepts and Architecture of Cassandra [Article]
- About Cassandra [Article]

Analyzing Data

Packt
24 Dec 2014
13 min read
In this article by Amarpreet Singh Bassan and Debarchan Sarkar, authors of Mastering SQL Server 2014 Data Mining, we will begin our discussion with an introduction to the data mining life cycle, focusing on its first three stages. You are expected to have a basic understanding of the Microsoft business intelligence stack and familiarity with terms such as extract, transform, and load (ETL), data warehouse, and so on. (For more resources related to this topic, see here.)

Data mining life cycle

Before going into further details, it is important to understand the various stages of the data mining life cycle, which can be broadly classified into the following steps:

1. Understanding the business requirement.
2. Understanding the data.
3. Preparing the data for the analysis.
4. Preparing the data mining models.
5. Evaluating the results of the analysis prepared with the models.
6. Deploying the models to the SQL Server Analysis Services server.
7. Repeating steps 1 to 6 in case the business requirement changes.

Let's look at each of these stages in detail. The first and foremost task that needs to be well defined, even before beginning the mining process, is to identify the goals. This is a crucial part of the data mining exercise, and you need to understand the following questions:

- What and whom are we targeting?
- What is the outcome we are targeting?
- What is the time frame for which we have the data, and what is the target time period that our data is going to forecast?
- What would the success measures look like?

Let's define a classic problem and understand more about the preceding questions. We can use them to discuss how to extract the information rather than spending our time on defining the schema. Consider an instance where you are a salesman for the AdventureWorks Cycles company, and you need to make predictions that could be used in marketing the products. The problem sounds simple and straightforward, but any serious data miner would immediately come up with many questions. Why? The answer lies in the exactness of the information being searched for. Let's discuss this in detail.

The problem statement comprises the words predictions and marketing. When we talk about predictions, there are several insights that we seek, namely:

- What is it that we are predicting? (for example, customers, product sales, and so on)
- What is the time period of the data that we are selecting for prediction?
- What time period are we going to have the prediction for?
- What is the expected outcome of the prediction exercise?

From the marketing point of view, several follow-up questions that must be answered are as follows:

- What is our target for marketing, a new product or an older product?
- Is our marketing strategy product centric or customer centric?
- Are we going to market our product irrespective of the customer classification, or are we marketing our product according to customer classification?
- On what timeline in the past is our marketing going to be based?

We might observe that many questions overlap the two categories, and there is, therefore, an opportunity to consolidate and classify them as follows:

- What is the population that we are targeting?
- What are the factors that we will actually be looking at?
- What is the time period of the past data that we will be looking at?
- What is the time period in the future that we will be considering the data mining results for?

Let's throw some light on these aspects based on the AdventureWorks example.
We will get answers to the preceding questions and arrive at a more refined problem statement.

What is the population that we are targeting? The target population might be classified according to the following aspects:

- Age
- Salary
- Number of kids

What are the factors that we are actually looking at? They might be classified as follows:

- Geographical location: People living in hilly areas would prefer All Terrain Bikes (ATB), while the population on the plains would prefer daily commute bikes.
- Household: People living in posh areas would look for bikes with the latest gears and accessories that are state of the art, whereas people in suburban areas would mostly look for budget bikes.
- Affinity of components: People who tend to buy bikes would also buy some accessories.

What is the time period of the past data that we will be looking at? Usually, the data that we get is quite huge and often consists of information that we might very adequately label as noise. In order to sieve out effective information, we have to determine exactly how far into the past we should look; for example, we can look at the data for the past year, the past two years, or the past five years.

We also need to decide the future time period that we will consider the data mining results for. We might be looking at predicting our market strategy for an upcoming festive season or throughout the year. We need to be aware that market trends change, and so do people's needs and requirements, so we need to refresh our findings at an optimal interval; for example, the predictions from the past five years' data can be valid for the upcoming two or three years, depending upon the results that we get.

Now that we have taken a closer look at the problem, let's redefine it more accurately. AdventureWorks has several stores in various locations, and based on the location, we would like to get an insight into the following:

- Which products should be stocked where?
- Which products should be stocked together?
- How much of each product should be stocked?
- What is the trend of sales for a new product in an area?

It is not necessary that we will get answers to all of these detailed questions, but even if we keep looking for the answers, there are several insights that we will gain, which will help us make better business decisions.

Staging data

In this phase, we collect data from all the sources and dump it into a common repository, which can be any database system such as SQL Server, Oracle, and so on. Usually, an organization might have various applications to keep track of the data from various departments, and it is quite possible that all these applications use different database systems to store the data. Thus, the staging phase is characterized by dumping the data from all the other data storage systems into a centralized repository.

Extract, transform, and load

This term is most common when we talk about data warehouses. As the name makes clear, ETL has the following three parts:

- Extract: The data is extracted from the different source databases that might contain the information we seek.
- Transform: Some transformation is applied to the data to fit the operational needs, such as cleaning, calculation, removing duplicates, reformatting, and so on.
- Load: The transformed data is loaded into the destination data store database.

We usually believe that ETL is only required until we load the data into the data warehouse, but this is not true. ETL can be used anywhere that we feel the need to do some transformation of data.
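As a concrete (if simplified) illustration, a transform step between staging and the warehouse often boils down to a query like the following sketch, where the table and column names are purely illustrative:

    -- Clean and de-duplicate customer rows on their way
    -- from the staging area into the warehouse
    INSERT INTO dw.DimCustomer (CustomerKey, FullName, CreatedDate)
    SELECT DISTINCT
           s.CustomerID,
           LTRIM(RTRIM(s.FirstName)) + ' ' + LTRIM(RTRIM(s.LastName)),
           CAST(s.CreatedDate AS date)
    FROM   staging.Customer AS s
    WHERE  s.CustomerID IS NOT NULL;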
Data warehouse

The next stage is the data warehouse. The AdventureWorksDW database is the outcome of the ETL applied to the staging database, which is AdventureWorks. We will now discuss the concepts of data warehousing and some best practices, and then relate these concepts to the AdventureWorksDW database.

Measures and dimensions

There are a few common terminologies you will encounter as you enter the world of data warehousing. They are as follows:

- Measure: Any business entity that can be aggregated, or whose values can be ascertained as a numerical value, is termed a measure; for example, sales, number of products, and so on.
- Dimension: Any business entity that lends some meaning to the measures; for example, in an organization, the quantity of goods sold is a measure, but the month is a dimension.

Schema

A schema, basically, determines the relationship of the various entities with each other. There are essentially two types of schema, namely:

- Star schema: This is a relationship where the measures have a direct relationship with the dimensions. Consider an instance wherein a seller has several stores that sell several products; in a star schema, the fact table holding the sales joins directly to the store and product dimension tables.
- Snowflake schema: This is a relationship wherein the measures may have both a direct and an indirect relationship with the dimensions. We would design a snowflake schema if we want a more detailed drill-down of the data; snowflake schemas usually involve hierarchies.
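If it helps to see this as tables, here is a minimal star-schema sketch in T-SQL. The names and columns are illustrative, not the actual AdventureWorksDW definitions:

    -- Two dimensions and one fact table referencing them directly
    CREATE TABLE DimProduct (
        ProductKey  int PRIMARY KEY,
        ProductName varchar(50)
    );

    CREATE TABLE DimDate (
        DateKey  int PRIMARY KEY,
        FullDate date
    );

    CREATE TABLE FactSales (
        ProductKey    int REFERENCES DimProduct (ProductKey),
        DateKey       int REFERENCES DimDate (DateKey),
        OrderQuantity int,
        SalesAmount   money
    );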
Data mart

While a data warehouse is an organization-wide repository of data, extracting data from such a huge repository might well be an uphill task. So, we segregate the data according to the department or the specialty that the data belongs to, so that we have much smaller sections of the data to work with and extract information from. We call these smaller data warehouses data marts.

Let's consider the sales for AdventureWorks Cycles. To make any predictions on the sales of AdventureWorks, we will have to group all the tables associated with sales together in a data mart. Based on the AdventureWorks database, we have the following table in the AdventureWorks sales data mart. The Internet sales fact table has the following columns:

    [ProductKey], [OrderDateKey], [DueDateKey], [ShipDateKey],
    [CustomerKey], [PromotionKey], [CurrencyKey], [SalesTerritoryKey],
    [SalesOrderNumber], [SalesOrderLineNumber], [RevisionNumber],
    [OrderQuantity], [UnitPrice], [ExtendedAmount],
    [UnitPriceDiscountPct], [DiscountAmount], [ProductStandardCost],
    [TotalProductCost], [SalesAmount], [TaxAmt], [Freight],
    [CarrierTrackingNumber], [CustomerPONumber],
    [OrderDate], [DueDate], [ShipDate]

From the preceding columns, we can easily identify that if we need to separate the tables to perform the sales analysis alone, we can safely include the following:

- Product: This provides the [ProductKey] and [ListPrice] data
- Date: This provides the [DateKey] data
- Customer: This provides the [CustomerKey] data
- Currency: This provides the [CurrencyKey] data
- Sales territory: This provides the [SalesTerritoryKey] data

The preceding tables provide the relevant dimensions, and the facts are already contained in the FactInternetSales table; hence, we can easily perform all the analysis pertaining to the sales of the organization.

Refreshing data

Based on the nature of the business and the requirements of the analysis, refreshing of data can be done either in parts, wherein new or incremental data is added to the tables, or entirely, wherein the tables are cleaned and refilled with data consisting of both the old and the new records.

Let's discuss the preceding points in the context of the AdventureWorks database. We will take the employee table to begin with. The following is the list of columns in the employee table:

    [BusinessEntityID], [NationalIDNumber], [LoginID],
    [OrganizationNode], [OrganizationLevel], [JobTitle], [BirthDate],
    [MaritalStatus], [Gender], [HireDate], [SalariedFlag],
    [VacationHours], [SickLeaveHours], [CurrentFlag], [rowguid],
    [ModifiedDate]

Considering an organization in the real world, we do not have a large number of employees leaving and joining the organization, so it will not really make sense to have a procedure in place to reload the dimensions prior to SQL 2008. When it comes to managing changes in the dimension tables, Slowly Changing Dimensions (SCD) are worth a mention. We will briefly look at SCD here. There are three types of SCD, namely:

- Type 1: The older values are overwritten by new values
- Type 2: A new row specifying the present value for the dimension is inserted
- Type 3: A column specifying the TimeStamp from which the new value is effective is updated

Let's take the example of HireDate as a method of keeping track of incremental loading. We will also have to maintain a small table that keeps track of the data that has been loaded from the employee table. So, we create a table as follows:

    CREATE TABLE employee_load_status (
        HireDate   DATETIME,
        LoadStatus VARCHAR(10)
    );

The following script will load the employee table from the AdventureWorks database to the DimEmployee table in the AdventureWorksDW database:

    WITH employee_loaded_date (HireDate) AS (
        SELECT ISNULL(MAX(HireDate), CAST('19000101' AS datetime))
        FROM   employee_load_status
        WHERE  LoadStatus = 'success'
        UNION ALL
        SELECT ISNULL(MIN(HireDate), CAST('19000101' AS datetime))
        FROM   employee_load_status
        WHERE  LoadStatus = 'failed'
    )
    INSERT INTO DimEmployee
    SELECT *
    FROM   employee
    WHERE  HireDate >= (SELECT MIN(HireDate)
                        FROM   employee_loaded_date);

This will reload all the data from the date of the first failure up to the present day.
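One detail the script above takes for granted is that employee_load_status actually gets written to. A sketch of that bookkeeping, which would run at the end of each load, might look like this (error handling is omitted, and the failure path shows only the shape of the insert):

    -- Record the high-water mark after a successful load
    INSERT INTO employee_load_status (HireDate, LoadStatus)
    SELECT MAX(HireDate), 'success' FROM DimEmployee;

    -- On failure, record the date we were attempting to load from,
    -- for example from a variable captured before the load started:
    -- INSERT INTO employee_load_status (HireDate, LoadStatus)
    -- VALUES (@attempted_from, 'failed');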
A similar procedure can be followed to load the fact table, but there is a catch. If we look at the sales table in the AdventureWorks database, we see the following columns:

    [BusinessEntityID], [TerritoryID], [SalesQuota], [Bonus],
    [CommissionPct], [SalesYTD], [SalesLastYear], [rowguid],
    [ModifiedDate]

The SalesYTD column might change with every passing day, so do we perform a full load every day, or do we perform an incremental load based on date? This will depend upon the procedure used to load the data in the sales table and on the ModifiedDate column. Assuming the ModifiedDate column reflects the date on which the load was performed, we also see that there is no table in AdventureWorksDW that will use the SalesYTD field directly; we will have to apply some transformation to get the values of OrderQuantity, DateOfShipment, and so on.

Let's look at this with a simpler example. Consider we have the following sales table:

    Name     SalesAmount   Date
    Rama     1000          11-02-2014
    Shyama   2000          11-02-2014

Also consider that we have a fact table with the columns id, SalesAmount, and Datekey. We will have to decide whether to apply an incremental load or a complete reload of the table, based on our end needs. The entries for the incremental load will look like this:

    id   SalesAmount   Datekey
    Ra   1000          11-02-2014
    Sh   2000          11-02-2014
    Ra   4000          12-02-2014
    Sh   5000          13-02-2014

A complete reload, on the other hand, will appear as shown here:

    id   TotalSalesAmount   Datekey
    Ra   5000               12-02-2014
    Sh   7000               13-02-2014

Notice how the SalesAmount column changes to TotalSalesAmount depending on the load criteria.
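To make the complete reload concrete, the aggregation might be expressed as the following sketch. The table and column names mirror the toy example above rather than any real AdventureWorks object:

    -- Rebuild the fact table as one aggregated row per seller
    TRUNCATE TABLE FactSales;

    INSERT INTO FactSales (id, TotalSalesAmount, Datekey)
    SELECT LEFT(Name, 2), SUM(SalesAmount), MAX([Date])
    FROM   Sales
    GROUP BY Name;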
Summary

In this article, we've covered the first three steps of any data mining process. We've considered the reasons why we would want to undertake a data mining activity and identified the goal we have in mind. We then looked at staging the data and cleansing it.

Resources for Article:

Further resources on this subject:
- Hadoop and SQL [Article]
- SQL Server Analysis Services – Administering and Monitoring Analysis Services [Article]
- SQL Server Integration Services (SSIS) [Article]