How-To Tutorials


Random Value Generators in Elm

Eduard Kyvenko
19 Dec 2016
5 min read
The purely functional nature of Elm has certain implications when it is used for generating random values. On the other hand, it opens up a completely new dimension for producing values of any desired shape, which is extremely useful in some cases. This article covers the core concepts for working with the Random module.

JavaScript offers Math.random as a way of producing random numbers; unlike a traditional pseudorandom number generator, it does not expect a seed. Even though Elm is compiled to JavaScript, it does not rely on the native implementation for random number generation. It gives you more control by offering an API for both producing random values without explicitly specifying the seed and specifying the seed explicitly and preserving its state. Both ways have tradeoffs and should be used in different situations.

Random values without a Seed

Before digging deeper, I recommend that you look into the official Elm Guide (Effects / Random), where you will find the most basic example of Random.generate. It is the easiest way to get your hands on random values, but there are some significant tradeoffs you should be aware of. It relies on Time.now behind the scenes, which means you cannot guarantee efficient randomness if you run this command multiple times consecutively. In other words, there is a risk of getting the same value from running Random.generate multiple times within a short period of time. A good use case for this kind of command is generating the seed for future, more efficient and secure random values. I have written a little seed generator, which can be used for providing a seed for future Random.step calls:

    seedGenerator : Generator Seed
    seedGenerator =
        Random.int Random.minInt Random.maxInt
            |> Random.map Random.initialSeed

The current time serves as a seed for Random.generate, and, as you might know already, retrieving the current time from JavaScript is a side effect. Every value will arrive with a message. I will go ahead and define it; the generator will return a value of the Seed type:

    type Msg
        = Update Seed

    init =
        ( { seed = Nothing }
          -- Initial command to create independent Seed.
        , Random.generate Update seedGenerator
        )

Storing the seed as a Maybe value makes a lot of sense, since it is not present in the model at the very beginning. The initial application state will execute the generator and produce a message with a new seed, which will be accessible inside the update function:

    update msg model =
        case msg of
            Update seed ->
                -- Save newly created Seed into state.
                ( { model | seed = Just seed }, Cmd.none )

This concludes the initial setup for using random value generators with a Seed. As I have mentioned already, Random.generate is not a statistically reliable source of random values, so you should avoid relying on it in situations where you need multiple random values at the same time.

Random values with a Seed

Using Random.step might be a little hard at the start. The type annotation for this function suggests that you will get a tuple with your newly generated value and the next seed state for future steps:

    Generator a -> Seed -> (a, Seed)
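To see the shape of that result, here is a minimal sketch that steps an Int generator once; the literal seed value 42 is only an illustration and is not part of the example application:

    example : ( Int, Seed )
    example =
        -- Returns a tuple: a random Int between 0 and 10, plus the next Seed to use.
        Random.step (Random.int 0 10) (Random.initialSeed 42)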
This example application will put every new random value on a stack and display it in the DOM. I will extend the model with an additional key for saving random integers:

    type alias Model =
        { seed : Maybe Seed
        , stack : List Int
        }

In the new handler for putting random values on the stack, I rely heavily on Maybe.map. It is very convenient when you want to make an impossible state impossible. In this case, I don't want to generate any new values if the seed is missing for some reason:

    update msg model =
        case msg of
            Update seed ->
                -- Preserve newly initialized Seed state.
                ( { model | seed = Just seed }, Cmd.none )

            PutRandomNumber ->
                let
                    {- If the seed was present, the new model will contain
                       the new value and a new state for the seed.
                    -}
                    newModel : Model
                    newModel =
                        model.seed
                            |> Maybe.map (Random.step (Random.int 0 10))
                            |> Maybe.map
                                (\( number, seed ) ->
                                    { model
                                        | seed = Just seed
                                        , stack = number :: model.stack
                                    }
                                )
                            |> Maybe.withDefault model
                in
                    ( newModel
                    , Cmd.none
                    )

In short, the new branch will generate a random integer and a new seed, and update the model with those new values if the seed was present. This concludes the basic example of Random.step usage, but there's a lot more to learn.

Generators

You can get pretty far with Generator and define something more complex than just an integer. Let's define a generator for producing random stats for calculating BMI:

    type alias Model =
        { seed : Maybe Seed
        , stack : List BMI
        }

    type alias BMI =
        { weight : Float
        , height : Float
        , bmi : Float
        }

    valueGenerator : Generator BMI
    valueGenerator =
        Random.map2 (\w h -> BMI w h (w / (h * h)))
            (Random.float 60 150)
            (Random.float 0.6 1.2)

Random.map and its variants (such as Random.map2, used here) allow you to apply a function to the values produced by other generators, which is very convenient for making simple calculations such as BMI. You can raise the bar with Random.andThen and produce generators based on random values; this is super useful for making combinations without repeats.

Check the source of this example application on GitHub: elm-examples/random-initial-seed

Conclusion

Elm offers a powerful abstraction for the declarative definition of random value generators. Building values of any complex shape becomes quite simple by combining the power of Random.map. However, it might be a little overwhelming after JavaScript or any other imperative language. Give it a chance; maybe you will need a reliable generator for custom values in your next project!

About the author

Eduard Kyvenko is a frontend lead at Dinero. He has been working with Elm for over half a year and has built a tax return and financial statements app for Dinero. You can find him on GitHub at @halfzebra.


Storing Records and Interface Customization

Packt
16 Dec 2016
18 min read
In this article by Paul Goody, the author of the book Salesforce CRM - The Definitive Admin Handbook - Fourth Edition, we will describe in detail the Salesforce CRM record storage features and the user interface elements that can be customized, such as objects, fields, and page layouts. In addition, we will see an overview of the relationship that exists between the profile and these customizable features that the profile controls.

This article looks at the methods to configure and tailor the application to suit the way your company's information can best be represented within the Salesforce CRM application. We will look at the mechanisms to store data in Salesforce and the concepts of objects and fields. The features that allow this data to be grouped, arranged, and presented within the application are then considered by looking at apps, tabs, page layouts, and record types. We will then take a look at some of the features that allow views of data to be presented and customized by looking in detail at record types, related lists, and list views. Finally, you will be presented with a number of questions about the key features of Salesforce CRM administration in the area of standard and custom objects, which are covered in this article.

We will cover the following topics in this article:

- Objects
- Fields
- Object relationships
- Apps
- Tabs
- Renaming labels for standard tabs, standard objects, and standard fields
- Creating custom objects
- Object limits
- Creating custom object relationships
- Creating custom fields
- Dependent picklists
- Building relationship fields
- Lookup relationship options
- Master-detail relationship options
- Lookup filters
- Building formulas
- Basic formula
- Advanced formula
- Building formulas--best practices
- Building formula text and compiled character size limits
- Custom field governance
- Page layouts
- Feed-based page layouts
- Record types
- Related lists

The relationship between a profile and the features that it controls

The following diagram describes the relationship that exists between a profile and the features that it controls. The profile is used to:

- Control access to the type of license specified for the user and any login hours or IP address restrictions that are set.
- Control access to objects and records using the role and sharing model. If the appropriate object-level permission is not set on the user's profile, then the user will be unable to gain access to the records of that object type in the application.

In this article, we will look at the configurable elements that are set in conjunction with a profile. These are used to control the structure and the user interface for the Salesforce CRM application.

Objects

Objects are a key element in Salesforce CRM, as they provide a structure to store data and are incorporated in the interface, allowing users to interact with the data. Similar in nature to a database table, objects have the following properties:

- Fields, which are similar in concept to a database column
- Records, which are similar in concept to a database row
- Relationships with other objects
- Optional tabs, which are user-interface components to display the object data

Standard objects

Salesforce provides standard objects in the application when you sign up; these include Account, Contact, Opportunity, and so on. These are the tables that contain the data records in any standard tab, such as Accounts, Contacts, and Opportunities. In addition to the standard objects, you can create custom objects and tabs.

Custom objects

Custom objects are the tables you create to store your data. You can create a custom object to store data specific to your organization. Once you have the custom objects, and have created records for these objects, you can also create reports and dashboards based on the record data in your custom objects.

Fields

Fields in Salesforce are similar in concept to a database column: they store the data for the object records. An object record is analogous to a row in a database table.

Standard fields

Standard fields are predefined fields that are included as standard within the Salesforce CRM application. Standard fields cannot be deleted, but non-required standard fields can be removed from page layouts whenever necessary. With standard fields, you can customize visual elements that are associated with the field, such as field labels and field-level help, as well as certain data definitions, such as picklist values, the formatting of auto-number fields (which are used as unique identifiers for the records), and the setting of field history tracking. Some aspects, however, such as the field name, cannot be customized, and some standard fields, such as Opportunity Probability, do not allow the changing of the field label.

Custom fields

Custom fields are unique to your business needs and can not only be added and amended, but also deleted. Creating custom fields allows you to store the information that is necessary for your organization. Both standard and custom fields can be customized to include custom help text to help users understand how to use the field, as shown in the following screenshot.

Object relationships

Object relationships can be set on both standard and custom objects and are used to define how records in one object relate to records in another object. Accounts, for example, can have a one-to-many relationship with opportunities; these relationships are presented in the application as related lists.

Apps

An app in Salesforce is a container for all the objects, tabs, processes, and services associated with a business function. There are standard and custom apps that are accessed using the App menu located at the top-right corner of the Salesforce page, as shown in the following screenshot. When users select an app from the App menu, their screen changes to present the objects associated with that app. For example, when switching from an app that contains the Campaign tab to one that does not, the Campaign tab no longer appears. This applies to both standard and custom apps.

Standard apps

Salesforce provides standard apps such as Call Center, Community, Content, Marketing, Sales, Salesforce Chatter, and Site.com.

Custom apps

A custom app can optionally include a custom logo. Both standard and custom apps consist of a name, a description, and an ordered list of tabs.

Subtab apps

A subtab app is used to specify the tabs that appear on the Chatter profile page. Subtab apps can include both default and custom tabs that you can set.

Tabs

A tab is a user-interface element that, when clicked, displays the record data on a page specific to that object.

Hiding and showing tabs

To customize your personal tab settings, navigate to Setup | My Personal Settings | Change My Display | Customize My Tabs. Now, choose the tabs that will display in each of your apps by moving the tab name between the Available Tabs and the Selected Tabs sections and click on Save. The following screenshot shows the selection of tabs for the Sales app.

To customize the tab settings of your users, navigate to Setup | Manage Users | Profiles. Now, select a profile and click on Edit. Scroll down to the Tab Settings section of the page, as shown in the following screenshot.

Standard tabs

Salesforce provides tabs for each of the standard objects that are provided in the application when you sign up. For example, there are standard tabs for Accounts, Contacts, Opportunities, and so on. Visibility of each tab depends on the Tab Display setting for the app.

Custom tabs

You can create three different types of custom tabs: Custom Object Tabs, Web Tabs, and Visualforce Tabs. Custom Object Tabs allow you to create, read, update, and delete the data records in your custom objects. Web Tabs display any web URL in a tab within your Salesforce application. Visualforce Tabs display custom user-interface pages created using Visualforce.

Creating custom tabs

The text displayed on a custom tab is set using the Plural Label of the custom object, which is entered when creating the custom object. If the tab text needs to be changed, this can be done by changing the Plural Label stored in the custom object. Salesforce.com recommends selecting the Append tab to a user's existing personal customization checkbox. This benefits your users, as they will automatically be presented with the new tab and can immediately access the corresponding functionality without having to first customize their personal settings themselves. It is also recommended that you hide new tabs by setting appropriate permissions, so that the users in your organization cannot see any of your changes until you are ready to make them available. You can create up to 25 custom tabs in the Enterprise Edition, and as many as you require in the Unlimited and Performance Editions. To create custom tabs for a custom object, navigate to Setup | Create | Tabs. Now, select the appropriate tab type and/or object from the available selections, as shown in the following screenshot.

Renaming labels for standard tabs, standard objects, and standard fields

Labels generally reflect the text that is displayed and presented to your users in the user interface and in reports within the Salesforce application. You can change the display labels of standard tabs, objects, fields, and other related user interface labels so that they better reflect your company's terminology and business requirements. For example, the Accounts tab and object can be changed to Clients; similarly, Opportunities to Deals, and Leads to Prospects. Once changed, the new label is displayed on all user pages. The Setup pages and Setup menu sections cannot be modified and do not include any renamed labels; there, the standard tab, object, and field references continue to use the default, original labels. Also, the standard report names and views continue to use the default labels and are not renamed.

To change standard tab, object, and field labels, navigate to Setup | Customize | Tab Names and Labels | Rename Tabs and Labels. Now, select a language, and then click on Edit to modify the tab names and standard field labels, as shown in the following screenshot. Click on Edit to select the tab that you wish to rename. Although the screen indicates that this is a change to the tab's name, this selection will also allow you to change the labels for the object and fields, in addition to the tab name. To change field labels, click through to step 2.
Enter the new field labels. Here, we will rename the Accounts tab to Clients. Enter the Singular and Plural names and then click on Next, as shown in the following screenshot.

Only the following standard tabs and objects can be renamed: Accounts, Activities, Articles, Assets, Campaigns, Cases, Contacts, Contracts, Documents, Events, Ideas, Leads, Libraries, Opportunities, Opportunity Products, Partners, Price Books, Products, Quote Line Items, Quotes, Solutions, and Tasks. Tabs such as Home, Chatter, Forecasts, Reports, and Dashboards cannot be renamed. The following screenshot shows the standard fields available.

Salesforce looks for occurrences of the Account label and displays an auto-populated screen showing where the Account text will be replaced with Client. This auto-population of text is carried out for the standard tab, the standard object, and the standard fields. Review the replaced text, amend as necessary, and then click on Save, as shown in the following screenshot. After renaming, the new labels are automatically displayed on the tab, in reports, in dashboards, and so on. Some standard fields, such as Created By and Last Modified, are prevented from being renamed because they are audit fields that are used to track system information.

You will, however, need to carry out the following additional steps to ensure consistent renaming throughout the system, as these may need manual updates:

- Check all list view names, as they do not automatically update and will continue to show the original object name until you change them manually.
- Review standard report names and descriptions for any object that you have renamed.
- Check the titles and descriptions of any e-mail templates that contain the original object or field name, and update them as necessary.
- Review any other items that you have customized with the standard object or field name. For example, custom fields, page layouts, and record types may include the original tab or field name text that is no longer relevant.

If you have renamed tabs, objects, or fields, you can also replace the Salesforce online help with a different URL. Your users can view this replaced URL whenever they click on any context-sensitive help link on an end-user page or from within their personal setup options.

Creating custom objects

Custom objects are database tables that allow you to store data specific to your organization on salesforce.com. You can use custom objects to extend Salesforce functionality or to build new application functionality. You can create up to 200 custom objects in the Enterprise Edition and 2000 in the Unlimited Edition. Once you have created a custom object, you can create a custom tab, custom related lists, reports, and dashboards for users to interact with the custom object data. To create a custom object, navigate to Setup | Create | Objects. Now click on New Custom Object, or click on Edit to modify an existing custom object. The following screenshot shows the resulting screen.

On the Custom Object Definition Edit page, you can enter the following:

- Label: This is the visible name that is displayed for the object within the Salesforce CRM user interface and shown on pages, views, and reports, for example.
- Plural Label: This is the plural name specified for the object, which is used within the application in places such as reports and on tabs (if you create a tab for the object).
- Gender (language dependent): This field appears if your organization-wide default language expects gender. This is used for organizations where the default language settings are, for example, Spanish, French, Italian, or German, among many others. Your personal language preference setting does not affect whether the field appears. For example, if your organization's default language is English but your personal language is French, you will not be prompted for gender when creating a custom object.
- Starts with a vowel sound: Use of this setting depends on your organization's default language and is a linguistic check that allows you to specify whether your label should be preceded by "an" instead of "a"; for example, resulting in references to the object as "an Order" instead of "a Order".
- Object Name: This is a unique name used to refer to the object. The Object Name field must be unique and can only contain underscores and alphanumeric characters. It must also begin with a letter, not contain spaces or two consecutive underscores, and not end with an underscore.
- Description: This is an optional description of the object. A meaningful description will help you explain the purpose of your custom objects when you are viewing them in a list.
- Context-Sensitive Help Setting: This defines what information is displayed when your users click on the Help for this Page context-sensitive help link from the custom object record home (overview), edit, and detail pages, as well as list views and related lists. The Help & Training link at the top of any page is not affected by this setting; it always opens the Salesforce Help & Training window.
- Record Name: This is the name that is used in areas such as page layouts, search results, key lists, and related lists, as shown next.
- Data Type: This sets the type of field for the record name. The data type can be either text or auto-number. If the data type is set to Text, then when a record is created, users must enter a text value, which does not need to be unique. If the data type is set to Auto Number, it becomes a read-only field, whereby new records are automatically assigned a unique number, as shown in the following screenshot.
- Display Format: This option, as shown in the preceding example, only appears when the Data Type field is set to Auto Number. It allows you to specify the structure and appearance of the Auto Number field. For example, {YYYY}{MM}-{000} is a display format that produces a four-digit year and a two-digit month prefix to a number with leading zeros padded to three digits. Example data output would include 201203-001, 201203-066, 201203-999, and 201203-1234. It is worth noting that although you can specify the number to be three digits, if the number of records created goes over 999, the record will still be saved, and the automatically incremented number becomes 1000, 1001, and so on.
- Starting Number: As described, Auto Number fields in Salesforce CRM are automatically incremented for each new record. Here, you must enter the starting number for the incremental count, which does not have to start from one.
- Allow Reports: This setting is required if you want to include the record data from the custom object in any report or dashboard analytics. When a custom object has a relationship field associating it with a standard object, a new report type may appear in the standard report category. Such relationships can be either lookup or master-detail relationships. Lookup relationships create a relationship between two records so that you can associate them with each other, while a master-detail relationship is a relationship between records where the master record controls certain behaviors of the detail record, such as record deletion and security (relationship fields are described in more detail later in this section). When the custom object has a master-detail relationship with a standard object, or is a lookup object on a standard object, the new report type appears in the standard report category and allows the user to create reports that relate the standard object to the custom object, which is done by selecting the standard object for the report type category instead of the custom object.
- Allow Activities: This allows users to include tasks and events related to the custom object records, which appear as a related list on the custom object page.
- Track Field History: This enables the tracking of data-field changes on the custom object records, such as who changed the value of a field and when it was changed. Field history tracking also stores the value of the field before and after the edit. This feature is useful for auditing and data-quality measurement, and is also available within the reporting tools. The field history data is retained for up to 18 months, and you can set field history tracking for a maximum of 20 fields for the Enterprise, Unlimited, and Performance Editions.
- Allow in Chatter Groups: This setting allows your users to add records of this custom object type to Chatter groups. When enabled, records of this object type that are created using the group publisher are associated with the group and also appear in the group record list. When disabled, records of this object type that are created using the group publisher are not associated with the group.
- Deployment Status: This indicates whether the custom object is now visible and available for use by other users. This is useful, as you can easily set the status to In Development until you are happy for users to start working with the new object.
- Add Notes & Attachments: This setting allows your users to record notes and attach files to the custom object records. When this is specified, a related list with the New Note and Attach File buttons automatically appears on the custom object record page, where your users can enter notes and attach documents. The Add Notes & Attachments option is only available when you create a new object.
- Launch the New Custom Tab Wizard: This starts the custom tab wizard after you save the custom object. The New Custom Tab Wizard option is only available when you create a new object. If you do not select Launch the New Custom Tab Wizard, you will not be able to create a tab in this step; however, you can create the tab later, as described in the Custom tabs section covered earlier in this article. When creating a custom object, a custom tab is not automatically created.

Summary

This article has described in detail the Salesforce CRM record storage features and user interface that can be customized, the mechanism used to store data in Salesforce CRM, and the relationship that exists between the profile and the customizable features that the profile controls.


Creating a Simple Level Select Screen

Gareth Fouche
16 Dec 2016
6 min read
For many types of games, whether a multiplayer FPS or a 2D platformer, it is desirable to present the player with a list of levels they can play. This tutorial will guide you through creating a simple level select screen in Unity.

For the first step, we need to create some simple test levels for the player to select from. From the menu, select File | New Scene to create a new scene with a Main Camera and a Directional Light. Then, from the Hierarchy view, select Create | 3D Object | Plane to place a ground plane. Select Create | 3D Object | Cube to place a cube in the scene. Copy and paste that cube a few times, arranging the cubes on the plane and positioning the camera until you have a basic level layout. Save this scene as "CubeWorld", our first level. Create another scene and repeat the above process, but instead of cubes, place spheres. Save this scene as "SphereWorld", our second game level.

We will need preview images of each level for our level select screen. Take a screenshot of each scene, open up any image editor, paste your image, and resize/crop that image until it is 400 x 180 pixels. Do this for both levels, save them as "CubeWorld.jpg" and "SphereWorld.jpg", and then pull those images into your project. In the import settings, make sure to set the Texture Type for the images to Sprite (2D and UI).

Now that we have the game levels, it's time to create the Level Select scene. As before, create a new empty scene and name it "LevelSelectMenu". This time, select Create | UI | Canvas. This will create the canvas object that is the root of our GUI. In an image editor, create a small 10 x 10 pixel image, fill it with black, and save it as "Background.jpg". Drag it into the project, setting its image settings, as before, to Sprite (2D and UI). Now, from the Create | UI menu, create an Image. Drag "Background.jpg" from the Project pane into the Image component's Source Image field. Set the Width and the Height to 2000 pixels; this should be enough to cover the entire canvas.

From the same UI menu, create a Text component. In the Inspector, set the Width and the Height of that Text to 300 x 80 pixels. Under the Text property, enter "Select Level", and then set the Font Size to 50 and the Color to white. Using the transform controls, drag the text to the upper middle area of the screen. If you can't see the Text, make sure it is positioned below the Image under Canvas in the Hierarchy view. Order matters; the topmost child of Canvas will be rendered first, then the second, and so on. So, make sure your background image isn't being drawn over your Text.

Now, from the UI menu, create a Button. Make this button 400 pixels wide and 180 pixels high. Drag "CubeWorld.jpg" into the Image component's Source Image field from the Project pane. This will make it the button image. Edit the button Text to say "Cube World" and set the Font Size to 30. Change the font Color to white. Now, in the Inspector view, reposition the text to the bottom-left corner of the button using the transform controls. Update the Button's Color values as in the image below. These values will tint the button image in certain states: Normal is the default, Highlighted is for when the mouse is over the button, Pressed is for when the button is pressed, and Disabled is for when the button is not interactable (the Interactable checkbox is unticked).

Now duplicate the first button, but this time use the "SphereWorld.jpg" image as the Source Image and set the text to "Sphere World". Using the transform controls, position the two buttons next to each other under the "Select Level" text on the canvas.

If we run the app now, we'll see this screen and be able to click on each level button, but nothing will happen. To actually load a level, we first need to create a new script. Right-click in the Project view and select Create | C# Script. Name this script "LevelSelect". Create a new GameObject in the scene, rename it "LevelSelectManager", and drag the LevelSelect script onto that GameObject in the Hierarchy. Now, open up the script in an IDE and change the code to be as follows:
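A minimal sketch of the LevelSelect script, reconstructed from the description that follows (it assumes Unity 5.3 or later, where the scene-loading API lives in UnityEngine.SceneManagement):

    using UnityEngine;
    using UnityEngine.SceneManagement;

    public class LevelSelect : MonoBehaviour
    {
        // Called from a button's On Click () event; loads the scene with the given name.
        public void LoadLevel(string levelName)
        {
            SceneManager.LoadScene(levelName);
        }
    }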
What this script does is define a class, LevelSelect, which exposes a single function, LoadLevel(). LoadLevel() takes a string (the level name) and tells Unity to load that level (a Unity scene) by calling SceneManager.LoadScene(). However, we still need to actually call this function when the buttons are pressed.

Back in Unity, go to the CubeWorld button in the Hierarchy. Under the Button Script in the Inspector, there is an entry for "On Click ()" with a plus sign under it. Click the plus sign to add the event that will be called when the button is clicked. Once the event is added, we need to fill out the details that tell it which function to call on which scene GameObject. Find where it says "None (Object)" under "On Click ()". Drag the "LevelSelectManager" GameObject from the Hierarchy view into that field. Then click the "No Function" dropdown, which will display a list of component classes matching the components on our "LevelSelectManager" GameObject. Choose "LevelSelect" (because that's the script class our function is defined in) and then "LoadLevel (string)" to choose the function we wrote in C# previously. Now we just have to pass the level name string we want to load to that function. To do that, write "CubeWorld" (the name of the scene/level we want to load) in the empty text field. Once you're done, the "On Click ()" event should look like this:

Now, repeat the process for the SphereWorld button as above, but instead of entering "CubeWorld" as the string to pass to the LoadLevel function, enter "SphereWorld".

Almost done! Finally, save the "LevelSelectMenu" scene, and then click File | Build Settings. Make sure that all three scenes are loaded into the "Scenes In Build" list. If they aren't, drag the scenes into the list from the Project pane. Make sure that the "LevelSelectMenu" scene is first, so that when the app is run, it is the scene that will be loaded up first.

It's time to build and run your program! You should be greeted by the Level Select Menu, and, depending on which level you select, it'll load the appropriate game level, either CubeWorld or SphereWorld. Now you can customize it further, adding more levels, making the level select screen look nicer with better graphical assets and effects, and, of course, adding actual gameplay to your levels. Have fun!

About the author

Gareth Fouche is a game developer. He can be found on GitHub @GarethNN.


R and its Diverse Possibilities

Packt
16 Dec 2016
11 min read
In this article by Jen Stirrup, the author of the book Advanced Analytics with R and Tableau, we will cover, with examples, the core essentials of R programming, such as variables, and data structures in R, such as matrices, factors, vectors, and data frames. We will also focus on control mechanisms in R (relational operators, logical operators, conditional statements, loops, functions, and apply) and how to execute these commands in R, to get to grips with them before proceeding to articles that rely heavily on these concepts for scripting complex analytical operations.

Core essentials of R programming

One of the reasons for R's success is its use of variables. Variables are used in all aspects of R programming. For example, variables can hold data, strings to access a database, whole models, queries, and test results. Variables are a key part of the modeling process, and their selection has a fundamental impact on the usefulness of the models. Therefore, variables are an important place to start, since they are at the heart of R programming.

Variables

In the following section, we will deal with variables: how to create them and how to work with them.

Creating variables

It is very simple to create variables in R and to save values in them. To create a variable, you simply need to give the variable a name and assign a value to it. In many other languages, such as SQL, it's necessary to specify the type of value that the variable will hold. So, for example, if the variable is designed to hold an integer or a string, then this is specified at the point at which the variable is created. Unlike other programming languages, such as SQL, R does not require that you specify the type of the variable before it is created. Instead, R works out the type for itself by looking at the data that is assigned to the variable. In R, we assign variables using an assignment operator, which is a less-than sign (<) followed by a hyphen (-). Put together, the assignment operator looks like this: <-

Working with variables

It is important to understand what is contained in the variables. It is easy to check the content of the variables using the ls command. If you need more details of the variables, then the ls.str command will provide you with more information. If you need to remove variables, then you can use the rm function.

Data structures in R

The power of R resides in its ability to analyze data, and this ability is largely derived from its powerful data types. Fundamentally, R is a vectorized programming language: data structures in R are constructed from vectors, which are foundational. This means that R's operations are optimized to work with vectors.

Vector

The vector is a core component of R and a fundamental data type. Essentially, a vector is a data structure that contains an array where all of the values are of the same type. For example, they could all be strings, or numbers. However, note that vectors cannot contain mixed data types. R uses the c() function to take a list of items and turn them into a vector.

Lists

R contains two types of lists: a basic list and a named list. A basic list is created using the list() operator. In a named list, every item in the list has a name as well as a value. Named lists are a good mapping structure to help map data between R and Tableau. In R, list elements are accessed using the $ operator. Note, however, that the list labels are case sensitive.
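As a quick sketch of these two structures (the values here are purely illustrative and not taken from the book's examples):

    # A numeric vector built with c(); every element has the same type.
    scores <- c(71, 72, 76)

    # A named list; elements are accessed with $ and the labels are case sensitive.
    person <- list(name = "Ada", age = 36)
    person$name   # returns "Ada"
    person$Name   # returns NULL, because "Name" does not match "name"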
Matrices

Matrices are two-dimensional structures that have rows and columns. A matrix is a list of rows, and it's important to note that every cell in a matrix has the same type.

Factors

A factor is a list of all possible values of a variable in a string format. It is a special string type, which is chosen from a specified set of values known as levels. Factors are sometimes known as categorical variables. In dimensional modeling terminology, a factor is equivalent to a dimension, and the levels represent different attributes of the dimension. Note that factors are variables that can only contain a limited number of different values.

Data frames

The data frame is the main data structure in R. It's possible to envisage the data frame as a table of data, with rows and columns. Unlike the list structure, the data frame can contain different types of data. In R, we use the data.frame() command in order to create a data frame. The data frame is extremely flexible for working with structured data, and it can ingest data from many different data types. There are two main ways to ingest data into data frames: through the many data connectors that connect to data sources such as databases, and through the read.table() command, which reads data in.

Data frame structure

A data frame has a header at the top. Each horizontal line afterwards holds a data row, which starts with the name of the row and is then followed by the data itself. Each data member of a row is called a cell. Here is an example data frame, populated with data:

    df = data.frame(
        Year = c(2013, 2013, 2013),
        Country = c("Arab World", "Caribbean States", "Central Europe"),
        LifeExpectancy = c(71, 72, 76))

As always, we should read out at least some of the data frame so we can double-check that it was set correctly. The data frame was assigned to the df variable, so we can read out the contents by simply typing the variable name at the command prompt.

To obtain the data held in a cell, we enter the row and column coordinates of the cell and surround them with square brackets []. In this example, if we wanted to obtain the value of the second cell in the second row, then we would use the following:

    df[2, "Country"]

We can also conduct summary statistics on our data frame. For example, if we use the following command:

    summary(df)

then we obtain the summary statistics of the data. You'll notice that the summary command has summarized different values for each of the columns. It has identified Year as an integer and produced the min, quartiles, mean, and max for it. The Country column has simply been listed, because it does not contain any numeric values. Life expectancy is summarized correctly. We can change the Year column to a factor using the following command:

    df$Year <- as.factor(df$Year)

Then, we can rerun the summary command again:

    summary(df)

On this occasion, the data frame now returns the results that we expect. As we proceed throughout this book, we will be building on more useful features that will help us to analyze data using data structures and visualize the data in interesting ways using R.

Control structures in R

R has the appearance of a procedural programming language. However, it is built on another language, known as S. S leans towards functional programming, and it also has some object-oriented characteristics. This means that there are many complexities in the way that R works.
In this section, we will look at some of the fundamental building blocks that make up key control structures in R, and then we will move on to looping and vectorized operations.

Logical operators

Logical operators are binary operators that allow the comparison of values:

    Operator     Description
    <            less than
    <=           less than or equal to
    >            greater than
    >=           greater than or equal to
    ==           exactly equal to
    !=           not equal to
    !x           not x
    x | y        x OR y
    x & y        x AND y
    isTRUE(x)    test if x is TRUE

For loops and vectorization in R

Specifically, we will look at the constructs involved in loops. Note, however, that it is more efficient to use vectorized operations rather than loops, because R is vector-based. We investigate loops here because they are a good first step in understanding how R works, and then we can optimize this understanding by focusing on vectorized alternatives that are more efficient. More information about control flow can be obtained by executing the following command at the command line:

    ?Control

The control flow commands make decisions between alternative actions. The main constructs are for, while, and repeat.

For loops

Let's look at a for loop in more detail. For this exercise, we will use the Fisher iris dataset, which is installed along with R by default. We are going to produce summary statistics for each species of iris in the dataset. You can see some of the iris data by typing the following command at the command prompt:

    head(iris)

We can divide the iris dataset so that the data is split by species. To do this, we use the split command and assign the result to a variable called IrisBySpecies:

    IrisBySpecies <- split(iris, iris$Species)

Now, we can use a for loop to process the data and summarize it by species. First, we set up a variable called output and set it to a list type. For each species held in the IrisBySpecies variable, we calculate the minimum, maximum, mean, and number of cases. The results are then combined into a data frame called output.df, which is printed out to the screen:

    output <- list()
    for(n in names(IrisBySpecies)){
      ListData <- IrisBySpecies[[n]]
      output[[n]] <- data.frame(species = n,
                                MinPetalLength = min(ListData$Petal.Length),
                                MaxPetalLength = max(ListData$Petal.Length),
                                MeanPetalLength = mean(ListData$Petal.Length),
                                NumberofSamples = nrow(ListData))
      output.df <- do.call(rbind, output)
    }
    print(output.df)

We used a for loop here, but for loops can be expensive in terms of processing. We can achieve the same end by using a vectorized function called tapply, which processes data in groups. tapply has three parameters: the vector of data, the factor that defines the groups, and a function. It works by extracting each group and then applying the function to it, returning a vector with the results. We can see an example of tapply here, using the same dataset:

    output <- data.frame(MinPetalLength = tapply(iris$Petal.Length, iris$Species, min),
                         MaxPetalLength = tapply(iris$Petal.Length, iris$Species, max),
                         MeanPetalLength = tapply(iris$Petal.Length, iris$Species, mean),
                         NumberofSamples = tapply(iris$Petal.Length, iris$Species, length))
    print(output)

This time, we get the same output as previously. The only difference is that by using a vectorized function, we have concise code that runs efficiently. To summarize, R is extremely flexible, and it's possible to achieve the same objective in a number of different ways.
As we move forward through this book, we will make recommendations about the optimal method to select, and the reasons for the recommendation.

Functions

R has many functions that are included as part of the installation. In the first instance, let's look at how we can work smart by finding out what functions are available by default. In our last example, we used the split() function. To find out more about the split function, we can simply use the following command:

    ?split

Or we can use:

    help(split)

It's possible to get an overview of the arguments required for a function. To do this, simply use the args command:

    args(split)

Fortunately, it's also possible to see examples of each function by using the following command:

    example(split)

If you need more information than the documented help file for each function, you can use the following command, which will search through all the documentation for instances of the keyword:

    help.search("split")

If you want to search the R project site from within RStudio, you can use the RSiteSearch command. For example:

    RSiteSearch("split")

Summary

In this article, we have looked at various essential structures for working with R. We have looked at the data structures that are fundamental to using R optimally, and we have taken the view that structures such as for loops can often be done better as vectorized operations. Finally, we have looked at the ways in which R can be used to create functions in order to simplify code.


Gathering and analyzing stock market data with R, Part 2

Erik Kappelman
15 Dec 2016
8 min read
Welcome to the second installment of this series. The previous post covered collecting real-time stock market data using R. This second part looks at a few ways to analyze historical stock market data using R. If you are just interested in learning how to analyze historical data, the first blog isn't necessary. The code accompanying these blogs is located here.

To begin, we must first get some data. The lines of code below load the 'quantmod' library, a very useful R library when it comes to financial analysis, and then use quantmod to gather data on the list of stock symbols:

    library(quantmod)
    syms <- read.table("NYSE.txt", header = TRUE, sep = "\t")
    smb <- grep("[A-Z]{4}", syms$Symbol, perl = F, value = T)
    getSymbols(smb)

I find the getSymbols() function somewhat problematic for gathering data on multiple companies, because the function creates a separate dataframe for each company in the package's 'xts' format. I think this would be more helpful if you were planning to use the quantmod tools for analysis. I enjoy using other types of tools, so the data needs to be changed somewhat before I can analyze it:

    mat <- c()
    stocks <- c()
    stockList <- list()
    names <- c()
    for(i in 1:length(smb)){
      temp <- get(smb[i])
      names <- c(names, smb[i])
      stockList[[i]] <- as.numeric(getPrice(temp))
      len <- length(attributes(temp)$index)
      if(len < 1001) next
      stocks <- c(stocks, smb[i])
      temp2 <- temp[(len - 1000):len]
      vex <- as.numeric(getPrice(temp2))
      mat <- rbind(mat, vex)
    }

The code above loops through the dataframes that were created by the getSymbols() function. Using the get() function from the 'base' package, each symbol string is used to grab the symbol's dataframe. The loop then does one or two more things to each stock's dataframe. For all of the stocks, it records the stock's symbol in a vector and adds a vector of prices to the growing list of stock data. If the stock data goes back at least one thousand trading days, then the last one thousand days of trading are added to a matrix. The reason for this distinction is that we will be looking at two methods of analysis: one requires all of the series to be the same length, and the other is length-agnostic. Series that are too short will not be analyzed using the first method. Check out the following script:

    names(stockList) <- names
    stock.mat <- as.matrix(mat)
    row.names(stock.mat) <- stocks
    colnames(stock.mat) <- as.character(index(temp2))
    save(stock.mat, stockList, file = "StockData.rda")
    rm(list = ls())

The above script names the data properly and saves it to an R data file. The final line of code cleans the workspace, because the getSymbols() function leaves quite a mess. The data is now in the correct format for us to begin our analysis.

It is worth pointing out that what I am about to show won't get you an A in most statistics or economics classes. I say this because I am going to take a very practical approach with little regard to the proper assumptions. Although these assumptions are important, when the need is an accurate forecast, it is easier to get away with models that are not entirely theoretically sound. This is because we are not trying to make arguments of causality or association; we are trying to guess the direction of the market.

In this first example of analysis, I put forth a clustering-based Vector Autoregression (VAR) method of my own design.
In order to do this, we must load the correct packages and the data we just created:

    library(mclust)
    library(vars)
    load("StockData.rda")

The first thing to do is identify the clusters that exist within the stock market data:

    cl <- Mclust(stock.mat, G = 1:9)
    stock.mat <- cbind(stock.mat, cl$classification)

In this case, we use a model-based clustering method. This method assumes that the data is the result of picks from a set of random variable distributions. This allows the clusters to be based on the covariance of companies' stock prices instead of just grouping together companies with similar nominal prices. The Mclust() function fits a model to the data that minimizes the Bayesian Information Criterion (BIC). You will likely have to restrict the number of clusters, as one complaint about model-based clustering is a 'more clusters are always better' result.

The data is separated into clusters to make using a VAR technique more computationally realistic. One of the nice things about VAR is how few assumptions must be met in order to include the time series in an analysis. Also, VAR regresses several time series against one another and themselves at the same time, which may capture more of the covariance needed to produce reliable forecasts. We are looking at over 1,000 time series, and this is too many to use VAR effectively, so the clustering is used to group the time series together to produce smaller VARs:

    cluster <- stock.mat[stock.mat[, 1002] == 6, 1:1001]
    ts <- ts(t(cluster))
    fit <- VAR(ts[1:(1001 - 10), ], p = 10)
    preds <- predict(fit, n.ahead = 10)
    forecast <- preds$fcst$TEVA
    plot.ts(ts[950:1001, 8], ylim = c(36, 54))
    lines(y = forecast[, 1], x = (50 - 9):50, col = "blue")
    lines(y = forecast[, 2], x = (50 - 9):50, col = "red", lty = 2)
    lines(y = forecast[, 3], x = (50 - 9):50, col = "red", lty = 2)

The code above takes the time series that belong to the '6' cluster and runs a VAR that looks back ten steps. We cut off the last ten days of data and use the VAR to predict these last ten days. The script then plots the predicted ten days against the actual ten days, which allows us to see whether the predictions are functioning properly. The resulting plot shows that the predictions are not perfect but will probably work well enough:

    for(i in 1:8){
      assign(paste0("cluster.", i), stock.mat[stock.mat[, 1002] == i, 1:1001])
      assign(paste0("ts.", i), ts(t(get(paste0("cluster.", i)))))
      temp <- get(paste0("ts.", i))
      assign(paste0("fit.", i), VAR(temp, p = 10))
      assign(paste0("preds.", i), predict(get(paste0("fit.", i)), n.ahead = 10))
    }

    stock.mat <- cbind(stock.mat, 0)
    for(j in 1:8){
      pred.vec <- c()
      temp <- get(paste0("preds.", j))
      # Loop over each stock's forecast matrix and keep the step-10 point forecast.
      for(i in 1:length(temp$fcst)){
        cast <- temp$fcst[i]
        cast <- cast[[1]]
        cast <- cast[10, ]
        pred.vec <- c(pred.vec, cast[1])
      }
      stock.mat[stock.mat[, 1002] == j, 1003] <- pred.vec
    }

The loops above perform a VAR on each of the 8 clusters with more than one member. After these VARs are performed, a ten-day forecast is carried out. The value of each stock at the end of the ten-day forecast is then appended onto the end of the stock data matrix:

    stock.mat <- stock.mat[stock.mat[, 1002] != 9, ]
    # Append the forecasted percentage change as a new column.
    stock.mat <- cbind(stock.mat,
                       (stock.mat[, 1003] - stock.mat[, 1001]) / stock.mat[, 1001] * 100)
    stock.mat <- stock.mat[order(-stock.mat[, 1004]), ]
    stock.mat[1:10, 1004]
    rm(list = ls())

The final lines of code calculate the forecasted percentage change in each stock after 10 days and then display the top 10 stocks in terms of forecasted percentage change. The workspace is then cleared.
The final bit of code is simpler. Using the 'forecast' package's auto.arima() function, we fit an ARIMA model to each stock in our stockList:

    load("StockData.rda")
    library(forecast)
    forecasts <- c()
    names <- c()
    for(i in 1:length(stockList)){
      mod <- auto.arima(stockList[[i]])
      cast <- forecast(mod)
      cast <- cast$mean[10]
      temp <- c(as.numeric(stockList[[i]][length(stockList[[i]])]), as.numeric(cast))
      forecasts <- rbind(forecasts, temp)
      names <- c(names, names(stockList[i]))
    }
    forecasts <- matrix(forecasts, ncol = 2)
    forecasts <- cbind(forecasts, (forecasts[, 2] - forecasts[, 1]) / forecasts[, 1] * 100)
    colnames(forecasts) <- c("Price", "Forecast", "% Change")
    row.names(forecasts) <- names
    forecasts <- forecasts[order(-forecasts[, 3]), ]
    # Display the top 10 stocks by forecasted percentage change.
    forecasts[1:10, ]
    rm(list = ls())

The auto.arima() function is a must-have for forecasters using R. This function fits an ARIMA model to your data with the best value of some measure of statistical accuracy; the default is the corrected Akaike Information Criterion (AICc), which will work fine for our purposes. Once the forecasts are complete, this script also prints the top 10 stocks in terms of percentage change over a ten-day forecast.

These blogs have discussed how to gather and analyze stock market data using R. I hope they have been informative and will help you with data analysis in the future.

About the author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.


UITableView Touch Up

Packt
14 Dec 2016
24 min read
In this article by Donny Wals, from the book Mastering iOS 10 Programming, we will go through UITableView touch up. Chances are that you have built a simple app before. If you have, there's a good probability that you have used UITableView. UITableView is a core component in many applications; virtually all applications that display a list of some sort make use of it. Because UITableView is such an important component in the world of iOS, I want you to dive into it straight away. You may or may not have looked at UITableView before, but that's okay. You'll be up to speed in no time, and you'll learn how this component achieves that smooth 60 frames per second (fps) scrolling that users know and love. If your app can maintain a steady 60 fps, it will feel more responsive, and scrolling will feel perfectly smooth to users, which is exactly what you want. We'll also cover new UITableView features that make it even easier to optimize your table views.

In addition to covering the basics of UITableView, you'll learn how to make use of Contacts.framework to build an application that shows a list of your users' contacts, similar to what the native Contacts app does on iOS. The contacts in the UITableView component will be rendered in a custom cell. You will learn how to create such a cell using Auto Layout. Auto Layout is a technique that will be covered throughout this book because it's an important part of every iOS developer's tool belt. If you haven't used Auto Layout before, that's okay. This article will cover a few basics, and the layout is relatively simple, so you can gradually get used to it as we go.

To sum it all up, this article covers:

- Configuring and displaying UITableView
- Fetching a user's contacts through Contacts.framework
- The UITableView delegate and data source

Setting up the User Interface (UI)

Every time you start a new project in Xcode, you have to pick a template for your application. These templates will provide you with a bit of boilerplate code, or sometimes they will configure a very basic layout for you. Throughout this book, the starting point will always be the Single View Application template. This template provides you with a bare minimum of boilerplate code, which enables you to start from scratch every time and boosts your knowledge of how the Xcode-provided templates work internally.

In this article, you'll create an app called HelloContacts. This is the app that will render your user's contacts in a UITableView. Create a new project by selecting File | New | Project. Select the Single View template, give your project a name (HelloContacts), and make sure you select Swift as the language for your project. You can uncheck all CoreData- and testing-related checkboxes; they aren't of interest right now. Your configuration should resemble the following screenshot.

Once you have your app configured, open the Main.storyboard file. This is where you will create your UITableView and give it a layout. Storyboards are a great way for you to work on an app and have all of the screens in your app visualized at once. If you have worked with UITableView before, you may have used UITableViewController. This is a subclass of UIViewController that has a UITableView set as its view. It also abstracts away some of the more complex concepts of UITableView that we are interested in.
So, for now, you will be working with the UIViewController subclass that holds UITableView and is configured manually. On the right hand side of the window, you'll find the Object Library. The Object Library is opened by clicking on the circular icon that contains a square in the bottom half of the sidebar.  In the Object Library, look for a UITableView. If you start typing the name of what you're looking for in the search box, it should automatically be filtered out for you. After you locate it, drag it into your app's view. Then, with UITableView selected, drag the white squares to all corners of the screen so that it covers your entire viewport. If you go to the dynamic viewport inspector at the bottom of the screen by clicking on the current device name as shown in the following screenshot and select a larger device such as an iPad or a smaller device such as the iPhone SE, you will notice that UITableView doesn't cover the viewport as nicely. On smaller screens, the UITableView will be larger than the viewport. On larger screens, UITableView simply doesn't stretch: This is why we will use Auto Layout. Auto Layout enables you to create layouts that will adapt to different viewports to make sure it looks good on all of the devices that are currently out there. For UITableView, you can pin the edges of the table to the edges of the superview, which is the view controller's main view. This will make sure the table stretches or shrinks to fill the screen at all times. Auto Layout uses constraints to describe layouts. UITableView should get some constraints that describe its relation to the edges of your view controller. The easiest way to add these constraints is to let Xcode handle it for you. To do this, switch the dynamic viewport inspector back to the view you initially selected. First, ensure UITableView properly covers the entire viewport, and then click on the Resolve Auto Layout Issues button at the bottom-right corner of the screen and select Reset to Suggested Constraints: This button automatically adds the required constraints for you. The added constraints will ensure that each edge of UITableView sticks to the corresponding edge of its superview. You can manually inspect these constraints in the Document Outline on the left-hand side of the Interface Builder window. Make sure that everything works by changing the preview device in the dynamic viewport inspector again. You should verify that no matter which device you choose now, the table will stretch or shrink to cover the entire view at all times. Now that you have set up your project with UITableView added to the interface and the constraints have been added, it's time to start writing some code. The next step is to use Contacts.framework to fetch some contacts from your user's address book. Fetching a user's contacts In the introduction for this article, it was mentioned that we would use Contacts.framework to fetch a user's contacts list and display it in UITableView. Before we get started, we need to be sure we have access to the user's address book. In iOS 10, privacy is a bit more restrictive than it was earlier. If you want to have access to a user's contacts, you need to specify this in your application's Info.plist file. If you fail to specify the correct keys for the data your application uses, it will crash without warning. So, before attempting to load your user's contacts, you should take care of adding the correct key to Info.plist. 
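If you would rather edit the raw XML than use the property list editor described next, the underlying key is NSContactsUsageDescription. A minimal sketch of the entry is shown below; the description string is only a placeholder and you should adapt it to your own app:

<key>NSContactsUsageDescription</key>
<string>read contacts to display them in a list</string>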
To add the key, open Info.plist from the list of files in the Project Navigator on the left and hover over Information Property List at the top of the file. A plus icon should appear, which will add an empty key with a search box when you click on it. If you start typing Privacy – contacts, Xcode will filter options until there is just one left, that is, the key for contact access. In the value column, fill in a short description about what you are going to use this access for. In our app, something like read contacts to display them in a list should be sufficient. Whenever you need access to photos, Bluetooth, camera, microphone, and more, make sure you check whether your app needs to specify this in its Info.plist. If you fail to specify a key that's required, your app will crash and will not make it past Apple's review process. Now that you have configured your app to specify that it wants to be able to access contact data, let's get down to writing some code. Before reading the contacts, you'll need to make sure the user has given permission to access contacts. You'll have to check this first, after which the code should either fetch contacts or it should ask the user for permission to access the contacts. Add the following code to ViewController.swift. After doing so, we'll go over what this code does and how it works: class ViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() let store = CNContactStore() if CNContactStore.authorizationStatus(for: .contacts) == .notDetermined { store.requestAccess(for: .contacts, completionHandler: {[weak self] authorized, error in if authorized { self?.retrieveContacts(fromStore: store) } }) } else if CNContactStore.authorizationStatus(for: .contacts) == .authorized { retrieveContacts(fromStore: store) } } func retrieveContacts(fromStore store: CNContactStore) { let keysToFetch = [CNContactGivenNameKey as CNKeyDescriptor, CNContactFamilyNameKey as CNKeyDescriptor, CNContactImageDataKey as CNKeyDescriptor, CNContactImageDataAvailableKey as CNKeyDescriptor] let containerId = store.defaultContainerIdentifier() let predicate = CNContact.predicateForContactsInContainer(withIdentifier: containerId) let contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch) print(contacts) } } In the viewDidLoad method, we will get an instance of CNContactStore. This is the object that will access the user's contacts database to fetch the results you're looking for. Before you can access the contacts, you need to make sure that the user has given your app permission to do so. First, check whether the current authorizationStatus is equal to .notDetermined. This means that we haven't asked permission yet and it's a great time to do so. When asking for permission, we pass a completionHandler. This handler is called a closure. It's basically a function without a name that gets called when the user has responded to the permission request. If your app is properly authorized after asking permission, the retrieveContacts method is called to actually perform the retrieval. If the app already had permission, we'll call retrieveContacts right away. Completion handlers are found throughout the Foundation and UIKit frameworks. You typically pass them to methods that perform a task that could take a while and is performed parallel to the rest of your application so the user interface can continue running without waiting for the result. 
A simplified implementation of such a function could look like this:

func doSomething(completionHandler: (Int) -> Void) {
    // perform some actions
    let resultingValue = theResultOfSomeAction()
    completionHandler(resultingValue)
 } You'll notice that actually calling completionHandler looks identical to calling an ordinary function or method. The idea of such a completion handler is that we can specify a block of code, a closure, that is supposed to be executed at a later time. For example, after performing a task that is potentially slow. You'll find plenty of other examples of callback handlers and closures throughout this book as it's a common pattern in programming. The retrieveContacts method in ViewController.swift is responsible for actually fetching the contacts and is called with a parameter named store. It's set up like this so we don't have to create a new store instance since we already created one in viewDidLoad. When fetching a list of contacts, you use a predicate. We won't go into too much detail on predicates and how they work yet. The main goal of the predicate is to establish a condition to filter the contacts database on. In addition to a predicate, you also provide a list of keys your code wants to fetch. These keys represent properties that a contact object can have. They represent data, such as e-mail, phone numbers, names, and more. In this example, you only need to fetch a contact's given name, family name, and contact image. To make sure the contact image is available at all, there's a key request for that as well. When everything is configured, a call is made to unifiedContacts(matching:, keysToFetch:). This method call can throw an error, and since we're currently not interested in the error, try! is used to tell the compiler that we want to pretend the call can't fail and if it does, the application should crash. When you're building your own app, you might want to wrap this call in do {} catch {} block to make sure that your app doesn't crash if errors occur. If you run your app now, you'll see that you're immediately asked for permission to access contacts. If you allow this, you will see a list of contacts printed in the console. Next, let's display some content information in the contacts table! Creating a custom UITableViewCell for our contacts To display contacts in your UITableView, you will need to set up a few more things. First and foremost, you'll need to create a UITableViewCell that displays contact information. To do this, you'll create a custom UITableViewCell by creating a subclass. The design for this cell will be created in Interface Builder, so you're going to add @IBOutlets in your UITableViewCell subclass. These @IBOutlets are the connection points between Interface Builder and your code. Designing the contact cell The first thing you need to do is drag a UITableViewCell out from the Object Library and drop it on top of UITableView. This will add the cell as a prototype. Next, drag out UILabel and a UIImageView from the Object Library to the newly added UITableViewCell, and arrange them as they are arranged in the following figure. After you've done this, select both UILabel and UIImage and use the Reset to Suggested Constraints option you used earlier to lay out UITableView. If you have both the views selected, you should also be able to see the blue lines that are visible in following screenshot: These blue lines represent the constraints that were added to lay out your label and image. You can see a constraint that offsets the label from the left side of the cell. However, there is also a constraint that spaces the label and the image. 
The horizontal line through the middle of the cell is a constraint that vertically centers the label and image inside of the cell. You can inspect these constraints in detail in the Document Outline on the right. Now that our cell is designed, it's time to create a custom subclass for it and hook up @IBOutlets. Creating the cell subclass To get started, create a new file (File | New | File…) and select a Cocoa Touch file. Name the file ContactTableViewCell and make sure it subclasses UITableViewCell, as shown in the following screenshot: When you open the newly created file, you'll see two methods already added to the template for ContactTableViewCell.swift: awakeFromNib and setSelected(_:animated:). The awakeFromNib method is called the very first time this cell is created; you can use this method to do some initial setup that's required to be executed only once for your cell. The other method is used to customize your cell when a user taps on it. You could, for instance, change the background color or text color or even perform an animation. For now, you can delete both of these methods and replace the contents of this class with the following code: @IBOutlet var nameLabel: UILabel! @IBOutlet var contactImage: UIImageView! The preceding code should be the entire body of the ContactTableViewCell class. It creates two @IBOutlets that will allow you to connect your prototype cell with so that you can use them in your code to configure the contact's name and image later. In the Main.storyboard file, you should select your cell, and in the Identity Inspector on the right, set its class to ContactTableViewCell (as shown in the following screenshot). This will make sure that Interface Builder knows which class it should use for this cell, and it will make the @IBOutlets available to Interface Builder. Now that our cell has the correct class, select the label that will hold the contact's name in your prototype cell and click on Connections Inspector. Then, drag a new referencing outlet from the Connections Inspector to your cell and select nameLabel to connect the UILabel in the prototype cell to @IBOutlet in the code (refer to the following screenshot). Perform the same steps for UIImageView and select the contactImage option instead of nameLabel. The last thing we need to do is provide a reuse identifier for our cell. Click on Attributes Inspector after selecting your cell. In Attributes Inspector, you will find an input field for the Identifier. Set this input field to ContactTableViewCell. The reuse identifier is the identifier that is used to inform the UITableView about the cell it should retrieve when it needs to be created. Since the custom UITableViewCell is all set up now, we need to make sure UITableView is aware of the fact that our ViewController class will be providing it with the required data and cells. Displaying the list of contacts When you're implementing UITableView, it's good to be aware of the fact that you're actually working with a fairly complex component. This is why we didn't pick a UITableViewController at the beginning of this article. UITableViewController does a pretty good job of hiding the complexities of UITableView from thedevelopers. The point of this article isn't just to display a list of contacts; it's purpose is also to introduce some advanced concepts about a construct that you might have seen before, but never have been aware of. Protocols and delegation  Throughout the iOS SDK and the Foundation framework the delegate design pattern is used. 
Delegation provides a way for objects to have some other object handle tasks on their behalf. This allows great decoupling of certain tasks and provides a powerful way to allow communication between objects. The following image visualizes the delegation pattern for a UITableView component and its UITableViewDataSource:

The UITableView uses two objects that help in the process of rendering a list. One is called the delegate, the other is called the data source. When you use a UITableView, you need to explicitly configure the data source and delegate properties. At runtime, the UITableView will call methods on its delegate and data source in order to obtain information about cells, handle interactions, and more. If you look at the documentation for the UITableView delegate property, it will tell you that its type is UITableViewDelegate?. This means that the delegate's type is UITableViewDelegate. The question mark indicates that this value could be nil; we call this an Optional. The reason for the delegate to be Optional is that it might not ever be set at all.

Diving deeper into what this UITableViewDelegate is exactly, you'll learn that it's actually a protocol and not a class or struct. A protocol provides a set of properties and/or methods that any object that conforms to (or adopts) this protocol must implement. Sometimes a protocol will provide optional methods, as UITableViewDelegate does. If this is the case, we can choose which delegate methods we want to implement and which we want to omit. Other protocols have mandatory methods. The UITableViewDataSource has a couple of mandatory methods to ensure that a data source is able to provide UITableView with the minimum amount of information needed in order to render the cells you want to display.

If you've never heard of delegation and protocols before, you might feel like this is all a bit foreign and complex. That's okay; throughout this book you'll gain a deeper understanding of protocols and how they work. In particular, the next section, where we'll cover Swift and protocol-oriented programming, should provide you with a very thorough overview of what protocols are and how they work. For now, it's important to be aware that a UITableView always asks another object for data through the UITableViewDataSource protocol, and that their interactions are handled through the UITableViewDelegate.

If you were to look at what UITableView does when it's rendering contents, it could be dissected like this:

UITableView needs to reload the data.
UITableView checks whether it has a delegate; it asks the dataSource for the number of sections in this table.
Once the delegate responds with the number of sections, the table view will figure out how many cells are required for each section. This is done by asking the dataSource for the number of cells in each section.
Now that the table view knows the amount of content it needs to render, it will ask its data source for the cells that it should display.
Once the data source provides the required cells based on the number of contacts, the UITableView will request and display the cells one by one.

This process is a good example of how UITableView uses other objects to provide data on its behalf. Now that you know how delegation works for UITableView, it's about time you start implementing this in your own app. 
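To make the pattern a little less abstract before moving on, here is a minimal sketch of delegation in plain Swift, outside of UIKit. The protocol and type names are invented purely for illustration and are not part of the book's project:

protocol TicketMachineDelegate: class {
    func ticketMachine(_ machine: TicketMachine, didPrint ticket: String)
}

class TicketMachine {
    weak var delegate: TicketMachineDelegate?

    func printTicket() {
        // hand the result off to whichever object volunteered to handle it
        delegate?.ticketMachine(self, didPrint: "Ticket #1")
    }
}

Any class that adopts TicketMachineDelegate and assigns itself to the delegate property will be called back when printTicket() runs, which is exactly the relationship UITableView has with its own delegate and dataSource.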
Conforming to the UITableViewDataSource and UITableViewDelegate protocol In order to specify the UITableView's delegate and data source, the first thing you need to do is to create an @IBOutlet for your UITableView and connect it to ViewController.swift. Add the following line to your ViewController, above the viewDidLoad method. @IBOutlet var tableView: UITableView! Now, using the same technique as before when designing UITableViewCell, select the UITableView in your Main.storyboard file and use the Connections Inspector to drag a new outlet reference to the UITableView. Make sure you select the tableView property and that's it. You've now hooked up your UITableView to the ViewController code. To make the ViewController code both the data source and the delegate for UITableView, it will have to conform to the UITableViewDataSource and UITableViewDelegate protocols. To do this, you have to add the protocols you want to conform to your class definition. The protocols are added, separated by commas, after the superclass. When you add the protocols to the ViewController, it should look like this: class ViewController: UIViewController, UITableViewDataSource, UITableViewDelegate { // class body
}

Once you have done this, you will have an error in your code. That's because even though your class definition claims to implement these protocols, you haven't actually implemented the required functionality yet. If you look at the errors Xcode is giving you, it becomes clear that there are two methods you must implement. These methods are tableView(_:numberOfRowsInSection:) and tableView(_:cellForRowAt:). So let's fix the errors by adjusting our code a little bit in order to conform to the protocols. This is also a great time to refactor the contacts fetching a little bit. You'll want to access the contacts in multiple places, so the list should become an instance variable. Also, if you're going to create cells anyway, you might as well configure them to display the correct information right away. To do so, add the following code:

var contacts = [CNContact]()

// … viewDidLoad
// … retrieveContacts

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as! ContactTableViewCell
    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}

The preceding code is what's needed to conform to the UITableViewDataSource protocol. Right below the @IBOutlet of your UITableView, a variable is declared that will hold the list of contacts. The following code snippet was also added to the ViewController:

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}

This method is called by the UITableView to determine how many cells it will have to render. It just returns the total number of contacts that are in the contacts list. You'll notice that there's a section parameter passed to this method. That's because a UITableView can contain multiple sections. The contacts list only has a single section; if you have data that contains multiple sections, you should also implement the numberOfSections(in:) method. The second method we added was:

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as! ContactTableViewCell
    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}

This method is used to get an appropriate cell for our UITableView to display. This is done by calling dequeueReusableCell(withIdentifier:) on the UITableView instance that's passed to this method. This is because UITableView can reuse cells that are currently off screen. This is a performance optimization that allows UITableView to display vast amounts of data without becoming slow or consuming big chunks of memory. The return type of dequeueReusableCell(withIdentifier:) is UITableViewCell, and our custom outlets are not available on this class. This is why we force cast the result from that method to ContactTableViewCell. Force casting to your own subclass will make sure that the rest of your code has access to your nameLabel and contactImage. 
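As a side note (the next paragraph explains why a failed force cast crashes the app), a more defensive version of the dequeue could look roughly like the sketch below. It is not the code used in this project, just an illustration of a conditional cast:

guard let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as? ContactTableViewCell else {
    // fall back to a plain, unconfigured cell instead of crashing if the cast ever fails
    return UITableViewCell()
}

Whether you prefer the crash-early force cast or a silent fallback is a design choice; during development, crashing early usually surfaces configuration mistakes, such as a misspelled reuse identifier, much faster.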
Casting objects will convert an object from one class or struct to another. This usually only works correctly when you're casting from a superclass to a subclass like we're doing in our example. Casting can fail, so force casting is dangerous and should only be done if you want your app to crash or consider it a programming error in case the cast fails. We also grab a contact from the contacts array that corresponds to the current row of indexPath. This contact is then used to assign all the correct values to the cell and then the cell is returned. This is all the setup needed to make your UITableView display the cells. Yet, if we build and run our app, it doesn't work! A few more changes will have to be made for it to do so. Currently, the retrieveContacts method does fetch the contacts for your user, but it doesn't update the contacts variable in ViewController. Also, the UITableView won't know that it needs to reload its data unless it's told to. Currently, the last few lines of retrieveContacts will look like this: let contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch) print(contacts) Let's update these lines to the following code: contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch) tableView.reloadData() Now, the result of fetching contacts is assigned to the instance variable that's declared at the top of your ViewController. After doing that, we tell the tableView to reload its data, so it will go through the delegate methods that provide the cell count and cells again. Lastly, the UITableView doesn't know that the ViewControler instance will act as both the dataSource and the delegate. So, you should update the viewDidLoad method to assign the UITableView's delegate and dataSource properties. Add the following lines to the end of the viewDidLoad method: tableView.dataSource = self tableView.delegate = self If you build and run it now, your app works! If you're running it in the simulator or you haven't assigned images to your contacts, you won't see any images. If you'd like to assign some images to the contacts in the simulator, you can drag your images into the simulator to add them to the simulator's photo library. From there, you can add pictures to contacts just as you would on a real device. However, if you have assigned images to some of your contacts you will see their images appear. You can now scroll through all of your contacts, but there seems to be an issue. When you're scrolling down your contacts list, you might suddenly see somebody else's photo next to the name of a contact that has no picture! This is actually a performance optimization. Summary Your contacts app is complete for now. We've already covered a lot of ground on the way to iOS mastery. We started by creating a UIViewController that contains a UITableView. We used Auto Layout to pin the edges of the UITableView to the edges of main view of ViewController. We also explored the Contacts.framework and understood how to set up our app so it can access the user's contact data. Resources for Article:  Further resources on this subject: Offloading work from the UI Thread on Android [article] Why we need Design Patterns? [article] Planning and Structuring Your Test-Driven iOS App [article]
Packt
14 Dec 2016
9 min read

Provision IaaS with Terraform

 In this article by Stephane Jourdan and Pierre Pomes, the authors of Infrastructure as Code (IAC) Cookbook, the following sections will be covered: Configuring the Terraform AWS provider Creating and using an SSH key pair to use on AWS Using AWS security groups with Terraform Creating an Ubuntu EC2 instance with Terraform (For more resources related to this topic, see here.) Introduction A modern infrastructure often usesmultiple providers (AWS, OpenStack, Google Cloud, Digital Ocean, and many others), combined with multiple external services (DNS, mail, monitoring, and others). Many providers propose their own automation tool, but the power of Terraform is that it allows you to manage it all from one place, all using code. With it, you can dynamically create machines at two IaaS providers depending on the environment, register their names at another DNS provider, and enable monitoring at a third-party monitoring company, while configuring the company GitHub account and sending the application logs to an appropriate service. On top of that, it can delegate configuration to those who do it well (configuration management tools such as Chef, Puppet, and so on),all with the same tool. The state of your infrastructure is described, stored, versioned, and shared. In this article, we'll discover how to use Terraform to bootstrap a fully capable infrastructure on Amazon Web Services (AWS), deploying SSH key pairs and securing IAM access keys. Configuring the Terraform AWS provider We can use Terraform with many IaaS providers such as Google Cloud or Digital Ocean. Here we'll configure Terraform to be used with AWS. For Terraform to interact with an IaaS, it needs to have a provider configured. Getting ready To step through this section, you will need the following: An AWS account with keys A working Terraform installation An empty directory to store your infrastructure code An Internet connection How to do it… To configure the AWS provider in Terraform, we'll need the following three files: A file declaring our variables, an optional description, and an optional default for each (variables.tf) A file setting the variables for the whole project (terraform.tfvars) A provider file (provider.tf) Let's declare our variables in the variables.tf file. We can start by declaring what's usually known as the AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY environment variables: variable "aws_access_key" { description = "AWS Access Key" } variable "aws_secret_key" { description = "AWS Secret Key" } variable "aws_region" { default = "eu-west-1" description = "AWS Region" } Set the two variables matching the AWS account in the terraform.tfvars file. It's not recommended to check this file into source control: it's better to use an example file instead (that is: terraform.tfvars.example). It's also recommended that you use a dedicated Terraform user for AWS, not the root account keys: aws_access_key = "< your AWS_ACCESS_KEY >" aws_secret_key = "< your AWS_SECRET_KEY >" Now, let's tie all this together into a single file—provider.tf: provider "aws" { access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region = "${var.aws_region}" } Apply the following Terraform command: $ terraform apply Apply complete! Resources: 0 added, 0 changed, 0 destroyed. It only means the code is valid, not that it can really authenticate with AWS (try with a bad pair of keys). For this, we'll need to create a resource on AWS. 
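If you want to sanity-check the configuration again later without creating anything, Terraform also has a dry-run workflow. Assuming a reasonably recent release, the following commands are available; validate checks the syntax of your .tf files, while plan prints the actions Terraform would take without applying them:

$ terraform validate
$ terraform plan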
You now have a new file named terraform.tfstate that has been created at the root of your repository. This file is critical: it's the stored state of your infrastructure. Don't hesitate to look at it, it's a text file. How it works… This first encounter with HashiCorp Configuration Language (HCL), the language used by Terraform, looks pretty familiar: we've declared variables with an optional description for reference. We could have declared them simply with the following: variable "aws_access_key" { } All variables are referenced to use the following structure: ${var.variable_name} If the variable has been declared with a default, as our aws_region has been declared with a default of eu-west-1, this value will be used if there's no override in the terraform.tfvars file. What would have happened if we didn't provide a safe default for our variable? Terraform would have asked us for a value when executed: $ terraform apply var.aws_region AWS Region Enter a value: There's more… We've used values directly inside the Terraform code to configure our AWS credentials. If you're already using AWS on the command line, chances are you already have a set of standard environment variables: $ echo ${AWS_ACCESS_KEY_ID} <your AWS_ACCESS_KEY_ID> $ echo ${AWS_SECRET_ACCESS_KEY} <your AWS_SECRET_ACCESS_KEY> $ echo ${AWS_DEFAULT_REGION} eu-west-1 If not, you can simply set them as follows: $ export AWS_ACCESS_KEY_ID="123" $ export AWS_SECRET_ACCESS_KEY="456" $ export AWS_DEFAULT_REGION="eu-west-1" Then Terraform can use them directly, and the only code you have to type would be to declare your provider! That's handy when working with different tools. The provider.tffile will then look as simple as this: provider "aws" { } Creating and using an SSH key pair to use on AWS Now we have our AWS provider configured in Terraform, let's add a SSH key pair to use on a default account of the virtual machines we intend to launch soon. Getting ready To step through this section, you will need the following: A working Terraform installation An AWS provider configured in Terraform Generate a pair of SSH keys somewhere you remember. An example can be under the keys folder at the root of your repo: $ mkdir keys $ ssh-keygen -q -f keys/aws_terraform -C aws_terraform_ssh_key -N '' An Internet connection How to do it… The resource we want for this is named aws_key_pair. Let's use it inside a keys.tf file, and paste the public key content: resource "aws_key_pair""admin_key" { key_name = "admin_key" public_key = "ssh-rsa AAAAB3[…]" } This will simply upload your public key to your AWS account under the name admin_key: $ terraform apply aws_key_pair.admin_key: Creating... fingerprint: "" =>"<computed>" key_name: "" =>"admin_key" public_key: "" =>"ssh-rsa AAAAB3[…]" aws_key_pair.admin_key: Creation complete Apply complete! Resources: 1 added, 0 changed, 0 destroyed. If you manually navigate to your AWS account, under EC2 |Network & Security | Key Pairs, you'll now find your key: Another way to use our key with Terraform and AWS would be to read it directly from the file, and that would show us how to use file interpolation with Terraform. 
To do this, let's declare a new empty variable to store our public key in variables.tf: variable "aws_ssh_admin_key_file" { } Initialize the variable to the path of the key in terraform.tfvars: aws_ssh_admin_key_file = "keys/aws_terraform" Now let's use it in place of our previous keys.tf code, using the file() interpolation: resource "aws_key_pair""admin_key" { key_name = "admin_key" public_key = "${file("${var.aws_ssh_admin_key_file}.pub")}" } This is a much clearer and more concise way of accessing the content of the public key from the Terraform resource. It's also easier to maintain, as changing the key will only require to replace the file and nothing more. How it works… Our first resource, aws_key_pair takes two arguments (a key name and the public key content). That's how all resources in Terraform work. We used our first file interpolation, using a variable, to show how to use a more dynamic code for our infrastructure. There's more… Using Ansible, we can create a role to do the same job. Here's how we can manage our EC2 key pair using a variable, under the name admin_key. For simplification, we're using here the three usual environment variables—AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION: Here's a typical Ansible file hierarchy: ├── keys │ ├── aws_terraform │ └── aws_terraform.pub ├── main.yml └── roles └── ec2_keys └── tasks └── main.yml In the main file (main.yml), let's declare that our host (localhost) will apply the role dedicated to manage our keys: --- - hosts: localhost roles: - ec2_keys In the ec2_keys main task file, create the EC2 key (roles/ec2_keys/tasks/main.yml): --- - name: ec2 admin key ec2_key: name: admin_key key_material: "{{ item }}" with_file: './keys/aws_terraform.pub' Execute the code with the following command: $ ansible-playbook -i localhost main.yml TASK [ec2_keys : ec2 admin key] ************************************************ ok: [localhost] => (item=ssh-rsa AAAA[…] aws_terraform_ssh) PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0   Using AWS security groups with Terraform Amazon's security groups are similar to traditional firewalls, with ingress and egress rules applied to EC2 instances. These rules can be updated on-demand. We'll create an initial security group allowing ingress Secure Shell (SSH) traffic only for our own IP address, while allowing all outgoing traffic. Getting ready To step through this section, you will need the following: A working Terraform installation An AWS provider configured in Terraform An Internet connection How to do it… The resource we're using is called aws_security_group. Here's the basic structure: resource "aws_security_group""base_security_group" { name = "base_security_group" description = "Base Security Group" ingress { } egress { } } We know we want to allow inbound TCP/22 for SSH only for our own IP (replace 1.2.3.4/32 by yours!), and allow everything outbound. Here's how it looks: ingress { from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["1.2.3.4/32"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } You can add a Name tag for easier reference later: tags { Name = "base_security_group" } Apply this and you're good to go: $ terraform apply aws_security_group.base_security_group: Creating... […] aws_security_group.base_security_group: Creation complete Apply complete! Resources: 1 added, 0 changed, 0 destroyed. 
You can see your newly created security group by logging into the AWS Console and navigating to EC2 Dashboard|Network & Security|Security Groups: Another way of accessing the same AWS console information is through the AWS command line: $ aws ec2 describe-security-groups --group-names base_security_group {...} There's more… We can achieve the same result using Ansible. Here's the equivalent of what we just did with Terraform in this section: --- - name: base security group ec2_group: name: base_security_group description: Base Security Group rules: - proto: tcp from_port: 22 to_port: 22 cidr_ip: 1.2.3.4/32 Summary In this article, you learnedhow to configure the Terraform AWS provider, create and use an SSH key pair to use on AWS, and use AWS security groups with Terraform. Resources for Article: Further resources on this subject: Deploying Highly Available OpenStack [article] Introduction to Microsoft Azure Cloud Services [article] Concepts for OpenStack [article]

Packt
13 Dec 2016
13 min read

SQL Tuning Enhancements in Oracle 12c

Background Performance Tuning is one of the most critical area of Oracle databases and having a good knowledge on SQL tuning helps DBAs in tuning production databases on a daily basis. Over the years Oracle optimizer has gone through several enhancements and each release presents a best among all optimizer versions. Oracle 12c is no different. Oracle has improved the optimizer and added new features in this release to make it better than previous release. In this article we are going to see some of the explicit new features of Oracle optimizer which helps us in tuning our queries. Objective In this article, Advait Deo and Indira Karnati, authors of the book OCP Upgrade 1Z0-060 Exam guide discusses new features of Oracle 12c optimizer and how it helps in improving the SQL plan. It also discusses some of the limitations of optimizer in previous release and how Oracle has overcome those limitations in this release. Specifically, we are going to discuss about dynamic plan and how it works (For more resources related to this topic, see here.) SQL Tuning Before we go into the details of each of these new features, let us rewind and check what we used to have in Oracle 11g. Behavior in Oracle 11g R1 Whenever an SQL is executed for the first time, an optimizer will generate an execution plan for the SQL based on the statistics available for the different objects used in the plan. If statistics are not available, or if the optimizer thinks that the existing statistics are of low quality, or if we have complex predicates used in the SQL for which the optimizer cannot estimate the cardinality, the optimizer may choose to use dynamic sampling for those tables. So, based on the statistics values, the optimizer generates the plan and executes the SQL. But, there are two problems with this approach: Statistics generated by dynamic sampling may not be of good quality as they are generated in limited time and are based on a limited sample size. But a trade-off is made to minimize the impact and try to approach a higher level of accuracy. The plan generated using this approach may not be accurate, as the estimated cardinality may differ a lot from the actual cardinality. The next time the query executes, it goes for soft parsing and picks the same plan. Behavior in Oracle 11g R2 To overcome these drawbacks, Oracle enhanced the dynamic sampling feature further in Oracle11g Release 2. In the 11.2 release, Oracle will automatically enable dynamic sample when the query is run if statistics are missing, or if the optimizer thinks that current statistics are not up to the mark. The optimizer also decides the level of the dynamic sample, provided the user does not set the non-default value of the OPTIMIZER_DYNAMIC_SAMPLING parameter (default value is 2). So, if this parameter has a default value in Oracle11g R2, the optimizer will decide when to spawn dynamic sampling in a query and at what level to spawn the dynamic sample. Oracle also introduced a new feature in Oracle11g R2 called cardinality feedback. This was in order to further improve the performance of SQLs, which are executed repeatedly and for which the optimizer does not have the correct cardinality, perhaps because of missing statistics, or complex predicate conditions, or because of some other reason. In such cases, cardinality feedback was very useful. The way cardinality feedback works is, during the first execution, the plan for the SQL is generated using the traditional method without using cardinality feedback. 
However, during the optimization stage of the first execution, the optimizer notes down all the estimates that are of low quality (due to missing statistics, complex predicates, or some other reason) and monitoring is enabled for the cursor that is created. If this monitoring is enabled during the optimization stage, then, at the end of the first execution, some cardinality estimates in the plan are compared with the actual estimates to understand how significant the variation is. If the estimates vary significantly, then the actual estimates for such predicates are stored along with the cursor, and these estimates are used directly for the next execution instead of being discarded and calculated again. So when the query executes the next time, it will be optimized again (hard parse will happen), but this time it will use the actual statistics or predicates that were saved in the first execution, and the optimizer will come up with better plan. But even with these improvements, there are drawbacks: With cardinality feedback, any missing cardinality or correct estimates are available for the next execution only and not for the first execution. So the first execution always go for regression. The dynamic sample improvements (that is, the optimizer deciding whether dynamic sampling should be used and the level of the dynamic sampling) are only applicable to parallel queries. It is not applicable to queries that aren't running in parallel. Dynamic sampling does not include joins and groups by columns. Oracle 12c has provided new improvements, which eliminates the drawbacks of Oracle11g R2. Adaptive execution plans – dynamic plans The Oracle optimizer chooses the best execution plan for a query based on all the information available to it. Sometimes, the optimizer may not have sufficient statistics or good quality statistics available to it, making it difficult to generate optimal plans. In Oracle 12c, the optimizer has been enhanced to adapt a poorly performing execution plan at run time and prevent a poor plan from being chosen on subsequent executions. An adaptive plan can change the execution plan in the current run when the optimizer estimates prove to be wrong. This is made possible by collecting the statistics at critical places in a plan when the query starts executing. A query is internally split into multiple steps, and the optimizer generates multiple sub-plans for every step. Based on the statistics collected at critical points, the optimizer compares the collected statistics with estimated cardinality. If the optimizer finds a deviation in statistics beyond the set threshold, it picks a different sub-plan for those steps. This improves the ability of the query-processing engine to generate better execution plans. What happens in adaptive plan execution? In Oracle12c, the optimizer generates dynamic plans. A dynamic plan is an execution plan that has many built-in sub-plans. A sub-plan is a portion of plan that the optimizer can switch to as an alternative at run time. When the first execution starts, the optimizer observes statistics at various critical stages in the plan. An optimizer makes a final decision about the sub-plan based on observations made during the execution up to this point. Going deeper into the logic for the dynamic plan, the optimizer actually places the statistics collected at various critical stages in the plan. 
These critical stages are the places in the plan where the optimizer has to join two tables or where the optimizer has to decide upon the optimal degree of parallelism. During the execution of the plan, the statistics collector buffers a portion of the rows. The portion of the plan preceding the statistics collector can have alternative sub-plans, each of which is valid for the subset of possible values returned by the collector. This means that each of the sub-plans has a different threshold value. Based on the data returned by the statistics collector, a sub-plan is chosen which falls in the required threshold. For example, an optimizer can insert a code to collect statistics before joining two tables, during the query plan building phase. It can have multiple sub-plans based on the type of join it can perform between two tables. If the number of rows returned by the statistics collector on the first table is less than the threshold value, then the optimizer might go with the sub-plan containing the nested loop join. But if the number of rows returned by the statistics collector is above the threshold values, then the optimizer might choose the second sub-plan to go with the hash join. After the optimizer chooses a sub-plan, buffering is disabled and the statistics collector stops collecting rows and passes them through instead. On subsequent executions of the same SQL, the optimizer stops buffering and chooses the same plan instead. With dynamic plans, the optimizer adapts to poor plan choices and correct decisions are made at various steps during runtime. Instead of using predetermined execution plans, adaptive plans enable the optimizer to postpone the final plan decision until statement execution time. Consider the following simple query: SELECT a.sales_rep, b.product, sum(a.amt) FROM sales a, product b WHERE a.product_id = b.product_id GROUP BY a.sales_rep, b.product When the query plan was built initially, the optimizer will put the statistics collector before making the join. So it will scan the first table (SALES) and, based on the number of rows returned, it might make a decision to select the correct type of join. The following figure shows the statistics collector being put in at various stages: Enabling adaptive execution plans To enable adaptive execution plans, you need to fulfill the following conditions: optimizer_features_enable should be set to the minimum of 12.1.0.1 optimizer_adapive_reporting_only should be set to FALSE (default) If you set the OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter to TRUE, the adaptive execution plan feature runs in the reporting-only mode—it collects the information for adaptive optimization, but doesn't actually use this information to change the execution plans. You can find out if the final plan chosen was the default plan by looking at the column IS_RESOLVED_ADAPTIVE_PLAN in the view V$SQL. Join methods and parallel distribution methods are two areas where adaptive plans have been implemented by Oracle12c. Adaptive execution plans and join methods Here is an example that shows how the adaptive execution plan will look. Instead of simulating a new query in the database and checking if the adaptive plan has worked, I used one of the queries in the database that is already using the adaptive plan. You can get many such queries if you check V$SQL with is_resolved_adaptive_plan = 'Y'. The following queries will list all SQLs that are going for adaptive plans. 
Select sql_id from v$sql where is_resolved_adaptive_plan = 'Y'; While evaluating the plan, the optimizer uses the cardinality of the join to select the superior join method. The statistics collector starts buffering the rows from the first table, and if the number of rows exceeds the threshold value, the optimizer chooses to go for a hash join. But if the rows are less than the threshold value, the optimizer goes for a nested loop join. The following is the resulting plan: SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'dhpn35zupm8ck',cursor_child_no=>0; Plan hash value: 3790265618 ------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 445 (100)| | | 1 | SORT ORDER BY | | 1 | 73 | 445 (1)| 00:00:01| | 2 | NESTED LOOPS | | 1 | 73 | 444 (0)| 00:00:01| | 3 | NESTED LOOPS | | 151 | 73 | 444 (0)| 00:00:01| |* 4 | TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$ | 151 | 7701 | 293 (0)| 00:00:01| |* 5 | INDEX FULL SCAN | I_OBJ3 | 1 | | 20 (0)| 00:00:01| |* 6 | INDEX UNIQUE SCAN | I_TYPE2 | 1 | | 0 (0)| | |* 7 | TABLE ACCESS BY INDEX ROWID | TYPE$ | 1 | 22 | 1 (0)| 00:00:01| ------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - filter(SYSDATE@!-"O"."CTIME">.0007) 5 - filter("O"."OID$" IS NOT NULL) 6 - access("O"."OID$"="T"."TVOID") 7 - filter(BITAND("T"."PROPERTIES",8388608)=8388608) Note ----- - this is an adaptive plan If we check this plan, we can see the notes section, and it tells us that this is an adaptive plan. It tells us that the optimizer must have started with some default plan based on the statistics in the tables and indexes, and during run time execution it changed the join method for a sub-plan. You can actually check which step optimizer has changed and at what point it has collected the statistics. 
You can display this using the new format of DBMS_XPLAN.DISPLAY_CURSOR – format => 'adaptive', resulting in the following: DEO>SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(sql_id=>'dhpn35zupm8ck',cursor_child_no=>0,format=>'adaptive')); Plan hash value: 3790265618 ------------------------------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | | | 445 (100)| | | 1 | SORT ORDER BY | | 1 | 73 | 445 (1)| 00:00:01 | |- * 2 | HASH JOIN | | 1 | 73 | 444 (0)| 00:00:01 | | 3 | NESTED LOOPS | | 1 | 73 | 444 (0)| 00:00:01 | | 4 | NESTED LOOPS | | 151 | 73 | 444 (0)| 00:00:01 | |- 5 | STATISTICS COLLECTOR | | | | | | | * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$ | 151 | 7701 | 293 (0)| 00:00:01 | | * 7 | INDEX FULL SCAN | I_OBJ3 | 1 | | 20 (0)| 00:00:01 | | * 8 | INDEX UNIQUE SCAN | I_TYPE2 | 1 | | 0 (0)| | | * 9 | TABLE ACCESS BY INDEX ROWID | TYPE$ | 1 | 22 | 1 (0)| 00:00:01 | |- * 10 | TABLE ACCESS FULL | TYPE$ | 1 | 22 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------ Predicate Information (identified by operation id): --------------------------------------------------- 2 - access("O"."OID$"="T"."TVOID") 6 - filter(SYSDATE@!-"O"."CTIME">.0007) 7 - filter("O"."OID$" IS NOT NULL) 8 - access("O"."OID$"="T"."TVOID") 9 - filter(BITAND("T"."PROPERTIES",8388608)=8388608) 10 - filter(BITAND("T"."PROPERTIES",8388608)=8388608) Note ----- - this is an adaptive plan (rows marked '-' are inactive) In this output, you can see that it has given three extra steps. Steps 2, 5, and 10 are extra. But these steps were present in the original plan when the query started. Initially, the optimizer generated a plan with a hash join on the outer tables. During runtime, the optimizer started collecting rows returned from OBJ$ table (Step 6), as we can see the STATISTICS COLLECTOR at step 5. Once the rows are buffered, the optimizer came to know that the number of rows returned by the OBJ$ table are less than the threshold and so it can go for a nested loop join instead of a hash join. The rows indicated by - in the beginning belong to the original plan, and they are removed from the final plan. Instead of those records, we have three new steps added—Steps 3, 8, and 9. Step 10 of the full table scan on the TYPE$ table is changed to an index unique scan of I_TYPE2, followed by the table accessed by index rowed at Step 9. Adaptive plans and parallel distribution methods Adaptive plans are also useful in adapting from bad distributing methods when running the SQL in parallel. Parallel execution often requires data redistribution to perform parallel sorts, joins, and aggregates. The database can choose from among multiple data distribution methods to perform these options. The number of rows to be distributed determines the data distribution method, along with the number of parallel server processes. If many parallel server processes distribute only a few rows, the database chooses a broadcast distribution method and sends the entire result set to all the parallel server processes. On the other hand, if a few processes distribute many rows, the database distributes the rows equally among the parallel server processes by choosing a "hash" distribution method. In adaptive plans, the optimizer does not commit to a specific broadcast method. 
Instead, the optimizer starts with an adaptive parallel data distribution technique called hybrid data distribution. It places a statistics collector to buffer rows returned by the table. Based on the number of rows returned, the optimizer decides the distribution method. If the rows returned by the result are less than the threshold, the data distribution method switches to broadcast distribution. If the rows returned by the table are more than the threshold, the data distribution method switches to hash distribution. Summary In this article we learned the explicit new features of Oracle optimizer which helps us in tuning our queries. Resources for Article: Further resources on this subject: Oracle Essbase System 9 Components [article] Oracle E-Business Suite: Adjusting Items in Inventory and Classifying Items [article] Oracle Business Intelligence : Getting Business Information from Data [article]

Ahmed Elgoni
13 Dec 2016
6 min read

An Editor Tool Tutorial

Often, when making games, we use different programs to author the various content that goes into our game. When it’s time to stick all the pieces together, things have a habit of catching fire/spiders. Avoiding flaming arachnids is usually a good idea, so in this tutorial, we’re going to learn how to do just that. Note that for this tutorial, I’m assuming that you have some experience writing gameplay code in Unity and want to learn the ways to improve your workflow. We’re going to write a small editor tool to turn a text file representation of a level, which we’re pretending was authored in an external-level editing program, into an actual level. We’re going to turn this: Into this: Protip: Before writing any tool, it’s important that you weigh the pros and cons of writing that tool. Many people make the mistake of taking a long time to write a complicated tool, and what ends up happening is that the tool takes longer to write than it would have taken to do the tool’s job manually! Doing stuff manually isn’t fun, but remember, the end goal is to save time, not to show off your automation skills. First up, let’s understand what our level file is actually telling us. Take a look at the first image again. Every dot is a space character and the two letter words represent different kinds of tiles. From our imaginary-level design program, we know that the keywords are as follows: c1 = cloud_1 bo = bonus_tile gr = grass_tile wa = water_tile We need some way to represent this mapping of a keyword to an asset in Unity. To do this, we’re going to use scriptable objects. What is a scriptable object? From the Unity documentation: “ScriptableObject is a class that allows you to store large quantities of shared data independent from script instances.” They’re most commonly used as data containers for things such as item descriptions, dialogue trees or, in our case, a description of our keyword mapping. Our scriptable object will look like this: Take a look at the source code behind this here. Let’s go through some of the more outlandish lines of code: [System.Serializable] This tells Unity that the struct should be serialized. For our purposes, it means that the struct will be visible in the inspector and also retain its values after we enter and exit play mode. [CreateAssetMenu(fileName = "New Tileset", menuName = "Tileset/New", order = 1)] This is a nifty attribute that Unity provides for us, which lets us create an instance of our scriptable object from the assets menu. The attribute arguments are described here. In action, it looks like this: You can also access this menu by right clicking in the project view. Next up, let’s look at the actual editor window. Here’s what we want to achieve: It should take a map file, like the one at the start of this tutorial, and a definition of what the keywords in that file map to (the scriptable object we’ve just written), and use that to generate a map where each tile is TileWidth wide and TileHeight tall. The source code for the TileToolWIndow is available here. Once again, let’s take a look at the stranger lines of code: using UnityEditor; public class TileToolWindow : EditorWindow To make editor tools, we need to include the editor namespace. We’re also inheriting the EditorWindow class instead of the usual Monobehaviour. Protip: When you have using UnityEditor; in a script, your game will run fine in the editor, but will refuse to compile to a standalone. 
The reason for this is that the UnityEditor namespace isn't included in compiled builds, so your scripts would be referencing a namespace that doesn't exist anymore. To get around this, you can put your code in a special folder called Editor, which you can create anywhere in your Assets folder. Unity excludes any scripts in this folder, and in its subfolders, when it compiles your standalone build. You can have multiple Editor folders, and they can be inside other folders. You can find out more about special folders here. [MenuItem("Tools/Tile Tool")] public static void InitializeWindow() { TileToolWindow window = EditorWindow.GetWindow(typeof(TileToolWindow)) as TileToolWindow; window.titleContent.text = "Tile Tool"; window.titleContent.tooltip = "This tool will instantiate a map based off of a text file"; window.Show(); } The [MenuItem("Path/To/Option")] attribute will create a menu item in the toolbar. You can only use this attribute with static functions. It looks like this: In the InitializeWindow function, we create a window and change the window title and tooltip. Finally, we show the window. Something to look out for is EditorWindow.GetWindow, which returns an EditorWindow. We have to cast it to our derived class type if we want to be able to use any extended functions or fields. private void OnGUI() We use OnGUI just like we would with a MonoBehaviour. The only difference is that we can now make calls to the EditorGUI and EditorGUILayout classes. These contain functions that allow us to create controls that look like native Unity editor UI. A lot of the EditorGUI functions require some form of casting. For example: EditorGUILayout.ObjectField("Map File", mapFile, typeof(TextAsset), false) as TextAsset; Remember that if you're getting compilation errors, it's likely that you're forgetting to cast to the correct variable type. The last strange lines are: Undo.RegisterCreatedObjectUndo(folderObject.gameObject, "Create Folder Object"); And: GameObject instantiatedPrefab = Instantiate(currentPrefab, new Vector3(j * tileWidth, i * tileHeight, 0), Quaternion.identity, folderObject) as GameObject; Undo.RegisterCreatedObjectUndo(instantiatedPrefab, "Create Tile"); The Undo class does exactly what it says on the tin. It's always a good idea to allow your editor tool operations to be undone, especially if they follow a complicated set of steps! The rest of the code is relatively straightforward. If you want to take a look at the complete project, you can find it on GitHub. Thanks for reading! About the author Ahmed Elgoni is a wizard who is currently masquerading as a video game programmer. When he's not doing client work, he's probably trying to turn a tech demo into a cohesive game. One day, he'll succeed! You can find him on twitter @stray_train.

Getting Started with React and Bootstrap

Packt
13 Dec 2016
18 min read
In this article by Mehul Bhat and Harmeet Singh, the authors of the book Learning Web Development with React and Bootstrap, you will learn to build two simple real-time examples: Hello World! with ReactJS A simple static form application with React and Bootstrap There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and there is a lot of new theory to learn. This article will introduce you to ReactJS and Bootstrap, which you will likely come across as you learn about modern web application development. It will then take you through the theory behind the Virtual DOM and managing the state to create interactive, reusable and stateful components ourselves and then push it to a Redux architecture. You will have to learn very carefully about the concept of props and state management and where it belongs. If you want to learn code then you have to write code, in whichever way you feel comfortable. Try to create small components/code samples, which will give you more clarity/understanding of any technology. Now, let's see how this article is going to make your life easier when it comes to Bootstrap and ReactJS. Facebook has really changed the way we think about frontend UI development with the introduction of React. One of the main advantages of this component-based approach is that it is easy to understand, as the view is just a function of the properties and state. We're going to cover the following topics: Setting up the environment ReactJS setup Bootstrap setup Why Bootstrap? Static form example with React and Bootstrap (For more resources related to this topic, see here.) ReactJS React (sometimes called React.js or ReactJS) is an open-source JavaScript library that provides a view for data rendered as HTML. Components have been used typically to render React views that contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow, and explicit mutation. It is very methodical in updating the HTML document when the data changes; and provides a clean separation of components on a modern single-page application. From below example, we will have clear idea on normal HTML encapsulation and ReactJS custom HTML tag. JavaScript code: <section> <h2>Add your Ticket</h2> </section> <script> var root = document.querySelector('section').createShadowRoot(); root.innerHTML = '<style>h2{ color: red; }</style>' + '<h2>Hello World!</h2>'; </script> ReactJS code: var sectionStyle = { color: 'red' }; var AddTicket = React.createClass({ render: function() { return (<section><h2 style={sectionStyle}> Hello World!</h2></section>)} }) ReactDOM.render(<AddTicket/>, mountNode); As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner. The React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only the V in MVC, it has stateful components (stateful components remembers everything within this.state). It handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC does. Let's look at React's component life cycle and its different levels. Observe the following screenshot: React isn't an MVC framework; it's a library for building a composable user interface and reusable components. 
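To make the idea of a stateful component a little more concrete before we move on, here is a minimal sketch in the same React.createClass style used in this article; the component name TicketCounter and the ticketCount state field are made up for illustration, and mountNode stands for a mounting element just as it does in the AddTicket example above:

var TicketCounter = React.createClass({
  // getInitialState seeds this.state when the component is first mounted
  getInitialState: function() {
    return { ticketCount: 0 };
  },
  // calling setState updates the state and tells React to re-render the view
  addTicket: function() {
    this.setState({ ticketCount: this.state.ticketCount + 1 });
  },
  render: function() {
    return (
      <section>
        <h2>Tickets: {this.state.ticketCount}</h2>
        <button onClick={this.addTicket}>Add Ticket</button>
      </section>
    );
  }
});

ReactDOM.render(<TicketCounter/>, mountNode);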
React used at Facebook in production and https://www.instagram.com/ is entirely built on React. Setting up the environment When we start to make an application with ReactJS, we need to do some setup, which just involves an HTML page and includes a few files. First, we create a directory (folder) called Chapter 1. Open it up in any code editor of your choice. Create a new file called index.html directly inside it and add the following HTML5 boilerplate code: <!doctype html> <html class="no-js" lang=""> <head> <meta charset="utf-8"> <title>ReactJS Chapter 1</title> </head> <body> <!--[if lt IE 8]> <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p> <![endif]--> <!-- Add your site or application content here --> <p>Hello world! This is HTML5 Boilerplate.</p> </body> </html> This is a standard HTML page that we can update once we have included the React and Bootstrap libraries. Now we need to create couple of folders inside the Chapter 1 folder named images, css, and js (JavaScript) to make your application manageable. Once you have completed the folder structure it will look like this: Installing ReactJS and Bootstrap Once we have finished creating the folder structure, we need to install both of our frameworks, ReactJS and Bootstrap. It's as simple as including JavaScript and CSS files in your page. We can do this via a content delivery network (CDN), such as Google or Microsoft, but we are going to fetch the files manually in our application so we don't have to be dependent on the Internet while working offline. Installing React First, we have to go to this URL https://facebook.github.io/react/ and hit the download button. This will give you a ZIP file of the latest version of ReactJS that includes ReactJS library files and some sample code for ReactJS. For now, we will only need two files in our application: react.min.js and react-dom.min.js from the build directory of the extracted folder. Here are a few steps we need to follow: Copy react.min.js and react-dom.min.js to your project directory, the Chapter 1/js folder, and open up your index.html file in your editor. Now you just need to add the following script in your page's head tag section: <script type="text/js" src="js/react.min.js"></script> <script type="text/js" src="js/react-dom.min.js"></script> Now we need to include the compiler in our project to build the code because right now we are using tools such as npm. We will download the file from the following CDN path, https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js, or you can give the CDN path directly. The Head tag section will look this: <script type="text/js" src="js/react.min.js"></script> <script type="text/js" src="js/react-dom.min.js"></script> <script type="text/js" src="js/browser.min.js"></script> Here is what the final structure of your js folder will look like: Bootstrap Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a fast and easy way to develop a powerful mobile first user interface. The Bootstrap grid system allows you to create responsive 12-column grids, layouts, and components. It includes predefined classes for easy layout options (fixed-width and full width). 
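In plain text, the Chapter 1 folder and its js folder now look roughly like this:

Chapter 1/
  index.html
  css/
  images/
  js/
    react.min.js
    react-dom.min.js
    browser.min.js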
Bootstrap has a dozen pre-styled reusable components and custom jQuery plugins, such as button, alerts, dropdown, modal, tooltip tab, pagination, carousal, badges, icons, and much more. Installing Bootstrap Now, we need to install Bootstrap. Visit http://getbootstrap.com/getting-started/#download and hit on the Download Bootstrap button. This includes the compiled and minified version of css and js for our app; we just need the CSS bootstrap.min.css and fonts folder. This style sheet will provide you with the look and feel of all of the components, and is responsive layout structure for our application. Previous versions of Bootstrap included icons as images but, in version 3, icons have been replaced as fonts. We can also customize the Bootstrap CSS style sheet as per the component used in your application: Extract the ZIP folder and copy the Bootstrap CSS from the css folder to your project folder CSS. Now copy the fonts folder of Bootstrap into your project root directory. Open your index.html in your editor and add this link tag in your head section: <link rel="stylesheet" href="css/bootstrap.min.css"> That's it. Now we can open up index.html again, but this time in your browser, to see what we are working with. Here is the code that we have written so far: <!doctype html> <html class="no-js" lang=""> <head> <meta charset="utf-8"> <title>ReactJS Chapter 1</title> <link rel="stylesheet" href="css/bootstrap.min.css"> <script type="text/javascript" src="js/react.min.js"></script> <script type="text/javascript" src="js/react-dom.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script> </head> <body> <!--[if lt IE 8]> <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p> <![endif]--> <!-- Add your site or application content here --> </body> </html> Using React So now we've got the ReactJS and Bootstrap style sheet and in there we've initialized our app. Now let's start to write our first Hello World app using reactDOM.render(). The first argument of the ReactDOM.render method is the component we want to render and the second is the DOM node to which it should mount (append) to: ReactDOM.render( ReactElement element, DOMElement container, [function callback]) In order to translate it to vanilla JavaScript, we use wraps in our React code, <script type"text/babel">, tag that actually performs the transformation in the browser. Let's start out by putting one div tag in our body tag: <div id="hello"></div> Now, add the script tag with the React code: <script type="text/babel"> ReactDOM.render( <h1>Hello, world!</h1>, document.getElementById('hello') ); </script> Let's open the HTML page in your browser. If you see Hello, world! in your browser then we are on good track. In the preceding screenshot, you can see it shows the Hello, world! in your browser. That's great. We have successfully completed our setup and built our first Hello World app. 
Here is the full code that we have written so far: <!doctype html> <html class="no-js" lang=""> <head> <meta charset="utf-8"> <title>ReactJS Chapter 1</title> <link rel="stylesheet" href="css/bootstrap.min.css"> <script type="text/javascript" src="js/react.min.js"></script> <script type="text/javascript" src="js/react-dom.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script> </head> <body> <!--[if lt IE 8]> <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p> <![endif]--> <!-- Add your site or application content here --> <div id="hello"></div> <script type="text/babel"> ReactDOM.render( <h1>Hello, world!</h1>, document.getElementById('hello') ); </script> </body> </html> Static form with React and Bootstrap We have completed our first Hello World app with React and Bootstrap and everything looks good and as expected. Now it's time do more and create one static login form, applying the Bootstrap look and feel to it. Bootstrap is a great way to make you app responsive grid system for different mobile devices and apply the fundamental styles on HTML elements with the inclusion of a few classes and div's. The responsive grid system is an easy, flexible, and quick way to make your web application responsive and mobile first that appropriately scales up to 12 columns per device and viewport size. First, let's start to make an HTML structure to follow the Bootstrap grid system. Create a div and add a className .container for (fixed width) and .container-fluid for (full width). Use the className attribute instead of using class: <div className="container-fluid"></div> As we know, class and for are discouraged as XML attribute names. Moreover, these are reserved words in many JavaScript libraries so, to have clear difference and identical understanding, instead of using class and for, we can use className and htmlFor create a div and adding the className row. The row must be placed within .container-fluid: <div className="container-fluid"> <div className="row"></div> </div> Now create columns that must be immediate children of a row: <div className="container-fluid"> <div className="row"> <div className="col-lg-6"></div> </div> </div> .row and .col-xs-4 are predefined classes that available for quickly making grid layouts. Add the h1 tag for the title of the page: <div className="container-fluid"> <div className="row"> <div className="col-sm-6"> <h1>Login Form</h1> </div> </div> </div> Grid columns are created by the given specified number of col-sm-* of 12 available columns. For example, if we are using a four column layout, we need to specify to col-sm-3 lead-in equal columns. Col-sm-* Small devices Col-md-* Medium devices Col-lg-* Large devices We are using the col-sm-* prefix to resize our columns for small devices. Inside the columns, we need to wrap our form elements label and input tags into a div tag with the form-group class: <div className="form-group"> <label for="emailInput">Email address</label> <input type="email" className="form-control" id="emailInput" placeholder="Email"/> </div> Forget the style of Bootstrap; we need to add the form-control class in our input elements. If we need extra padding in our label tag then we can add the control-label class on the label. Let's quickly add the rest of the elements. I am going to add a password and submit button. 
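Following the same form-group pattern, the password field and the submit button look like this (this is the same markup that appears in the complete form in the next step):

<div className="form-group">
  <label for="passwordInput">Password</label>
  <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
</div>
<button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>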
In previous versions of Bootstrap, form elements were usually wrapped in an element with the form-actions class. However, in Bootstrap 3, we just need to use the same form-group instead of form-actions. Here is our complete HTML code:
<div className="container-fluid"> <div className="row"> <div className="col-lg-6"> <form> <h1>Login Form</h1> <hr/> <div className="form-group"> <label for="emailInput">Email address</label> <input type="email" className="form-control" id="emailInput" placeholder="Email"/> </div> <div className="form-group"> <label for="passwordInput">Password</label> <input type="password" className="form-control" id="passwordInput" placeholder="Password"/> </div> <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button> </form> </div> </div> </div>
Now create one object called LoginformHTML inside the script tag and assign this HTML to it:
var LoginformHTML = <div className="container-fluid"> <div className="row"> <div className="col-lg-6"> <form> <h1>Login Form</h1> <hr/> <div className="form-group"> <label for="emailInput">Email address</label> <input type="email" className="form-control" id="emailInput" placeholder="Email"/> </div> <div className="form-group"> <label for="passwordInput">Password</label> <input type="password" className="form-control" id="passwordInput" placeholder="Password"/> </div> <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button> </form> </div> </div>
We will pass this object to the ReactDOM.render() method instead of directly passing the HTML:
ReactDOM.render(LoginformHTML, document.getElementById('hello'));
Our form is ready. Now let's see how it looks in the browser: The compiler is unable to parse our HTML because we have not enclosed one of the div tags properly. You can see in our HTML that we have not closed the wrapper container-fluid at the end. Now close the wrapper tag at the end and open the file again in your browser. Note: Whenever you hand-code (write) your HTML, please double-check your start and end tags. Every tag should be written and closed properly, otherwise it will break your UI/frontend look and feel. Here is the HTML after closing the div tag:
<!doctype html> <html class="no-js" lang=""> <head> <meta charset="utf-8"> <title>ReactJS Chapter 1</title> <link rel="stylesheet" href="css/bootstrap.min.css"> <script type="text/javascript" src="js/react.min.js"></script> <script type="text/javascript" src="js/react-dom.min.js"></script> <script src="js/browser.min.js"></script> </head> <body> <!-- Add your site or application content here --> <div id="loginForm"></div> <script type="text/babel"> var LoginformHTML = <div className="container-fluid"> <div className="row"> <div className="col-lg-6"> <form> <h1>Login Form</h1> <hr/> <div className="form-group"> <label for="emailInput">Email address</label> <input type="email" className="form-control" id="emailInput" placeholder="Email"/> </div> <div className="form-group"> <label for="passwordInput">Password</label> <input type="password" className="form-control" id="passwordInput" placeholder="Password"/> </div> <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button> </form> </div> </div> </div> ReactDOM.render(LoginformHTML, document.getElementById('loginForm')); </script> </body> </html>
Now you can check your page in the browser and you will be able to see the form with the look and feel shown below. Now it's working fine and looks good.
Bootstrap also provides two additional classes to make your elements smaller and larger: input-lg and input-sm. You can also check the responsive behavior by resizing the browser. That's look great. Our small static login form application is ready with responsive behavior. Some of the benefits are: Rendering your component is very easy Reading component's code would be very easy with help of JSX JSX will also help you to check your layout as well as components plug-in with each other You can test your code easily and it also allows other tools integration for enhancement As we know, React is view layer, you can also use it with other JavaScript frameworks Summary Our simple static login form application and Hello World examples are looking great and working exactly how they should, so let's recap what we've learned in the this article. To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. We also looked at how the React application is initialized and started building our first form application. The Hello World app and form application which we have created demonstrates some of React's and Bootstrap's basic features such as the following: ReactDOM Render Browserify Bootstrap With Bootstrap, we worked towards having a responsive grid system for different mobile devices and applied the fundamental styles of HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action without cluttering up our markup with unnecessary classes or elements. Resources for Article: Further resources on this subject: Getting Started with ASP.NET Core and Bootstrap 4 [article] Frontend development with Bootstrap 4 [article] Gearing Up for Bootstrap 4 [article]

Tableau Data Extract Best Practices

Packt
12 Dec 2016
11 min read
In this article by Jenny Zhang, author of the book Tableau 10.0 Best Practices, you will learn the Best Practices about Tableau Data Extract. We will look into different ways of creating Tableau data extracts and technical details of how a Tableau data extract works. We will learn on how to create extract with large volume of data efficiently, and then upload and manage Tableau data extract in Tableau online. We will also take a look at refresh Tableau data extract, which is useful to keep your data up to date automatically. Finally, we will take a look using Tableau web connector to create data extract. (For more resources related to this topic, see here.) Different ways of creating Tableau data extracts Tableau provides a few ways to create extracts. Direct connect to original data sources Creating an extract by connecting to the original data source (Databases/Salesforce/Google Analytics and so on) will maintain the connection to the original data source. You can right click the extract to edit the extract and refresh the extract from the original data source. Duplicate of an extract If you create a duplicate of the extract by right click the data extract and duplicate, it will create a new .tde file and still maintain the connection to the original data source. If you refresh the duplicated data extract, it will not refresh the original data extract that you created the duplicate from. Connect to a Tableau Extract File If you create a data extract by connecting to a Tableau extract file (.tde), you will not have that connection to the original data source that the extract is created from since you are just connecting to a local .tde file. You cannot edit or refresh the data from the original data source. Duplicate this extract with connection to the local .tde file will NOT create a new .tde file. The duplication will still point to the same local .tde file. You can right click – Extract Data to create an extract out of an extract. But we do not normally do that. Technical details of how a Tableau data extract works Tableau data extract’s design principle A Tableau extract (.tde) file is a compressed snapshot of data extracted from a large variety of original data sources (excel, databases, Salesforce, NoSQL and so on). It is stored on disk and loaded into memory as required to create a Tableau Viz. There are two design principles of the Tableau extract make it ideal for data analytics. The first principle is Tableau extract is a columnar store. The columnar databases store column values rather than row values. The benefit is that the input/output time required to access/aggregate the values in a column is significantly reduced. That is why Tableau extract is great for data analytics. The second principle is how a Tableau extract is structured to make sure it makes best use of your computer’s memory. This will impact how it is loaded into memory and used by Tableau. To better understand this principle, we need to understand how Tableau extract is created and used as the data source to create visualization. When Tableau creates data extract, it defines the structure of the .tde file and creates separate files for each column in the original data source. When Tableau retrieves data from the original data source, it sorts, compresses and adds the values for each column to their own file. After that, individual column files are combined with metadata to form a single file with as many individual memory-mapped files as there are the columns in the original data source. 
Because a Tableau data extract file is a memory-mapped file, when Tableau requests data from a .tde file, the data is loaded directly into the memory by the operating system. Tableau does not have to open, process or decompress the file. If needed, the operating system continues to move data in and out of RAM to insure that all of the requested data is made available to Tableau. It means that Tableau can query data that is bigger than the RAM on the computer. Benefits of using Tableau data extract Following are the seven main benefits of using Tableau data extract Performance: Using Tableau data extract can increase performance when the underlying data source is slow. It can also speed up CustomSQL. Reduce load: Using Tableau data extract instead of a live connection to databases reduces the load on the database that can result from heavy traffic. Portability: Tableau data extract can be bundled with the visualizations in a packaged workbook for sharing with others. Pre-aggregation: When creating extract, you can choose to aggregate your data for certain dimensions. An aggregated extract has smaller size and contains only aggregated data. Accessing the values of aggregations in a visualization is very fast since all of the work to derive the values has been done. You can choose the level of aggregation. For example, you can choose to aggregate your measures to month, quarter, or year. Materialize calculated fields: When you choose to optimize the extract, all of the calculated fields that have been defined are converted to static values upon the next full refresh. They become additional data fields that can be accessed and aggregated as quickly as any other fields in the extract. The improvement on performance can be significant especially on string calculations since string calculations are much slower compared to numeric or date calculations. Publish to Tableau Public and Tableau Online: Tableau Public only supports Tableau extract files. Though Tableau Online can connect to some cloud based data sources, Tableau data extract is most common used. Support for certain function not available when using live connection: Certain function such as count distinct is only available when using Tableau data extract. How to create extract with large volume of data efficiently Load very large Excel file to Tableau If you have an Excel file with lots of data and lots of formulas, it could take a long time to load into Tableau. The best practice is to save the Excel as a .csv file and remove all the formulas. Aggregate the values to higher dimension If you do not need the values down to the dimension of what it is in the underlying data source, aggregate to a higher dimension will significantly reduce the extract size and improve performance. Use Data Source Filter Add a data source filter by right click the data source and then choose to Edit Data Source Filter to remove the data you do not need before creating the extract. Hide Unused Fields Hide unused fields before creating a data extract can speed up extract creation and also save storage space. Upload and manage Tableau data extract in Tableau online Create Workbook just for extracts One way to create extracts is to create them in different workbooks. The advantage is that you can create extracts on the fly when you need them. But the disadvantage is that once you created many extracts, it is very difficult to manage them. You can hardly remember which dashboard has which extracts. 
A better solution is to use one workbook just to create data extracts and then upload the extracts to Tableau online. When you need to create visualizations, you can use the extracts in Tableau online. If you want to manage the extracts further, you can use different workbooks for different types of data sources. For example, you can use one workbook for excel files, one workbook for local databases, one workbook for web based data and so on. Upload data extracts to default project The default project in Tableau online is a good place to store your data extracts. The reason is that the default project cannot be deleted. Another benefit is that when you use command line to refresh the data extracts, you do not need to specify project name if they are in the default project. Make sure Tableau online/server has enough space In Tableau Online/Server, it’s important to make sure that the backgrounder has enough disk space to store existing Tableau data extracts as well as refresh them and create new ones. A good rule of thumb is the size of the disk available to the backgrounder should be two to three times the size of the data extracts that are expected to be stored on it. Refresh Tableau data extract Local refresh of the published extract: Download a Local Copy of the Data source from Tableau Online. Go to Data Sources tab Click on the name of the extract you want to download Click download Refresh the Local Copy. Open the extract file in Tableau Desktop Right click on the data source in, and choose Extract- refresh Publish the refreshed Extract to Tableau Online. Right lick the extract and click Publish to server You will be asked if you wish to overwrite a file with the same name and click yes NOTE 1 If you need to make changes to any metadata, please do it before publishing to the server. NOTE 2 If you use the data extract in Tableau Online to create visualizations for multiple workbooks (which I believe you do since that is the benefit of using a shared data source in Tableau Online), please be very careful when making any changes to the calculated fields, groups, or other metadata. If you have other calculations created in the local workbook with the same name as the calculations in the data extract in Tableau Online, the Tableau Online version of the calculation will overwrite what you created in the local workbook. So make sure you have the correct calculations in the data extract that will be published to Tableau Online. Schedule data extract refresh in Tableau Online Only cloud based data sources (eg. Salesforce, Google analytics) can be refreshed using schedule jobs in Tableau online. One option is to use Tableau Desktop command to refresh non-cloud based data source in Tableau Online. Windows scheduler can be used to automate the refresh jobs to update extracts via Tableau Desktop command. Another option is to use the sync application or manually refresh the extracts using Tableau Desktop. NOTE If using command line to refresh the extract, + cannot be used in the data extract name. Tips for Incremental Refreshes Following are the tips for incremental refrences: Incremental extracts retrieve only new records from the underlying data source which reduces the amount of time required to refresh the data extract. If there are no new records to add during an incremental extract, the processes associated with performing an incremental extract still execute. The performance of incremental refresh is decreasing over time. 
This is because incremental extracts only grow in size, and as a result, the amount of data and areas of memory that must be accessed in order to satisfy requests only grow as well. In addition, larger files are more likely to be fragmented on a disk than smaller ones. When performing an incremental refresh of an extract, records are not replaced. Therefore, using a date field such as “Last Updated” in an incremental refresh could result in duplicate rows in the extract. Incremental refreshes are not possible after an additional file has been appended to a file based data source because the extract has multiple sources at that point. Use Tableau web connector to create data extract What is Tableau web connector? The Tableau Web Data Connector is the API that can be used by people who want to write some code to connect to certain web based data such as a web page. The connectors can be written in java. It seems that these web connectors can only connect to web pages, web services and so on. It can also connect to local files. How to use Tableau web connector? Click on Data | New Data source | Web Data Connector. Is the Tableau web connection live? The data is pulled when the connection is build and Tableau will store the data locally in Tableau extract. You can still refresh the data manually or via schedule jobs. Are there any Tableau web connection available? Here is a list of web connectors around the Tableau community: Alteryx: http://data.theinformationlab.co.uk/alteryx.html Facebook: http://tableaujunkie.com/post/123558558693/facebook-web-data-connector You can check the tableau community for more web connectors Summary In summary, be sure to keep in mind the following best practices for data extracts: Use full fresh when possible. Fully refresh the incrementally refreshed extracts on a regular basis. Publish data extracts to Tableau Online/Server to avoid duplicates. Hide unused fields/ use filter before creating extracts to improve performance and save storage space. Make sure there is enough continuous disk space for the largest extract file. A good way is to use SSD drivers. Resources for Article: Further resources on this subject: Getting Started with Tableau Public [article] Introduction to Practical Business Intelligence [article] Splunk's Input Methods and Data Feeds [article]

Agent Roles, Groups, Organizations, and User-Tags

Packt
12 Dec 2016
13 min read
In this article by Cedric Jacob, the author of the book Mastering Zendesk, we will learn about agent roles, groups, organizations, user-tags and how each item can be setup to serve a more advanced Zendesk setup. The reader will be guided through the different use-cases by applying the necessary actions based on the established road map. When it comes to working with an environment such as Zendesk, which was built to communicate with millions of customers, it is absolutely crucial to understand how we can manage our user accounts and their tickets without losing track of our processes. However, even when working with a smaller customer base, keeping scalability in mind, we should apply the same diligence when it comes to planning our agent roles, groups, organizations, and user tags. This article will cover the following topics: Users / agents / custom agent roles Groups Organizations User tags (For more resources related to this topic, see here.) Users/agents In Zendesk, agents are just like end-users and are classified as users. Both can be located in the same pool of listed accounts. The difference however can be found in the assigned role. The role defines what a user can or cannot do. End-users for example do not posses the necessary rights to log in to the actual helpdesk environment. Easily enough, the role for end-users is called End-user. In Zendesk, users are also referred to as people. Both are equivalent terms. The same applies to the two terms end-users and customers. You can easily access the whole list of users by following these two steps: Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar. Click on People located under MANAGE within the admin menu: Unlike for end-users, there are a few different roles that can be assigned to an agent. Out of the box, Zendesk offers the following options: Agent/Staff Team Leader Advisor Administrator While the agent and staff roles come with the necessary permissions in order to solve tickets, the team leader role allows more access to the Zendesk environment. The advisor role, in contrast, cannot solve any tickets. This role is supposed to enable the user to manage Zendesk's workflows. This entails the ability to create and edit automations, triggers, macros, views, and SLAs. The admin role includes some additional permissions allowing the user to customize and manage the Zendesk environment. Note: The number of available roles depends on your Zendesk plan. The ability to create custom roles requires the Enterprise version of Zendesk. If you do not have the option to create custom roles and do not wish to upgrade to the Enterprise plan, you may still want to read on. Other plans still allow you to edit the existing roles. Custom agent roles Obviously, we are scratching on the surface here. So let's take a closer look into the roles by creating our own custom agent role. In order to create your own custom role, simply follow these steps:  Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar. Click on People located under MANAGE within the admin menu. Click on role located at the top of the main area (next to "add"): The process of creating a custom role consists of naming and describing the role followed by defining the permissions: Permissions are categorized under the following headlines: Tickets People Help Center Tools Channels System Each category houses options to set individual permissions concerning that one specific topic. 
Let's examine these categories one by one and decide on each setting for our example role–Tier 1 Agent. Ticket Permissions In the first part, we can choose what permissions the agent should receive when it comes to handling tickets: What kind of tickets can this agent access? Those assigned to the agent only Those requested by users in this agent's organization All those within this agent's group(s) All Agent can assign the ticket to any group? Yes No What type of comments can this agent make? Private only Public and private Can edit ticket properties? Yes No Can delete tickets? Yes No Can merge tickets? Yes No Can edit ticket tags? Yes No People Permissions The second part allows us to set permissions regarding the agent's ability to manage other users/people: What access does this agent have to end-user profiles? Read only Add and edit within their organization Add, edit, and delete all May this user view lists of user profiles? Cannot browse or search for users Can view all users in your account Can add or modify groups and organizations? Yes No So what kind of access should our agent have to end-user profiles? For now, we will go for option one and choose Read only. It would make sense to forward more complicated tickets to our "Tier 2" support, who receive the permission to edit end-user profiles. Should our agent be allowed to view the full list of users? In some cases, it might be helpful if agents can search for users within the Zendesk system. In this case, we will answer our question with a "yes" and check the box. Should the agent be allowed to modify groups and organizations? None of our planned workflows seem to require this permission. We will not check this box and therefore remove another possible source of error. Help Center Permissions The third part concerns the Help Center permissions: Can manage Help Center? Yes No Does our agent need the ability to edit the Help Center? As the primary task of our "Tier 1" agents consists of answering tickets, we will not check this box and leave this permission to our administrators. Tools Permissions The fourth part gives us the option to set permissions that allow agents to make use of Zendesk Tools: What can this agent do with reports? Cannot view Can view only Can view, add, and edit What can this agent do with views? Play views only See views only Add and edit personal views Add and edit personal and group views Add and edit personal, group, and global views What can this agent do with macros? Cannot add or edit Can add and edit personal macros Can add and edit personal and group macros Can add and edit personal, group, and global macros Can access dynamic content? Yes No Should our agent have the permission to view, edit, and add reports? We do not want our agents to interact with Zendesk's reports on any level. We might, instead, want to create custom reports via GoodData, which can be sent out via e-mail to our agents. Therefore, in this case, we choose the option Cannot view. What should the agent be allowed to do with views? As we will set up all the necessary views for our agents, we will go for the "See views only" option. If there is a need for private views later on, we can always come back and change this setting retroactively. What should the "Tier 1" agent be allowed to do when it comes to macros? In our example, we want to create a very streamlined support. All creation of content should take place at the administrative level and be handled by team leaders. Therefore, we will select the "Cannot add or edit" option. 
Should the agent be allowed to access dynamic content? We will not check this option. The same reasons apply here: content creation will happen at the administrative level. Channels Permissions The fifth part allows us to set any permissions related to ticket channels: Can manage Facebook pages? Yes No There is no need for our "Tier 1" agents to receive any of these permissions as they are of an administrative nature. System Permissions Last but not least, we can decide on some more global system-related permissions: Can manage business rules? Yes No Can manage channels and extensions? Yes No Again, there is no need for our "Tier 1" agent to receive these permissions as they are of an administrative nature. Groups Groups, unlike organizations, are only meant for agents and each agent must be at least in one group. Groups play a major role when it comes to support workflows and can be used in many different ways. How to use groups becomes apparent when planning your support workflow. In our case, we have four types of support tickets: Tier 1 Support Tier 2 Support VIP Support Internal Support Each type of ticket is supposed to be answered by specific agents only. In order to achieve this, we can create one group for each type of ticket and later assign these groups to our agents accordingly. In order to review and edit already existing groups, simply follow these steps: Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar. Click on People located under MANAGE within the admin menu. Click on groups located under the search bar within the main area: Creating a group is easy. We simply choose a name and tick the box next to each agent that we would like to be associated with this group: There are two ways to add an agent to a group. While you may choose to navigate to the group itself in order to edit it, you can also assign groups to agents within their own user panel. Organizations Organizations can be very helpful when managing workflows, though there is no imperative need to associate end-users with an organization. Therefore, we should ask ourselves this: Do we need to use organizations to achieve our desired workflows? Before we can answer this question, let's take a look at how organizations work in Zendesk: When creating an organization within Zendesk, you may choose one or more domains associated with that organization. As soon as an end-user creates a ticket using an e-mail address with that specific domain, the user is added to that organization. There are a few more things you can set within an organization. So let's take a quick look at all the available options. In order to add a new organization, simply follow these steps: Click on the admin icon (gear symbol) located at the bottom of Zendesk's sidebar. Click on People located under MANAGE within the admin menu. Click on organization located at the top of the main area (next to add): When adding a new organization, Zendesk asks you to provide the following details: The name of the organization The associated domains Once we click on Save, Zendesk automatically opens this organization as a tab and shows next to any ticket associated with the organization. Here are a few more options we can set up: Tags Domains Group Users Details Notes Tags Zendesk allows us to define tags, that would automatically be added to each ticket, created by a user within this organization. Domains We can add as many associated domains as we need. Each domain should be separated by a single space. 
Group Tickets associated with this organization can be assigned to a group automatically. We can choose any group via a drop-down menu. Users We get the following two options to choose from: Can view own tickets only Can view all org tickets This allows us to enable users, who are part of this organization, to only view their own tickets or to review all the tickets created by users within this organization. If we choose for users to view all the tickets within their organization, we receive two more options: ...but not add comments ...and add comments Details We may add additional information about the organization such as an address. Notes Additionally, we may add notes only visible for agents. User-Tags To understand user tags, we need to understand how Zendesk utilizes tags and how they can help us. Tags can be added for users, organizations, and tickets, while user tags and organization tags will be ultimately applied to tickets when they are created. For instance, if a user is tagged with the vip tag, all his tickets will subsequently be tagged with the vip tag as well: We can then use that tag as a condition in our business rules. But how can we set user tags without having to do some manually? This is a very important question. In our flowchart, we require the knowledge whether a customer is in fact a VIP user in order for our business rules to escalate the tickets according to our SLA rules. Let's take a quick look at our plan: We could send VIP information via Support Form. We could use SSO and set the VIP status via a user tag. We could set the user tag via API when the subscription is bought. In our first option, we would try to send a tag from our support form to Zendesk so that the ticket is tagged accordingly. In our second option, we would set the user tag and subsequently the ticket tag via SSO (Single Sign-On). In our last option, we would set the user tag via the Zendesk API when a subscription is bought. We remember that a customer of our ExampleComp becomes eligible for VIP service only on having bought a software subscription. In our case, we might go for option number three. It is a very clean solution and also allows us to remove the user tag when the subscription is canceled. So how can we achieve this? Luckily, Zendesk offers a greatly documented and easy to understand API. We can therefore do the necessary research and forward our requirements to our developers. Before we look at any code, we should create a quick outline: User registers on ExampleComp's website: A Zendesk user is created. User subscribes to software package: The user tag is added to the existing Zendesk user. User unsubscribes from software package: The user tag is removed from the existing Zendesk user. User deletes account from ExampleComp's website: The Zendesk user is removed. All this can easily be achieved with a few lines of code. 
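For example, here is a rough sketch, using the same Users API that the snippets below are based on, of how the vip tag could be added when a subscription is bought and removed when it is cancelled. This assumes user and organization tagging is enabled on the account, and keep in mind that updating the tags attribute this way replaces whatever tags are already on the user profile:

# Add the vip tag when the subscription is bought
curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users/{id}.json -H "Content-Type: application/json" -X PUT -d '{"user": {"tags": ["vip"]}}'

# Remove the tag again when the subscription is cancelled
curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users/{id}.json -H "Content-Type: application/json" -X PUT -d '{"user": {"tags": []}}'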
You may want to refer your developers to the following webpage: https://developer.zendesk.com/rest_api/docs/core/users If you have coding experience, here are the necessary code snippets: For creating a new end-user: curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users.json -H "Content-Type: application/json" -X POST -d '{"user": {"name": "FirstName LastName", "email": "user@example.org"}}' For updating an existing user: curl -v -u {email_address}:{password} https://{subdomain}.zendesk.com/api/v2/users/{id}.json -H "Content-Type: application/json" -X PUT -d '{"user": {"name": "Roger Wilco II"}}' Summary In this article, you learned about Zendesk users, roles, groups, organizations, and user tags. Following up on our road map, we laid important groundwork for a functioning Zendesk environment by setting up some of the basic requirements for more complex workflows. Resources for Article: Further resources on this subject: Deploying a Zabbix proxy [article] Inventorying Servers with PowerShell [article] Designing Puppet Architectures [article]

What’s New in SQL Server 2016 Reporting Services

Packt
09 Dec 2016
4 min read
In this article by Robert C. Cain, coauthor of the book SQL Server 2016 Reporting Services Cookbook, we’ll take a brief tour of the new features in SQL Server 2016 Reporting Services. SQL Server 2016 Reporting Services is a true evolution in reporting technology. After making few changes to SSRS over the last several releases, Microsoft unveiled a virtual cornucopia of new features. (For more resources related to this topic, see here.) Report Portal The old Report Manager has received a complete facelift, along with many added new features. Along with it came a rename, it is now known as the Report Portal. The following is a screenshot of the new portal: KPIs KPIs are the first feature you’ll notice. The Report Portal has the ability to display key performance indicators directly, meaning your users can get important metrics at a glance, without the need to open reports. In addition, these KPIs can be linked to other report items such as reports and dashboards, so that a user can simply click on them to find more information. Mobile Reporting Microsoft recognized the users in your organization no longer use just a computer to retrieve their information. Mobile devices, such as phones and tablets, are now commonplace. You could, of course, design individual reports for each platform, but that would cause a lot of repetitive work and limit reuse. To solve this, Microsoft has incorporated a new tool, Mobile Reports. This allows you to create an attractive dashboard that can be displayed in any web browser. In addition, you can easily rearrange the dashboard layout to optimize for both phones and tablets. This means you can create your report once, and use it on multiple platforms. Below are three images of the same mobile report. The first was done via a web browser, the second on a tablet, and the final one on a phone: Paginated reports Traditional SSRS reports have now been renamed Paginated Reports, and are still a critical element in reporting. These provide the detailed information needed for day to day activities in your company. Paginated reports have received several enhancements. First, there are two new chart types, Sunburst and TreeMap. Reports may now be exported to a new format, PowerPoint. Additionally, all reports are now rendered in HTML 5 format. This makes them accessible to any browser, including those running on tablets or other platforms such as Linux or the Mac. PowerBI PowerBI Desktop reports may now be housed within the Report Portal. Currently, opening one will launch the PowerBI desktop application.However, Microsoft has announced in an upcoming update to SSRS 2016 PowerBI reports will be displayed directly within the Report Portal without the need to open the external app. Reporting applications Speaking of Apps, the Report Builder has received a facelift, updating it to a more modern user interface with a color scheme that matches the Report Portal. Report Builder has also been decoupled from the installation of SQL Server. In previous versions Report Builder was part of the SQL Server install, or it was available as a separate download. With SQL Server 2016, both the Report Builder and the Mobile Reporting tool are separate downloads making them easier to stay current as new versions are released. The Report Portal now contains links to download these tools. Excel Excel workbooks, often used as a reporting tool itself, may now be housed within the Report Portal. Opening them will launch Excel, similar to the way in which PowerBI reports currently work. 
Summary This article summarizes just some of the many new enhancements to SQL Server 2016 Reporting Services. With this release, Microsoft has worked toward meeting the needs of many users in the corporate environment, including the need for mobile reporting, dashboards, and enhanced paginated reports. For more details about these and many more features see the book SQL Server 2016 Reporting Services Cookbook, by Dinesh Priyankara and Robert C. Cain. Resources for Article: Further resources on this subject: Getting Started with Pentaho Data Integration [article] Where Is My Data and How Do I Get to It? [article] Configuring and Managing the Mailbox Server Role [article]

Designing a System Center Configuration Manager Infrastructure

Packt
09 Dec 2016
10 min read
In this article by Samir Hammoudi and Chuluunsuren Damdinsuren, the authors of Microsoft System Center Configuration Manager Cookbook - Second Edition, we will cover the following recipes: What's changed from System Center 2012 Configuration Manager? System Center Configuration Manager's new servicing models. In this article, we will learn the new servicing model, and walk through the various setup scenarios and configurations for System Center Configuration Manager Current Branch (SCCM CB). We will also look at designing and keeping a System Center Configuration Manager (SCCM) infrastructure current by using best practices such as keeping SQL Server on the site server, offloading some roles as needed, and performing in-place upgrades from CM12. What's changed from System Center 2012 Configuration Manager? We will go through the new features, changes, and removed features in CM since CM 2012. Getting ready The following are the new features in CM since CM12: In-console updates for Configuration Manager: CM uses an in-console service method called Updates and Servicing that makes it easy to locate and install updates for CM. Service Connection Point: The Microsoft Intune connector is replaced by a new site system role named Service Connection Point. The service connection point is used as a point of contact for devices you manage with Microsoft Intune, uploads usage and diagnostic data to the Microsoft cloud service, and makes updates that apply to CM available within the console. Windows 10 Servicing: You can view the dashboard which tracks all Windows 10 PCs in your environment, create servicing plans to ensure Windows 10 PCs are kept up to date, and also view alerts when Windows 10 clients are near the end of a CB/CBB support cycle. How to do it... What's new in CM capabilities This information is based on versions 1511 and 1602. You can find out if a change was made in 1602 or later by looking for the version 1602 or later tag. You can find the latest changes at https://technet.microsoft.com/en-us/library/mt757350.aspx. 
Endpoint Protection anti-malware: Real-time protection: This blocks potentially unwanted applications at download and prior to installation Scan settings: This scans mapped network drives when running a full scan Auto sample file submission settings: This is used to manage the behavior Exclusion settings: This section of the policy is improved to allow device exclusions Software updates: CM can differentiate a Windows 10 computer that connects to Windows Update for Business (WUfB) versus the computers connected to SUP You can schedule, or run manually, the WSUS clean up task from the CM console CM has the ability to manage Office 365 client updates by using the SUP (version 1602 or later) Application management: This supports Universal Windows Platform (UWP) apps The user-available apps now appear in Software Center When you create an in-house iOS app you only need to specify the installer (.ipa) file You can still enter the link directly, but you can now browse the store for the app directly from the CM console CM now supports apps you purchase in volume from the Apple Volume-Purchase Program (VPP) (version 1602 or later) Use CM app configuration policies to supply settings that might be required when the user runs an iOS app (version 1602 or later) Operating system deployment: A new task sequence (TS) type is available to upgrade computers from Windows 7/8/8.1 to Windows 10 Windows PE Peer Cache is now available that runs a TS using Windows PE Peer Cache to obtain content from a local peer, instead of running it from a DP You can now view the state, deploy the servicing plans, and get alerts of WaaS in your environment, to keep the Windows 10 current branch updated Client deployment: You can test new versions of the CM client before upgrading the rest of the site with the new software Site infrastructure: CM sites support the in-place upgrade of the site server's OS from Windows Server 2008 R2 to Windows Server 2012 R2 (version 1602 or later) SQL Server AlwaysOn is supported for CM (version 1602 or later) CM supports Microsoft Passport for Work which is an alternative sign-in method to replace a password, smart card, or virtual smart card Compliance settings: When you create a configuration item, only the settings relevant to the selected platform are available It is now easier to choose the configuration item type in the create configuration item wizard and has a number of new settings It provides support for managing settings on Mac OS X computers You can now specify kiosk mode settings for Samsung KNOX devices. 
Conditional access:
- Conditional access to Exchange Online and SharePoint Online is supported for PCs managed by CM (version 1602 or later)
- You can now restrict access to e-mail and O365 services based on the report of the Health Attestation Service (version 1602 or later)
- New compliance policy rules, such as automatic updates and passwords to unlock devices, have been added to support better security requirements (version 1602 or later)
- Enrolled and compliant devices always have access to Exchange On-Premises (version 1602 or later)

Client management:
- You can now see whether a computer is online or not via its status (version 1602 or later)
- A new option, Sync Policy, has been added under Software Center | Options | Computer Maintenance, which refreshes the machine and user policy (version 1602 or later)
- You can view the status of Windows 10 Device Health Attestation in the CM console (version 1602 or later)

Mobile device management with Microsoft Intune:
- The number of devices a user can enroll has been increased
- You can specify terms and conditions that users of the company portal must accept before they can enroll or use the app
- A device enrollment manager role has been added to help manage large numbers of devices
- CM can help you manage iOS Activation Lock, a feature of the Find My iPhone app, on iOS 7.1 and later devices (version 1602 or later)
- You can monitor terms and conditions deployments in the CM console (version 1602 or later)

On-premises Mobile Device Management:
- You can now manage mobile devices using on-premises CM infrastructure via a management interface that is built into the device OS

Removed features
Two features were removed from CM current branch's initial release in December 2015, and there will be no further support for them. If your organization uses these features, you need to find alternatives or stay with CM12.

- Out of Band Management: Native support for AMT-based computers from within the CM console has been removed.
- Network Access Protection: CM has removed support for Network Access Protection. The feature was deprecated in Windows Server 2012 R2 and is removed from Windows 10.

See also
Refer to the TechNet documentation on CM changes at https://technet.microsoft.com/en-us/library/mt622084.aspx

System Center Configuration Manager's new servicing models
The new servicing model concept is one of the biggest changes in CM. We will learn what the servicing model is and how to work with it in this article.

Getting ready
Windows 10's new servicing models
Before we dive into the new CM servicing model, we first need to understand the new Windows 10 servicing approach called Windows as a Service (WaaS). Microsoft regularly gets asked for advice on how to keep Windows devices secure, reliable, and compatible, and it has a pretty strong point of view on this: your devices will be more secure, more reliable, and more compatible if you keep up with the updates Microsoft regularly releases. In a mobile-first, cloud-first world, IT expects to have new value and new capabilities constantly flowing to them. Most users have smartphones and regularly accept the updates to their apps from the various app stores. The iOS and Android ecosystems also release updates to the OS on a regular cadence.
With this in mind, Microsoft is committed to continuously rolling out new capabilities to users around the world, but Windows is unique in that it is used in an incredibly broad set of scenarios, from a simple phone to some of the most complex and mission-critical scenarios in factories and hospitals. It is clear that one model does not fit all of these scenarios. To strike a balance between the needed updates for such a wide range of device types, there are four servicing options (summarized in Table 1) you will want to completely understand.

Table 1. Windows 10 servicing options (WaaS)

Windows Insider Program
- Key benefits: Enables testing new features before release
- Support lifetime: N/A
- Editions: Home, Pro, Enterprise, Education
- Target scenario: IT pros, developers

Current Branch (CB)
- Key benefits: Makes new features available to users immediately
- Support lifetime: Approximately 4 months
- Editions: Home, Pro, Enterprise, Education
- Target scenario: Consumers, limited number of Enterprise users

Current Branch for Business (CBB)
- Key benefits: Provides additional testing time beyond Current Branch
- Support lifetime: Approximately 8 months
- Editions: Pro, Enterprise, Education
- Target scenario: Enterprise users

Long-Term Servicing Branch (LTSB)
- Key benefits: Enables long-term, low-change deployments, like previous Windows versions
- Support lifetime: 10 years
- Editions: Enterprise LTSB
- Target scenario: ATMs, line machines, factory control

How to do it...
How will CM support Windows 10?
As you read in the previous section, Windows 10 brings with it new options for deployment and servicing models. On the System Center side, Microsoft has to provide enterprise customers with the best management for Windows 10 through CM, by helping you deploy, manage, and service Windows 10. Windows 10 comes in two basic types: Current Branch/Current Branch for Business, with a fast version model, and LTSB, with a more traditional support model. Therefore, in December 2015 Microsoft released a new version of CM to provide full support for the deployment, upgrade, and management of Windows 10. The new CM (named simply, without a calendar year) is called Configuration Manager Current Branch (CMCB), and it is designed to support the much faster pace of updates for Windows 10 by being updated periodically. This new version also simplifies the CM upgrade experience itself.

One of the core capabilities of this release is a brand new approach for updating the features and functionality of CM. Moving faster with CM allows you to take advantage of the very latest feature innovations in Windows 10, as well as other operating systems such as Apple iOS and Android, when using mobile device management (MDM) and mobile application management (MAM) capabilities. The new in-console Updates and Servicing process replaces the need to learn about, locate, and download updates from external sources. This means no more service packs or cumulative update versions to track. Instead, when you use the CM current branch, you periodically install in-console updates to get a new version. New update versions release periodically and include product updates; they can also introduce new features you may choose to use (or not use) in your deployment. Because CM will be updated frequently, each particular version is denoted with a version number, for example 1511 for the version shipped in December 2015. Updates will be released for the current branch about three times a year. The first release of the current branch was 1511 in December 2015, followed by 1602 in March 2016. Each update version is supported for 12 months from its general availability release date.
Why is there another version called Configuration Manager LTSB 2016?
There will be a release named System Center Configuration Manager LTSB 2016 that aligns with the release of Windows Server 2016 and System Center 2016. With this version, as with the previous 2007 and 2012 versions, you do not have to update the Configuration Manager site servers the way you do with the current branch.

Table 2. Configuration Manager servicing options

CM CB
- Benefits: Fully supports any type of Windows 10
- Support lifetime: Approximately 12 months
- Intended target clients: Windows 10 CB/CBB, Windows 10 LTSB

Configuration Manager LTSB 2016
- Benefits: You do not need to update frequently
- Support lifetime: 10 years
- Intended target clients: Windows 10 LTSB

Summary
In this article we learned about the new servicing model and walked through the various setup scenarios and configurations for SCCM CB.

article-image-xamarinforms
Packt
09 Dec 2016
11 min read
Save for later

Xamarin.Forms

Since the beginning of Xamarin's lifetime as a company, their motto has always been to present the native APIs on iOS and Android idiomatically to C#. This was a great strategy in the beginning, because applications built with Xamarin.iOS or Xamarin.Android were pretty much indistinguishable from native Objective-C or Java applications. Code sharing was generally limited to non-UI code, which left a potential gap to fill in the Xamarin ecosystem: a cross-platform UI abstraction. Xamarin.Forms is the solution to this problem, a cross-platform UI framework that renders native controls on each platform. Xamarin.Forms is a great framework for those who know C# (and XAML) but may not want to get into the full details of using the native iOS and Android APIs.

In this article by Jonathan Peppers, author of the book Xamarin 4.x Cross-Platform Application Development - Third Edition, we will discuss the following topics:

Using XAML with Xamarin.Forms
Data binding and MVVM with Xamarin.Forms

(For more resources related to this topic, see here.)

Using XAML in Xamarin.Forms
In addition to defining Xamarin.Forms controls from C# code, Xamarin has provided the tooling for developing your UI in XAML (Extensible Application Markup Language). XAML is a declarative language that is basically a set of XML elements that map to controls in the Xamarin.Forms framework. Using XAML is comparable to using HTML to define the UI on a webpage, with the exception that XAML in Xamarin.Forms creates C# objects that represent a native UI.

To understand how XAML works in Xamarin.Forms, let's create a new page with lots of UI on it. Return to your HelloForms project from earlier, and open the HelloFormsPage.xaml file. Add the following XAML code between the <ContentPage> tags:

<StackLayout Orientation="Vertical" Padding="10,20,10,10">
    <Label Text="My Label" XAlign="Center" />
    <Button Text="My Button" />
    <Entry Text="My Entry" />
    <Image Source="https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png" />
    <Switch IsToggled="true" />
    <Stepper Value="10" />
</StackLayout>

Go ahead and run the application on iOS and Android; your application will look something like the following screenshots:

First, we created a StackLayout control, which is a container for other controls. It can lay out controls either vertically or horizontally, one by one, as defined by the Orientation value. We also applied a padding of 10 around the sides and bottom, and 20 from the top to adjust for the iOS status bar. You may be familiar with this syntax for defining rectangles if you are familiar with WPF or Silverlight. Xamarin.Forms uses the same syntax of left, top, right, and bottom values delimited by commas.

We also used several of the built-in Xamarin.Forms controls to see how they work:

Label: We used this earlier in the article. Used only for displaying text, this maps to a UILabel on iOS and a TextView on Android.
Button: A general-purpose button that can be tapped by a user. This control maps to a UIButton on iOS and a Button on Android.
Entry: This control is a single-line text entry. It maps to a UITextField on iOS and an EditText on Android.
Image: This is a simple control for displaying an image on the screen, which maps to a UIImageView on iOS and an ImageView on Android. We used the Source property of this control, which loads an image from a web address. Using URLs on this property is nice, but it is best for performance to include the image in your project where possible.
Switch: This is an on/off switch or toggle button. It maps to a UISwitch on iOS and a Switch on Android.
Stepper: This is a general-purpose input for entering numbers via two plus and minus buttons. On iOS this maps to a UIStepper, while on Android Xamarin.Forms implements this functionality with two Buttons.

These are just some of the controls provided by Xamarin.Forms. There are also more complicated controls, such as the ListView and TableView, that you would expect for delivering mobile UIs.

Even though we used XAML in this example, you could also implement this Xamarin.Forms page from C#. Here is an example of what that would look like:

public class UIDemoPageFromCode : ContentPage
{
    public UIDemoPageFromCode()
    {
        var layout = new StackLayout
        {
            Orientation = StackOrientation.Vertical,
            Padding = new Thickness(10, 20, 10, 10),
        };
        layout.Children.Add(new Label
        {
            Text = "My Label",
            XAlign = TextAlignment.Center,
        });
        layout.Children.Add(new Button
        {
            Text = "My Button",
        });
        layout.Children.Add(new Image
        {
            Source = "https://www.xamarin.com/content/images/pages/branding/assets/xamagon.png",
        });
        layout.Children.Add(new Switch
        {
            IsToggled = true,
        });
        layout.Children.Add(new Stepper
        {
            Value = 10,
        });
        Content = layout;
    }
}

So you can see where using XAML can be a bit more readable, and is generally a bit better at declaring UIs. However, using C# to define your UIs is still a viable, straightforward approach.

Using data binding and MVVM
At this point, you should be grasping the basics of Xamarin.Forms, but you may be wondering how the MVVM design pattern fits into the picture. The MVVM design pattern was originally conceived for use along with XAML and the powerful data binding features XAML provides, so it is only natural that it is a perfect design pattern for Xamarin.Forms. Let's cover the basics of how data binding and MVVM are set up with Xamarin.Forms:

Your Model and ViewModel layers will remain mostly unchanged from the MVVM pattern.
Your ViewModels should implement the INotifyPropertyChanged interface, which facilitates data binding. To simplify things in Xamarin.Forms, you can use the BindableObject base class and call OnPropertyChanged when values change on your ViewModels.
Any Page or control in Xamarin.Forms has a BindingContext, which is the object that it is data bound to. In general, you can set a corresponding ViewModel to each view's BindingContext property.
In XAML, you can set up a data binding by using syntax of the form Text="{Binding Name}". This example would bind the Text property of the control to a Name property of the object residing in the BindingContext.
In conjunction with data binding, events can be translated to commands using the ICommand interface. So, for example, a Button's click event can be data bound to a command exposed by a ViewModel. There is a built-in Command class in Xamarin.Forms to support this.
Data binding can also be set up from C# code in Xamarin.Forms via the Binding class, as sketched after this list. However, it is generally much easier to set up bindings from XAML, since the syntax has been simplified with XAML markup extensions.
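To illustrate that last point, here is a minimal sketch of creating the same kind of binding purely from C#. The PersonViewModel and CodeBindingPage classes are made up for this example (they are not part of the book's sample); SetBinding, Binding, and BindingContext are the standard Xamarin.Forms members being demonstrated.

using Xamarin.Forms;

// A made-up ViewModel for illustration; any BindableObject with a Name property would do.
public class PersonViewModel : BindableObject
{
    string name;
    public string Name
    {
        get { return name; }
        set { name = value; OnPropertyChanged(); }
    }
}

public class CodeBindingPage : ContentPage
{
    public CodeBindingPage()
    {
        var label = new Label();

        // Equivalent of Text="{Binding Name}" in XAML.
        label.SetBinding(Label.TextProperty, new Binding("Name"));

        // The binding resolves against the page's BindingContext,
        // which child controls inherit automatically.
        BindingContext = new PersonViewModel { Name = "Hello from C#" };
        Content = label;
    }
}

Updating the Name property on the ViewModel will update the Label, exactly as it would with the XAML syntax, because OnPropertyChanged raises the notification that the binding listens for.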
Now that we have covered the basics, let's go through step by step and use Xamarin.Forms. For the most part we can reuse most of the Model and ViewModel layers, although we will have to make a few minor changes to support data binding from XAML. Let's begin by creating a new Xamarin.Forms application backed by a PCL named XamSnap:

First, create three folders in the XamSnap project named Views, ViewModels, and Models.
Add the appropriate ViewModels and Models.
Build the project, just to make sure everything is saved. You will get a few compiler errors that we will resolve shortly.

The first class we will need to edit is the BaseViewModel class; open it and make the following changes:

public class BaseViewModel : BindableObject
{
    protected readonly IWebService service = DependencyService.Get<IWebService>();
    protected readonly ISettings settings = DependencyService.Get<ISettings>();

    bool isBusy = false;

    public bool IsBusy
    {
        get { return isBusy; }
        set
        {
            isBusy = value;
            OnPropertyChanged();
        }
    }
}

First of all, we removed the calls to the ServiceContainer class, because Xamarin.Forms provides its own IoC container called the DependencyService. It has one method, Get<T>, and registrations are set up via an assembly attribute that we will add shortly. Additionally, we removed the IsBusyChanged event in favor of the INotifyPropertyChanged interface, which supports data binding. Inheriting from BindableObject gave us the helper method OnPropertyChanged, which we use to inform bindings in Xamarin.Forms that the value has changed. Notice that we didn't pass a string containing the property name to OnPropertyChanged. This method uses a lesser-known feature of .NET 4.5 called CallerMemberName, which automatically fills in the calling property's name.

Next, let's set up our needed services with the DependencyService. Open App.xaml.cs in the root of the PCL project and add the following two lines above the namespace declaration:

[assembly: Dependency(typeof(XamSnap.FakeWebService))]
[assembly: Dependency(typeof(XamSnap.FakeSettings))]

The DependencyService will automatically pick up these attributes and inspect the types we declared. Any interfaces these types implement will be returned for any future callers of DependencyService.Get<T>. I normally put all Dependency declarations in the App.cs file, just so they are easy to manage and in one place.

Next, let's modify LoginViewModel by adding a new property:

public Command LoginCommand { get; set; }

We'll use this shortly for data binding a button's command. One last change in the view model layer is to set up INotifyPropertyChanged for the MessageViewModel:

Conversation[] conversations;

public Conversation[] Conversations
{
    get { return conversations; }
    set
    {
        conversations = value;
        OnPropertyChanged();
    }
}

Likewise, you could repeat this pattern for the remaining public properties throughout the view model layer, but this is all we will need for this example.

Next, let's create a new Forms ContentPage XAML file under the Views folder named LoginPage. In the code-behind file, LoginPage.xaml.cs, we'll just need to make a few changes:

public partial class LoginPage : ContentPage
{
    readonly LoginViewModel loginViewModel = new LoginViewModel();

    public LoginPage()
    {
        Title = "XamSnap";
        BindingContext = loginViewModel;

        loginViewModel.LoginCommand = new Command(async () =>
        {
            try
            {
                await loginViewModel.Login();
                await Navigation.PushAsync(new ConversationsPage());
            }
            catch (Exception exc)
            {
                await DisplayAlert("Oops!", exc.Message, "Ok");
            }
        });

        InitializeComponent();
    }
}

We did a few important things here, including setting the BindingContext to our LoginViewModel. We set up the LoginCommand, which basically invokes the Login method and displays a message if something goes wrong. It also navigates to a new page if successful. We also set the Title, which will show up in the top navigation bar of the application.
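One refinement you might consider here, which is not part of the original sample, is passing a canExecute delegate to the Command constructor so the login button disables itself while a login is already in progress. A minimal sketch, written inside the LoginPage constructor and assuming the IsBusy property raises PropertyChanged as set up in BaseViewModel above:

loginViewModel.LoginCommand = new Command(
    async () =>
    {
        try
        {
            await loginViewModel.Login();
            await Navigation.PushAsync(new ConversationsPage());
        }
        catch (Exception exc)
        {
            await DisplayAlert("Oops!", exc.Message, "Ok");
        }
    },
    // canExecute: any Button bound to this command is disabled while this returns false.
    () => !loginViewModel.IsBusy);

// Whenever IsBusy changes, ask Xamarin.Forms to re-evaluate canExecute.
loginViewModel.PropertyChanged += (sender, e) =>
{
    if (e.PropertyName == nameof(loginViewModel.IsBusy))
        loginViewModel.LoginCommand.ChangeCanExecute();
};

The behavior of the article's sample is unchanged; the ChangeCanExecute call is simply what tells any bound Button to re-query its enabled state, which guards against double-taps while a login is in flight.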
Next, open LoginPage.xaml and add the following XAML code inside the ContentPage's content:

<StackLayout Orientation="Vertical" Padding="10,10,10,10">
    <Entry Placeholder="Username" Text="{Binding UserName}" />
    <Entry Placeholder="Password" Text="{Binding Password}" IsPassword="true" />
    <Button Text="Login" Command="{Binding LoginCommand}" />
    <ActivityIndicator IsVisible="{Binding IsBusy}" IsRunning="true" />
</StackLayout>

This sets up the basics of two text fields, a button, and a spinner, complete with all the bindings to make everything work. Since we set up the BindingContext from the LoginPage code-behind, all the properties are bound to the LoginViewModel.

Next, create a ConversationsPage as a XAML page just like before, and edit the ConversationsPage.xaml.cs code-behind:

public partial class ConversationsPage : ContentPage
{
    readonly MessageViewModel messageViewModel = new MessageViewModel();

    public ConversationsPage()
    {
        Title = "Conversations";
        BindingContext = messageViewModel;
        InitializeComponent();
    }

    protected async override void OnAppearing()
    {
        try
        {
            await messageViewModel.GetConversations();
        }
        catch (Exception exc)
        {
            await DisplayAlert("Oops!", exc.Message, "Ok");
        }
    }
}

In this case, we repeated a lot of the same steps. The exception is that we used the OnAppearing method as a way to load the conversations to display on the screen. Now let's add the following XAML code to ConversationsPage.xaml:

<ListView ItemsSource="{Binding Conversations}">
    <ListView.ItemTemplate>
        <DataTemplate>
            <TextCell Text="{Binding UserName}" />
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

In this example, we used a ListView to data bind a list of items and display them on the screen. We defined a DataTemplate, which represents a set of cells for each item in the list that the ItemsSource is data bound to. In our case, a TextCell displaying the UserName is created for each item in the Conversations list.

Last but not least, we must return to the App.xaml.cs file and modify the startup page:

MainPage = new NavigationPage(new LoginPage());

We used a NavigationPage here so that Xamarin.Forms can push and pop between different pages. This uses a UINavigationController on iOS, so you can see how the native APIs are being used on each platform. At this point, if you compile and run the application, you will get a functional iOS and Android application that can log in and view a list of conversations:

Summary
In this article we covered the basics of Xamarin.Forms and how it can be very useful for building your own cross-platform applications. Xamarin.Forms shines for certain types of apps, but can be limiting if you need to write more complicated UIs or take advantage of native drawing APIs. We discovered how to use XAML for declaring our Xamarin.Forms UIs and understood how Xamarin.Forms controls are rendered on each platform. We also dove into the concepts of data binding and how to use the MVVM design pattern with Xamarin.Forms.