Cross-platform Building

Packt
10 Aug 2015
11 min read
In this article by Karan Sequeira, author of the book Cocos2d-x Game Development Blueprints, we'll leverage the cross-platform strength of Cocos2d-x to build one of our games on both Android and Windows Phone 8!

Setting up the environment for Android

At this point in the timeline of technological evolution, Android needs no introduction. This mobile operating system was acquired by Google, and it has reached far and wide across the globe. It is now one of the top choices for application and game developers. With octa-core CPUs and ever more powerful GPUs, the sheer power offered by Android devices is a motivating factor!

While setting up the environment for Android, you have more choices than on any other mobile development platform. Your workstation could be running any of the three major operating systems (Windows, Mac OS, or Linux) and you would be able to build for Android just fine. Since Android is not fussy about its build environment, developers mostly choose their work environment based on which other platforms they will be developing for. You might choose to build for Android on a machine running Mac OS, since you would then be able to build for iOS and Android on the same machine. The same applies to a machine running Windows: you would be able to build for both Android and Windows Phone, although building for Windows Phone 8 requires at least Windows 8 installed. We will discuss more on that later.

Let's begin listing the various software required to set up the environment for Android.

Java Development Kit 7+

Since Java is the programming language used with the Android SDK, you must ensure that you have an environment set up to compile and run Java files. Go ahead and download the Java Development Kit (JDK) version 7 or later. You can download and install a Standard Edition (SE) version from the page available at the following link:

http://www.oracle.com/technetwork/java/javase/downloads/index.html

Mac OS comes with the JDK installed, so you won't have to follow this step if you're setting up your development environment on a Mac.

The Android SDK

Once you've downloaded the JDK, it's time to download the Android SDK from the following URL:

http://developer.android.com/sdk/index.html

If you're installing the Android SDK on Windows, a custom installer is provided that will take care of downloading and setting up the required parts of the Android SDK for you. For other operating systems, you can download the respective archive files and extract them at the location of your choice.

Eclipse or the ADT bundle

Eclipse is the most commonly used IDE for Android application development. You can either download a standard Eclipse IDE for Java developers and then install the ADT plugin into it, or download the ADT bundle, a specialized version of Eclipse with the ADT plugin preinstalled. At the time of writing this article, the Android developer site had already deprecated ADT in favor of Android Studio. As such, we will choose the former approach for setting up our environment in Eclipse. You can download and install the standard Eclipse IDE for Java Developers for your specific machine from the following URL:

http://www.eclipse.org/downloads/

ADT plugin for Eclipse

Once you've downloaded Eclipse, you must install a custom plugin for it: Android Development Tools (ADT).
Visit the following URL and follow the detailed instructions to install the ADT plugin into Eclipse:

http://developer.android.com/sdk/installing/installing-adt.html

Once you've followed the instructions on the preceding page, you will need to inform Eclipse about the location of the Android SDK that you downloaded earlier. So, open up the Preferences page for Eclipse and, in the Android section, fill in the location where you've placed the Android SDK.

With that done, we can fire up the SDK Manager to install a few more necessary pieces of software. To launch the Android SDK Manager, select Android SDK Manager from the Windows menu in Eclipse. The resultant window should look something like this:

By default, you will see a whole lot of packages selected, of which Android SDK Platform-tools and Android SDK Build-tools are necessary. From the rest, you must select at least one target Android platform. An additional package, the Google USB Driver, will be required if your target environment is Windows; it is located under the Extras list. I would suggest skipping the documentation and samples. If you already have an Android device, I would go one step further and suggest you skip downloading the system images as well. However, if you don't have an Android device, you will need at least one system image so that you can test on an emulator. Once you've chosen the platforms you need, proceed to install the packages and you get a window like this:

Now, select Accept License and click on the Install button to install the respective packages. Once these packages have been installed, you have to add their locations to the path variable on your machine. For Windows, modify your path variable (go to Properties | Advanced Settings | Environment Variables) to include the following:

;E:\Android\android-sdk\platform-tools

For Mac OS, you can add the following line to the .bash_profile file found under your home directory:

export PATH=$PATH:/Android/android-sdk/platform-tools/

The same line can be added to the .bashrc file found under the home directory on your Linux machine. At this point, you can use Eclipse for Android development.

Installing Cygwin for Windows

Developers working on Linux can skip this step, as most Linux distributions come with the make utility. Developers working on Mac OS may download Xcode from the Mac App Store, which will install the make utility on their Macs. We need to install Cygwin on Windows specifically for the GNU make utility. Go to the following URL and download the installer for Cygwin:

http://www.cygwin.com/install.html

Once you've run the .exe file that you downloaded and get a window like this, click on the Next button:

The next window will ask how you would like to install the required packages. Here, select the Install from Internet option and click on Next. The next window will ask where you would like to install Cygwin. I'd recommend leaving it at the default value unless you have a reason to change it. Proceed by clicking on Next. In the next window, you will be asked to specify a path where the installation can download the files it requires. Fill in a suitable path of your choice and click on Next. In the next window, you will be asked to specify your Internet connection. Leave it at the Direct Connection option and click on Next. In the next window, you will be asked to select a mirror location from where to download the installation files.
Here, select the site that is geographically closest to you and click on Next. In the window that follows, expand the Devel section and search for make: the GNU version of the 'make' utility. Click on the Skip option to select this package; the version of the make utility that will be installed is then displayed in place of Skip. Your window should look something like this:

You can now go ahead and click the Next button to begin the download and installation of the required packages. The window should look something like this:

Once all the packages have been downloaded, click on Finish to close the installation. Now that we have the make utility installed, we can go ahead and download the Android NDK, which will actually build our entire C++ code base.

The Android NDK

To download the Android NDK for your development machine, navigate to the following URL:

https://developer.android.com/tools/sdk/ndk/index.html

Unzip the downloaded archive and place it in the same location as the Android SDK. We must now add an environment variable named NDK_ROOT that points to the root of the Android NDK. For Windows, add a new user variable NDK_ROOT with the location of the Android NDK on your filesystem as its value. You can do this by going to Properties | Advanced Settings | Environment Variables. Once you've done that, the Environment Variables window should look something like this:

I'm sure you noticed the value of the NDK_ROOT variable in the previous screenshot. The value of this variable is given in Unix style and depends on the Cygwin environment, since it will be accessed within a Cygwin bash shell while executing the build script for each Android project. Mac OS and Linux users can add the following line to their .bash_profile and .bashrc files, respectively:

export NDK_ROOT=/Android/android-ndk-r10

We have now successfully finished setting up the environment to build our Cocos2d-x games on Android. To test this, open up a Cygwin bash terminal (on Windows) or a standard terminal (on Mac OS or Linux) and navigate to the Cocos2d-x test bed located inside the samples folder of your Cocos2d-x source. Now, navigate to the proj.android folder and run the build_native.sh file. This is what my Cygwin bash terminal looks like on a Windows 7 machine:

If you've followed the preceding instructions correctly, the build_native.sh script will compile the C++ source files required by the TestCpp project, resulting in a single shared object (.so) file in the libs folder within the proj.android folder.

Creating an Android Virtual Device

We're close to running the game, but we need to create an Android Virtual Device (AVD) before we proceed. Open up the Android Virtual Device Manager from the Windows menu and click on Create. In the next window, fill in the required details as per your requirements and configuration and click OK. This is what my window looks like with everything filled in:

From the Android Virtual Device Manager window, select the newly created AVD and click on Start to boot it.

Building the tests on Android

With an Android device ready to run our project, let's begin by importing the project into Eclipse. Within Eclipse, select File | Import.... In the following window, select Existing Projects into Workspace under the General setting and click on Next. In the next window, browse to the proj.android folder under the cocos2d-x-2.2.5\samples\Cpp\TestCpp path and click on Finish. Once imported, you can find the TestCpp project under Package Explorer.
It should look something like this:

As you can see, there are a few errors with the project. If you look at the Problems view (Window | Show View | Problems) located in the bottom half of Eclipse, you might see something like this:

All these errors are due to the fact that the Android project for our game depends on Cocos2d-x's Android project for Android-specific functionality: things such as the actual OpenGL surface where everything is rendered, the music player, accelerometer functionality, and much more. So let's import the Android project for Cocos2d-x located inside the following path in your Cocos2d-x source bundle:

cocos2d-x-2.2.5\cocos2dx\platform\android

You can import it the same way you imported TestCpp. Once the project has been imported, it will be titled libcocos2dx in Package Explorer. Now, select Clean... from the Project menu. You will notice that when the clean operation has finished, the dependency on libcocos2dx is taken care of and the TestCpp project builds error-free.

Running the tests on Android

Running the tests is as simple as right-clicking on the TestCpp project in Package Explorer and selecting Run As | Android Application. It might take a bit more time on an emulator than on an actual device, but ultimately you will have something like this:

Summary

In this article, you learned which software components are needed to set up your workstation to build and run an Android native application. You also set up an Android Virtual Device and ran the Cocos2d-x test bed application on it.

Data Types and Fields

Packt
10 Aug 2015
30 min read
In this article by David Studebaker and Christopher Studebaker, authors of the book Programming Microsoft Dynamics NAV 2015, we explain that the design of an application should begin at the simplest level: with the design of the data elements. The types of data our development tool supports have a significant effect on our design. Because NAV is designed for financially oriented business applications, NAV data types are financially and business oriented. In this article, we will cover many of the data types we use within NAV. For each data type, we will cover some of the more frequently modified field properties and how particular properties, such as FieldClass, are used to support application functionality. FieldClass is a fundamental property which defines whether the contents of the field are data to be processed or control information to be interpreted.

Data types

We are going to segregate the data types into several groups. We will first look at Fundamental data types and then at Complex data types.

Fundamental data types

Fundamental data types are the basic components from which the complex data types are formed. They are grouped into Numeric, String, and Date/Time data types.

Numeric data

Just like other systems, Microsoft Dynamics NAV 2015 supports several numeric data types. The specifications for each NAV data type are defined for NAV, independent of the supporting SQL Server database rules. However, some data types are stored and handled somewhat differently from a SQL Server point of view than the way they appear to us as NAV developers and users. For more details on the SQL Server-specific representations of various data elements, refer to the Developer and IT Pro Help. Our discussion will focus on NAV representation and handling for each data type. The various numeric data types are as follows:

Integer: This is an integer number ranging from -2,147,483,647 to +2,147,483,647.

Decimal: This is a decimal number in the range of +/- 999,999,999,999,999.99. Although it is possible to construct larger numbers, errors such as overflow, truncation, or loss of precision might occur. In addition, there is no facility to display or edit larger numbers.

Option: This is a special instance of an integer, stored as an integer number ranging from 0 to +2,147,483,647. An option is normally represented in the body of our C/AL code as an option string. We can compare an option to an integer in C/AL rather than using the option string, but this is not a good practice because it eliminates the self-documenting aspect of an option field. An option string is a set of choices listed in a comma-separated string, one of which is chosen and stored as the current option. Since the maximum length of this string is 250 characters, the practical maximum number of choices for a single option is less than 125. The currently selected choice is stored in the option field as the ordinal position of that option within the set. For example, a selection from the option string of red, yellow, and blue would be stored as 0 (red), 1 (yellow), or 2 (blue): if red were selected, 0 would be stored in the variable, and if blue were selected, 2 would be stored. Quite often, an option string starts with a blank to allow an effective choice of "none chosen", for example (blank, Hourly, Daily, ...). A short C/AL sketch following the string types below shows how an option is stored and compared.
Boolean: A Boolean variable is stored as 1 or 0. In C/AL code, it is programmatically referred to as True or False, but in properties it is sometimes referred to as Yes or No. Boolean variables may be displayed as Yes or No (language dependent), a checkmark or blank, or True or False.

BigInteger: This is an 8-byte integer, as opposed to the 4 bytes of Integer. BigIntegers are for very big numbers (from -9,223,372,036,854,775,807 to 9,223,372,036,854,775,807).

Char: This is a numeric code between 0 and 65535 (hexadecimal FFFF) representing a single 16-bit Unicode character. Char variables can operate either as text or as numbers: numeric operations can be done on them, and they can also be assigned individual text character values. Char variables cannot be defined as permanent variables in a table; they can only be defined as working storage variables within C/AL objects.

Byte: This is a single 8-bit ASCII character with a value ranging from 0 to 255. Like Char, Byte variables can operate either as text or as numbers, and they cannot be defined as permanent variables in a table, only as working storage variables within C/AL objects.

Action: This is a variable returned from a PAGE.RUNMODAL function or RUNMODAL (Page) function that specifies what action a user performed on a page. The possible values are OK, Cancel, LookupOK, LookupCancel, Yes, No, RunObject, and RunSystem.

ExecutionMode: This specifies the mode in which a session runs. The possible values are Debug or Standard.

String data

The following are the String data types:

Text: This contains any string of alphanumeric characters. In a table, a Text field can be from 1 to 250 characters long. In working storage within an object, a Text variable can be any length if no length is defined. If a maximum length is defined, it must not exceed 1024. NAV 2015 does not require a length to be specified, but if we define a maximum length, it will be enforced. When calculating the length of a record for design purposes (relative to the maximum record length of 8,000 bytes), the full defined field length should be counted.

Code: Although the Help says that the length constraints for Code variables are the same as those for Text variables, the C/AL Editor enforces length limits of 1 to 250 characters. All letters are automatically converted to uppercase when data is entered into a Code variable, and any leading or trailing spaces are removed.
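Here is the promised minimal C/AL sketch of option storage and comparison. It is our own illustration, not from the book: the variable names and the option string are assumptions, with Color defined in C/AL Globals as an Option whose OptionString is " ,Red,Yellow,Blue" and OrdinalValue defined as an Integer.

    // Color : Option, OptionString = " ,Red,Yellow,Blue" (the blank means "none chosen")
    // OrdinalValue : Integer
    Color := Color::Yellow;        // what is actually stored is the ordinal position: 2
    OrdinalValue := Color;         // an Option assigns directly to an Integer; now 2
    IF Color = Color::Yellow THEN  // the self-documenting comparison via the option string
      MESSAGE('Color is %1, stored as %2',Color,OrdinalValue);

Comparing against Color::Yellow rather than the literal 2 keeps the code self-documenting, which is the practice recommended above.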
Date/Time data

The following are the Date/Time data types:

Date: This contains an integer number which is interpreted as a date ranging from January 1, 1754 to December 31, 9999. A 0D (numeral zero, letter D) represents an undefined date, stored as a SQL Server DateTime field interpreted as January 1, 1753. According to the Developer and IT Pro Help, NAV 2015 also supports a Date of 1/1/0000, presumably as a special case for backward compatibility, but this is not supported by SQL Server. A date constant can be written as the letter D preceded by either six digits in the format MMDDYY or eight digits as MMDDYYYY (where M = month, D = day, and Y = year). For example, 011915D and 01192015D both represent January 19, 2015. Later, in DateFormula, we will find D interpreted as day, but here the trailing D is interpreted as the date (data type) constant. When the year is expressed as YY rather than YYYY, the century portion (in this case, 20) is assumed to be 20 if the two-digit year is from 00 to 29, or 19 if the year is from 30 through 99.

NAV also defines a special date called the Closing date, which represents the point in time between one day and the next. The purpose of a closing date is to provide a point at the end of a day, after all of the real date- and time-sensitive activity is recorded: the point when accounting closing entries can be recorded. Closing entries are recorded, in effect, at the stroke of midnight between two dates, the date of closing the accounting books, and the design lets the user include or exclude closing entries in various reports. When sorted by date, closing date entries sort after all normal entries for a day. For example, the normal date entry for December 31, 2015 would display as 12/31/15 (depending on the date format masking), and the closing date entry would display as C12/31/15. All C12/31/15 ledger entries would appear after all normal 12/31/15 ledger entries. The following screenshot shows two 2014 closing date entries mixed with normal entries from December 2014 and January through April 2015. (This data is from the Cronus demo. The 2014 closing entries have an "Opening Entry" description, which shows that these were the first entries for the demo data in the respective accounts; this is not a normal set of production data.)

Time: This contains an integer number, which is interpreted on a 24-hour clock, in milliseconds plus 1, from 00:00:00 to 23:59:59:999. A 0T (numeral zero, letter T) represents an undefined time and is stored as 1/1/1753 00:00:00.000.

DateTime: This represents a combined Date and Time, stored in Coordinated Universal Time (UTC) and always displayed as local time (that is, the local time on our system). DateTime fields do not support NAV Closing dates. DateTime is helpful for an application that must support multiple time zones simultaneously. DateTime values can range from January 1, 1754 00:00:00.000 to December 31, 9999 23:59:59.999, but dates earlier than January 1, 1754 cannot be entered (don't test with dates late in 9999, as an intended advance to the year 10000 won't work). Assigning 0DT yields an undefined or blank DateTime.

Duration: This represents the positive or negative difference between two DateTime values, in milliseconds, stored as a BigInteger. Durations are automatically output in text format as DDD days HH hours MM minutes SS seconds.
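A small C/AL sketch of these date and time types in action (again our own illustration; the variable names are assumptions, while CLOSINGDATE and CURRENTDATETIME are standard C/AL functions):

    // NormalDate, BookCloseDate : Date;  StartDT, EndDT : DateTime;  Elapsed : Duration
    NormalDate := 12312015D;                   // date constant: MMDDYYYY followed by D
    BookCloseDate := CLOSINGDATE(NormalDate);  // displays as C12/31/15; sorts after 12/31/15
    StartDT := CURRENTDATETIME;
    // ... some processing here ...
    EndDT := CURRENTDATETIME;
    Elapsed := EndDT - StartDT;                // Duration: the difference in milliseconds
    MESSAGE('Close: %1, elapsed: %2',BookCloseDate,Elapsed);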
Complex data types

Each complex data type consists of multiple data elements. For ease of reference, we will categorize them into several groups of similar types.

Data structure

The following data types are in the data structure group:

File: This refers to any standard Windows file outside the NAV database. There is a reasonably complete set of functions to create, delete, open, close, read, write, and copy (among other things) data files. For example, we could create our own NAV routines in C/AL to import or export data from or to a file that had been created by some other application. With the three-tier architecture of NAV 2015, business logic runs on the server, not the client. We need to keep this in mind any time we refer to local external files, because they will be on the server by default. Use of Universal Naming Convention (UNC) paths can make this easier to manage.

Record: This refers to a single data row within a NAV table that consists of individual fields. Quite often, multiple variable instances of a Record (table) are defined in working storage to support a validation process, allowing access to different records within the table at one time in the same function.

Objects

Page, Report, Codeunit, Query, and XMLPort each represent an object data type. Object data types are used when there is a need to refer to an object or a function in another object. Examples: invoking a Report or an XMLPort from a Page or a Report, or calling a function for data validation or processing that is coded as a function in a Table or a Codeunit.

Automation

The following are the Automation data types. These are not supported by the NAV Web client, and the OCX and Automation data types are supported in NAV 2015 for backward compatibility only:

OCX: This allows the definition of a variable that represents and allows access to an ActiveX or OCX custom control. Such a control is typically an external application object that we can invoke from our NAV object.

Automation: This allows us to define a variable that we can access similarly to an OCX. The application must act as an Automation Server and be registered with the NAV client or server that calls it. For example, we can interface from NAV into the various Microsoft Office products (Word, Excel, and so on) by defining them in Automation variables.

DotNet: This allows us to define a variable for .NET Framework interface types within an assembly. It supports accessing .NET Framework type members, including methods, properties, and constructors, from C/AL. These can be members of the global assembly cache or of custom assemblies.

Input/Output

The following are the Input/Output data types:

Dialog: This supports the definition of a simple user interface window without the use of a Page object. Typically, Dialog windows are used to communicate processing progress or to allow a brief user response to a go/no-go question, though this latter use could result in bad performance due to locking. There are other user communication tools as well, but they do not use a Dialog data item.

InStream and OutStream: These allow us to read from and write to external files, BLOBs, and objects of the Automation and OCX data types.

DateFormula

DateFormula provides for the definition and storage of a simple but clever set of constructs to support the calculation of runtime-sensitive dates. A DateFormula is stored in a nonlanguage-dependent format, thus supporting multilanguage functionality. A DateFormula is a combination of:

Numeric multipliers (for example, 1, 2, 3, 4, and so on)

Alpha time units (all must be in uppercase): D for a day; W for a week; WD for a day of the week, that is, day 1 to day 7 (either in the future or in the past, but not today), where Monday is day 1 and Sunday is day 7; M for a calendar month; Y for a year; CM for the current month, CY for the current year, and CW for the current week

Math symbols: + (plus), as in CM + 10D, meaning the current month end plus 10 days (in other words, the tenth of the next month); - (minus), as in -WD3, meaning the date of the previous Wednesday (the third day of the past week)

Positional notation (D15 means the 15th day of the month, while 15D means 15 days)

Payment Terms for Invoices support full use of DateFormula. All DateFormula results are expressed as a date based on a reference date. The default reference date is the system date, not the Work Date.
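In C/AL, a DateFormula is applied with the standard CALCDATE function. A minimal sketch of the call forms (our own illustration; the variable names are assumptions):

    // DueDate, RefDate : Date
    RefDate := 07102015D;                     // Friday, July 10, 2015
    DueDate := CALCDATE('<CM+10D>',RefDate);  // current month end + 10 days = 08/10/15
    DueDate := CALCDATE('<CM+10D>');          // with one parameter, the system date is the reference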
Here are some sample DateFormulas and their interpretations (displayed dates are based on the US calendar) with a reference date of July 10, 2015, a Friday:

CM is the last day of the current month, 07/31/15
CM + 10D is the tenth of the next month, 08/10/15
WD6 is the next sixth day of the week, 07/11/15
WD5 is the next fifth day of the week, 07/17/15
CM - M + D is the end of the current month minus one month plus one day, 07/01/15
CM - 5M is the end of the current month minus five months, 02/28/15

Let us take the opportunity to use the DateFormula data type to learn a few NAV development basics. We will do so by experimenting with some hands-on evaluations of several DateFormula values. We will create a table to calculate dates using DateFormulas and reference dates. To do this, navigate to Tools | Object Designer | Tables. Then, click on the New button and define the fields used in the rest of this walkthrough: Primary Key (Integer), Reference Date for Calculation (Date), Date Formula to Test (DateFormula), and Date Result (Date). Save it as Table 50009, named Date Formula Test. After we are done with this test, we will keep this table for some later testing.

Now, we will add some simple C/AL code to our table so that when we enter or change either the reference date or the DateFormula, we calculate a new result date. First, access the new table via the Design button. Then, go to the global variables definition form through the View menu option and the C/AL Globals sub-option, and choose the Functions tab. Type in our new function name, CalculateNewDate, on the first blank line, and then exit (by means of the Esc key) from this form back to the list of data fields.

From the Table Designer form that displays the list of data fields, either press F9 or click on the C/AL Code icon. This will take us to the following screen, where we can see all of the field triggers plus the trigger for the new function that we just defined. The table triggers will not be visible unless we scroll up to show them. Note that our new function was defined as a LOCAL function. This means that it cannot be accessed from another object unless we change it to a GLOBAL function.

Since our goal now is to focus on experimenting with the DateFormula, we will not go into detail explaining the logic of what we are creating. The logic we're going to code is as follows: when an entry is made (new or changed) in either the Reference Date for Calculation field or the Date Formula to Test field, invoke the CalculateNewDate function to calculate a new Date Result value based on the entered data.

First, create the logic within our new function, CalculateNewDate(), to evaluate and store a result date based on the DateFormula and reference date entered into the table. The OnValidate trigger of each of the two data entry fields simply calls the function:

CalculateNewDate;

The function body contains the actual calculation:

"Date Result" := CALCDATE("Date Formula to Test","Reference Date for Calculation");

If you get an error message of any type when you close and save the table, you probably have not copied the C/AL code exactly as shown. This code causes the CalculateNewDate() function to be called via the OnValidate trigger when an entry is made in either the Reference Date for Calculation or the Date Formula to Test field. The function places the result in the Date Result field.
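Since the book's screenshot is not reproduced here, this is roughly what the finished wiring of Table 50009 looks like in the C/AL editor (a sketch based on the description above, not the original screenshot):

    "Reference Date for Calculation" - OnValidate()
      CalculateNewDate;

    "Date Formula to Test" - OnValidate()
      CalculateNewDate;

    LOCAL CalculateNewDate()
      "Date Result" := CALCDATE("Date Formula to Test","Reference Date for Calculation");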
The use of an integer value in the redundantly named Primary Key field allows us to enter any number of records into the table (by manually numbering them 1, 2, 3, and so forth). Let's experiment with several different date and date formula combinations. We will access the table via the Run button. This will cause NAV to generate a default format page and run it in the Role Tailored Client.

Enter a Primary Key value of 1 (one). In Reference Date for Calculation, enter an uppercase or lowercase T, for Today; the system date appears. The same date will appear in the Date Result field, because at this point no date formula has been entered. Now, enter 1D (the number 1 followed by an uppercase or lowercase D; C/SIDE will make it uppercase) in the Date Formula to Test field. We will see that the Date Result field contents change to one day beyond the date in the Reference Date for Calculation field.

Now, for another test entry, start with a 2 in the Primary Key field. Again, enter the letter T (for Today) in the Reference Date for Calculation field, and enter the letter W (for Week) in the Date Formula to Test field. We will get an error message telling us that our formulas should include a number. Make the system happy and enter 1W. We will now see a date in the Date Result field that is one week beyond our system date.

Set the system's Work Date to a date in the middle of a month. Start another line with the number 3 as the Primary Key, followed by a W (for Work Date) in the Reference Date for Calculation field. Enter cm (or CM or cM or Cm; it doesn't matter) in the Date Formula to Test field. Our result date will be the last day of our Work Date month. Now, enter another line using the Work Date, but enter a formula of -cm (the same as before but with a minus sign). This time, our result date will be the first day of our Work Date month. Note that the DateFormula logic handles month-end dates correctly, including in a leap year. Try starting with a date in the middle of February 2016 to confirm this. The following screen shows the Date Formula Test window:

Now, enter another line with a new Primary Key. Skip over the Reference Date for Calculation field and just enter 1D in the Date Formula to Test field. What happens? We get an error message stating that "You cannot base a date calculation on an undefined date." In other words, NAV cannot make the requested calculation without a reference date. Before we put this function into production, we would want our code to check for a reference date before calculating. We could default an empty date to the system date or the Work Date and avoid this particular error (a sketch of such a guard appears below).

The preceding and following screenshots show different sample calculations. Build on these and then experiment. We can create a variety of different algebraic date formulae and get some very interesting results. One NAV user has due dates on invoices set to the tenth of the next month, while invoices are dated at various times during the month as they are actually printed. By using the DateFormula CM + 10D, the due date is always automatically calculated to be the tenth of the next month. Don't forget to test with WD (weekday), Q (quarter), and Y (year) as well as D (day), W (week), and M (month). For our code to be language independent, we should enter date formulae with < > delimiters around them (for example, <1D+1W>). NAV will translate the formula into the correct language codes using the installed language layer.
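As suggested above, the error on an undefined reference date can be avoided by defaulting an empty date before calculating. A possible hardened version of the function (our suggestion, not the book's code):

    LOCAL CalculateNewDate()
      IF "Reference Date for Calculation" = 0D THEN
        "Reference Date for Calculation" := WORKDATE;  // or TODAY, for the system date
      "Date Result" := CALCDATE("Date Formula to Test","Reference Date for Calculation");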
Although our focus for the work we just completed was the DateFormula data type, we've accomplished a lot more than simply learning about that one data type:

We created a new table just for the purpose of experimenting with a C/AL feature that we might use. This technique comes in handy when we are learning a new feature, or trying to decide how it works or how we might use it.

We put some critical OnValidate logic in the table. When data is entered in one area, the entry is validated and, if valid, the defined processing is done instantly.

We created a common routine as a new LOCAL function. This function is then called from all the places to which it applies.

We did our entire test with a table object and the default tabular page that is automatically generated when we Run a table. We didn't have to create a supporting structure to do our testing. Of course, when we design a change to a complicated existing structure, we will have a more complicated testing scenario. One of our goals will always be to simplify our testing scenarios, both to minimize the setup effort and to keep our tests narrowly focused on the specific issue.

Finally, and most specifically, we saw how NAV tools make a variety of relative date calculations easy. These are very useful in business applications, many aspects of which are date centered.

References and other data types

The following data types are used for advanced functionality in NAV, sometimes supporting an interface with an external object:

RecordID: This contains the object number and primary key of a table.

RecordRef: This identifies a row in a table: a record. RecordRef can be used to obtain information about the table, the record, the fields in the record, and the currently active filters on the table.

FieldRef: This identifies a field in a table, thus allowing access to the contents of that field.

KeyRef: This identifies a key in a table and the fields in that key.

Since the specific record, field, and key references are assigned at runtime, RecordRef, FieldRef, and KeyRef are used to support logic which can run on tables that are not specified at design time. This means that one routine built on these data types can perform a common function for a variety of different tables and table formats.
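A brief C/AL sketch of the runtime binding these types allow (our own illustration; the variable names RecRef and FldRef are assumptions):

    // RecRef : RecordRef;  FldRef : FieldRef
    RecRef.OPEN(DATABASE::Customer);  // the table number could just as well come from a variable
    IF RecRef.FINDFIRST THEN BEGIN
      FldRef := RecRef.FIELD(1);      // field no. 1 is "No." in the Customer table
      MESSAGE('Table %1, first primary key value: %2',RecRef.NAME,FldRef.VALUE);
    END;
    RecRef.CLOSE;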
Variant: This defines variables that are typically used to interface with Automation and OCX objects. Variant variables can contain data of various C/AL data types to pass to an Automation or OCX object, as well as external Automation data types that cannot be mapped to C/AL data types.

TableFilter: This defines variables that can only be used for setting security filters from the Permissions table.

TransactionType: This has the possible values UpdateNoLocks, Update, Snapshot, Browse, and Report, which define the SQL Server behavior for a NAV Report or XMLport transaction from the beginning of the transaction.

BLOB: This can contain either specially formatted text, a graphic in the form of a bitmap, or other developer-defined binary data, up to 2 GB in size. BLOB stands for Binary Large Object. BLOBs can only be included in tables; they cannot be used to define working storage variables. Refer to the Developer and IT Pro Help for additional information.

BigText: This can contain large chunks of text, up to 2 GB in size. BigText variables can only be defined in working storage within an object; they cannot be included in tables. BigText variables cannot be directly displayed or seen in the debugger. There is a group of special functions that can be used to handle BigText data; refer to the Developer and IT Pro Help for additional information. To handle text strings longer than 250 characters in a single data element, use a combination of BLOB and BigText variables.

GUID: This is used to assign a unique identifying number to any database object. A Globally Unique Identifier (GUID) is a 16-byte binary data type used for the unique global identification of records, objects, and so on. The GUID is generated by an algorithm developed by Microsoft.

TestPage: This is used to store a test page, which is a logical representation of a page that does not display a user interface. Test pages are used when you do NAV application testing using the automated testing facility that is part of NAV.

Data type usage

About forty percent of the data types can be used to define data either stored in tables or held in working storage data definitions (that is, in a Global or Local data definition within an object). Two data types, BLOB and TableFilter, can only be used to define table-stored data, not working storage data. About sixty percent of the data types can only be used for working storage data definitions.

FieldClass property options

Almost all data fields have a FieldClass property. FieldClass has as much effect on the content and usage of a data field as the data type; in some instances, it has more. We'll discuss the FieldClass property options now.

FieldClass – Normal

When the FieldClass is Normal, the field will contain the type of application data that's typically stored in a table: the contents we would expect based on the data type and the various properties.

FieldClass – FlowField

FlowFields must be dynamically calculated. FlowFields are virtual fields stored as metadata; they do not contain data in the conventional sense. A FlowField contains the definition of how to calculate (at runtime) the data that the field represents, and a place to store the result of that calculation. Generally, the Editable property for a FlowField is set to No. Depending on the CalcFormula method, the result could be a value, a reference lookup, or a Boolean. When the CalcFormula method is Sum, the FieldClass connects a data field to a previously defined SumIndexField in the table named in the CalcFormula. The FlowField processing speed will be significantly affected by the key configuration of the table being processed. While we must be careful not to define extra keys, having the right keys defined will have a major effect on system performance and thus on user satisfaction.

A FlowField value is always 0, blank, or false unless it has been calculated. If a FlowField is displayed directly on a page, it is calculated automatically when the page is rendered. FlowFields are also calculated automatically when they are the subject of predefined filters set as part of the properties of a data item in an object. In all other cases, a FlowField must be forced to calculate using the C/AL RecordName.CALCFIELDS(FlowField1, [FlowField2], ...) function or by use of the SETAUTOCALCFIELDS function. This is also true if the underlying data changes after the initial display of a page (that is, the FlowField must be recalculated to take the data change into account). Because a FlowField does not contain actual data, it cannot be used as a field in a key; in other words, we cannot include a FlowField as part of a key.
In addition, we cannot define a FlowField that is based on another FlowField, except in special circumstances. When a field has its FieldClass set to FlowField, another directly associated property becomes available: CalcFormula. (Conversely, the AltSearchField, AutoIncrement, and TestTableRelation properties disappear from view when FieldClass is set to FlowField.) The CalcFormula property is where we define the formula for calculating the FlowField. On the CalcFormula property line, there is an ellipsis button. Clicking on that button will bring up the following screen:

Click on the drop-down button to show the seven FlowField methods. Each is listed below with its field data type and the value it calculates, as applied to the specified set of data within a specific column (field) in a table:

Sum (Decimal): the sum total
Average (Decimal): the average value (the sum divided by the row count)
Exist (Boolean): Yes or No / True or False; does an entry exist?
Count (Integer): the number of entries that exist
Min (any): the smallest value of any entry
Max (any): the largest value of any entry
Lookup (any): the value of the specified entry

The Reverse Sign control allows us to change the displayed sign of the result for the FlowField types Sum and Average only; the underlying data is not changed. If Reverse Sign is used with the FlowField type Exist, it changes the effective function to does not exist.

Table and Field allow us to define the table, and the field within that table, to which our calculation formula will apply. When we make the entries in our Calculation Formula screen, no validation checking is done by the compiler to check whether we have chosen an eligible table and field combination. This checking doesn't occur until runtime. Therefore, when we create a new FlowField, we should test it as soon as we have defined it.

The last, but by no means least significant, component of the FlowField calculation formula is the Table Filter. When we click on the ellipsis in the Table Filter field, the window shown in the following screenshot will appear:

When we click on the Field column, we will be invited to select a field from the table that was entered into the Table field earlier. The Type choice determines the type of filter, and the Value field holds the filter rule defined on that line, which must be consistent with the Type choice. The combinations, including the OnlyMaxLimit and ValueIsFilter settings where they apply, are:

Const: the Value field holds a constant, which is used to filter for equally valued entries.

Filter: the Value field holds a filter spelled out as a literal, and that filter expression is applied.

Field (OnlyMaxLimit = False, ValueIsFilter = False): the Value field names a field from the table within which the FlowField exists; the contents of the specified field are used to filter for equally valued entries.

Field (OnlyMaxLimit = True, ValueIsFilter = False): if the specified field is a FlowFilter and the OnlyMaxLimit parameter is True, the FlowFilter range is applied on the basis of only having a MaxLimit, that is, having no bottom limit. This is useful for the date filters for Balance Sheet data. (Refer to the Balance at Date field in the G/L Account table for an example.)

Field (OnlyMaxLimit = True or False, ValueIsFilter = True): this causes the contents of the specified field to be interpreted as a filter. (Again, see the Balance at Date field in the G/L Account table for an example.)
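Putting the FlowField pieces together, here is a short C/AL sketch of forcing a calculation from code, using two FlowFields and the Date Filter FlowFilter of the standard Customer table (our own illustration; the customer number is a Cronus demo assumption):

    // Customer : Record Customer
    IF Customer.GET('10000') THEN BEGIN
      Customer.SETRANGE("Date Filter",0D,WORKDATE);  // constrain the child ledger entries
      Customer.CALCFIELDS(Balance,"Net Change");     // FlowFields are 0 until calculated
      MESSAGE('Balance %1, Net Change %2',Customer.Balance,Customer."Net Change");
    END;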
FieldClass – FlowFilter

FlowFilters control the calculation of FlowFields in the table (when the FlowFilters are included in the CalcFormula). FlowFilters do not contain permanent data; instead, they contain filters on a per-user basis, with the information stored in that user's instance of the code being executed. A FlowFilter field allows a filter to be entered at a parent record level by the user (for example, G/L Account) and applied, through the use of FlowField formulas, to constrain what child data (for example, G/L Entry records) is selected.

A FlowFilter allows us to provide flexible data selection functions to the users. The user does not need a full understanding of the data structure to apply filtering in intuitive ways, to both the primary data table and the subordinate data. Based on our C/AL code design, FlowFilters can be used to apply filtering on multiple tables that are subordinate to a parent table. Of course, it is our responsibility as developers to make good use of this tool. As with many C/AL capabilities, a good way to learn more is by studying standard code designed by the Microsoft developers of NAV and then experimenting.

A number of good examples of FlowFilter use can be found in the Customer (Table 18) and Item (Table 27) tables. In the Customer table, some of the FlowFields using FlowFilters are Balance, Balance (LCY), Net Change, Net Change (LCY), Sales (LCY), and Profit (LCY), where LCY stands for local currency. The Sales (LCY) FlowField's FlowFilter usage is shown in the following screenshot:

Similarly constructed FlowFields using FlowFilters in the Item table include Inventory, Net Invoiced Qty., Net Change, and Purchases (Qty.), as well as other fields. Throughout the standard code, there are FlowFilters in most of the master table definitions: the Date Filters and Global Dimension Filters (global dimensions are user-defined codes that facilitate the segregation of accounting data by groupings such as divisions, departments, projects, customer type, and so on). Other FlowFilters widely used in the standard code relate to Inventory activity, such as Location Filter, Lot No. Filter, Serial No. Filter, and Bin Filter.

The following pair of images shows two fields from the Customer table, both with a Data Type of Date. On the left side of the screenshot is the Last Date Modified field (FieldClass of Normal) and on the right side is the Date Filter field (FieldClass of FlowFilter). It's easy to see that the properties of the two fields are very similar, except for those that differ because one is a Normal field and the other is a FlowFilter field.

Summary

In this article, we focused on the basic building blocks of the NAV data structure: fields and their attributes. We reviewed the types of data fields, properties, and trigger elements for each type of field. We walked through a number of examples to illustrate most of these elements, though we have postponed the exploration of triggers until we have more knowledge of C/AL. We covered Data Type and FieldClass, the properties that determine what kind of data can be stored in a field.

Updating and building our masters

Packt
10 Aug 2015
20 min read
In this article by John Henry Krahenbuhl, the author of the book Axure Prototyping Blueprints, we determine that, with modification, we can use all of the masters from the previous community site. To support our new use cases, we need additional registration variables, a master to support user registration, and interactions for creating, and commenting on, posts. Next we will create global variables and add new masters, as well as enhance the design and interactions for each master.

Creating additional global variables

Based on project requirements, we identified that nine global variables will be required. To create global variables, on the main menu click on Project and then click on Global Variables…. In the Global Variables dialog, perform the following steps:

Click the green + sign and type Email. Click on the Default Value field and type songwriter@test.com.

Repeat step 1 eight more times to create the additional variables below, using the given Variable Name and Default Value (a blank entry means no default value):

Password: Grammy
UserEmail: (blank)
UserPassword: (blank)
LoggedIn: No
TopicIndex: 0
UserText: (blank)
NewPostTopic: (blank)
NewPostHeadline: (blank)

Click on OK.

With our global variables created, we are now ready to create new masters, as well as update the design and interactions for existing masters. We will start by adding masters to the Masters pane.

Adding masters to the Masters pane

We will add a total of two masters to the Masters pane. To create our masters, perform the following steps:

In the Masters pane, click on the Add Master icon, type PostCommentary, and press Enter.

Again, in the Masters pane, click on the Add Master icon, type NewPost, and press Enter.

In the same Masters pane, right-click on the icon next to the Header master, mouse over Drop Behavior, and click on Lock to Master Location.

We are now ready to remodel the existing masters and complete the design and interactions for our new masters. We will start with the Header master.

Enhancing our Header master

Once completed, the Header master will look as follows:

To update the Header master, we will add an ErrorMessage label, delete the Search widgets, and update the menu items. To update widgets on the Header master, perform the following steps:

In the Masters pane, double-click on the icon next to the Header master to open it in the design area.

In the Widgets pane, drag the Label widget and place it at coordinates (730,0). With the Label widget selected, type Your email or password is incorrect. In the Widget Interactions and Notes pane, click in the Shape Name field and type ErrorMessage. In the Widget Properties and Style pane, with the Style tab selected, scroll to Font and perform the following steps: change the font size to 8; click on the down arrow next to the Text Color icon; in the drop-down menu, in the # text field, enter FF0000. In the toolbar, click on the checkbox next to Hidden.

Click on the EmailTextField at coordinates (730,10). If text is displayed on the text field, right-click, click Edit Text, and with all text on the widget highlighted, press Delete. In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: next to Hint Text, enter Email; click on Hint Style; in the Set Interaction Styles dialog box, click on the checkbox next to Font Color; click on the down arrow next to the Text Color icon; in the drop-down menu, in the # text field, enter 999999; click on OK.
Click on the PasswordTextField at coordinates (815,10). If text is displayed on the text field, right-click, click on Edit Text, and with all text on the widget highlighted, press Delete. In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: click on the drop-down menu next to Type and select Password; next to Hint Text, enter Password; click on Hint Style; in the Set Interaction Styles dialog box, click on the checkbox next to Font Color; click on the down arrow next to the Text Color icon; in the drop-down menu, in the # text field, enter 999999; click on OK.

Click on the SearchTextField at coordinates (730,82) and press Delete. Click on the SearchButton at coordinates (890,80) and press Delete.

Next, we will convert all the Log In widgets into a dynamic panel named LogInDP. The LogInDP will allow us to transition between states and show different content when a user logs in. To create the LogInDP, in our header, select the following widgets:

ErrorMessage at (730,0)
EmailTextField at (730,10)
PasswordTextField at (815,10)
LogInButton at (894,10)
NewUserLink at (730,30)
ForgotLink at (815,30)

With the preceding six widgets selected, right-click and click Convert to Dynamic Panel. In the Widget Interactions and Notes pane, click on the Dynamic Panel Name field and type LogInDP. All the Log In widgets are now on State1 of the LogInDP. We will now add widgets to State2 of the LogInDP.

With the Log In widgets converted into the LogInDP, we will now add and design State2. In the Widget Manager pane, under the LogInDP, right-click on State1 and, in the menu, click on Add State. Click on the State icon beside State2 twice to open it in the design area. Perform the following steps:

In the Widgets pane, drag the Label widget and place it at coordinates (0,13), then do these steps: type Welcome, email@test.com. In the Widget Interactions and Notes pane, click in the Shape Name field and type WelcomeLabel. In the Widget Properties and Style pane, with the Style tab selected, scroll to Font, change the font size to 9, and click on the Italic icon.

In the Widgets pane, drag the Button Shape widget and place it at coordinates (164,10). Type Log Out. In the toolbar, change w: to 56 and h: to 16. In the Widget Interactions and Notes pane, click on the Shape Name field and type LogOutButton.

To complete the design of the Header master, we need to rename the menu items on the HzMenu. In the Masters pane, double-click on the Header master to open it in the design area. Click on the HzMenu at coordinates (250,80). Perform the following steps:

Click on the first menu item and type Random Musings. In the Widget Interactions and Notes pane, click on the Menu Item Name field and type RandomMusingsMenuItem. Click on Case 1 under the OnClick event and press the Delete key. Click on Create Link…. In the pop-up sitemap, click on Random Musings.

Click on the second menu item and type Accolades and News. In the Widget Interactions and Notes pane, click on the Menu Item Name field and type AccoladesMenuItem. Click on Case 1 under the OnClick event and press the Delete key. Click on Create Link…. In the pop-up sitemap, click on Accolades and News.

Click on the third menu item and type About. In the Widget Interactions and Notes pane, click on the Menu Item Name field and type AboutMenuItem. Click on Case 1 under the OnClick event and press the Delete key. Click on Create Link…. In the pop-up sitemap, click on About.
We will now create a registration lightbox that will be shown when the user clicks on the NewUserLink. To display a dynamic panel in a lightbox, we will use the OnShow action with the treat as lightbox option set. We will use the Registration dynamic panel's Pin to Browser property to have the dynamic panel shown in the center and middle of the window. Learn more at http://www.axure.com/learn/dynamic-panels/basic/lightbox-tutorial.

In the Masters pane, double-click on the icon next to the Header master to open it in the design area. In the Widgets pane, drag the Dynamic Panel widget and place it at coordinates (310,200). In the toolbar, change w: to 250, h: to 250, and click on the Hidden checkbox. In the Widget Interactions and Notes pane, click on the Dynamic Panel Name field and type RegistrationLightBoxDP. In the Widget Manager pane, with the Properties tab selected, click on Pin to Browser. In the Pin to Browser dialog box, click on the checkbox next to Pin to browser window and click on OK.

In the Widget Manager pane, under the RegistrationLightBoxDP, click on the State icon beside State1 twice to open it in the design area. In the Widgets pane, drag the Rectangle widget and place it at coordinates (0,0). In the Widget Interactions and Notes pane, click on the Shape Name field and type BackgroundRectangle. In the toolbar, change w: to 250 and h: to 250.

Again in the Widgets pane, drag the Heading2 widget and place it at coordinates (25,20). With the Heading2 widget selected, type Registration. In the toolbar, change w: to 141 and h: to 28. In the Widget Interactions and Notes pane, click on the Shape Name field and type RegistrationHeading.

Repeat steps 8-10 to complete the design of the RegistrationLightBoxDP with the following widgets (coordinates, text shown on the widget, width and height where applicable, and the name to enter in the Widget Interactions and Notes pane):

Label at (25,67), text Enter Email, named EnterEmailLabel
Text Field at (25,86), named EnterEmailField
Label at (25,121), text Enter Password, named EnterPasswordLabel
Text Field at (25,140), named EnterPasswordField
Button Shape at (25,190), text Submit, w: 200, h: 30, named SubmitButton

Click on the EnterEmailField text field at coordinates (25,86). In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: next to Hint Text, enter Email; click on Hint Style; in the Set Interaction Styles dialog box, click on the checkbox next to Font Color; click on the down arrow next to the Text Color icon; in the drop-down menu, in the # text field, enter 999999; click on OK.

Click on the EnterPasswordField text field at coordinates (25,140). In the Widget Properties and Style pane, with the Properties tab selected, scroll to Text Field and perform the following steps: click on the drop-down menu next to Type and select Password; next to Hint Text, enter Password; click on Hint Style; in the Set Interaction Styles dialog box, click on the checkbox next to Font Color; click on the down arrow next to the Text Color icon; in the drop-down menu, in the # text field, enter 999999; click on OK.

With the updates completed for the Header master, we are now ready to define the interactions.

Refining the interactions for our Header master

We will need to add additional interactions for Log In and Registration on our Header master.
Interactions with our Header master will be triggered by the following named widgets and events (dynamic panel, state, widget, event):

LogInDP, State1: LogInButton, OnClick
LogInDP, State1: NewUserLink, OnClick
LogInDP, State1: ForgotLink, OnClick
LogInDP, State2: LogOutButton, OnClick
RegistrationLightBoxDP, State1: SubmitButton, OnClick

We will now define the interactions for each widget, starting with the LogInButton.

Defining interactions for the LogInButton

When the LogInButton is clicked, the OnClick event will evaluate whether the text entered in the EmailTextField and PasswordTextField equals the Email and Password variable values. If the values are valid, the LogInDP will be set to State2 and the text on the WelcomeLabel will be updated. If the values do not match, we will show an error message. We will define these actions by creating two cases: ValidateUser and ShowErrorMessage.

Validating the user's email and password

To define the ValidateUser case for the OnClick interaction, open the LogInDP State1 in the design area. Click on the LogInButton at coordinates (164,10). In the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ValidateUser. In the Case Editor dialog, perform the following steps: You will see the Condition Builder window similar to the one shown in the following screenshot after the first and second conditions are defined: Create the first condition. Click on the Add Condition button. In the Condition Builder dialog box, in the outlined condition box, perform the following steps: In the first dropdown, select text on widget. In the second dropdown, select EmailTextField. In the third dropdown, select equals. In the fourth dropdown, select value. In the fifth dropdown, select [[Email]]. Click the green + sign. Create the second condition. Click on the Add Condition button. In the Condition Builder dialog box, in the outlined condition box, perform the following steps: In the first dropdown, select text on widget. In the second dropdown, select PasswordTextField. In the third dropdown, select equals. In the fourth dropdown, select value. In the fifth dropdown, select [[Password]]. Click on OK. Once the following three actions are defined, you should see the Case Editor similar to the one shown in the following screenshot: Create the first action. To set the panel state for the LogInDP dynamic panel, perform the following steps: Under Click to add actions, scroll to the Dynamic Panels drop-down menu and click on Set Panel State. Under Configure actions, click on the checkbox next to LogInDP. Next to Select the state, click on the dropdown and select State2. Create the second action. To set the text for the WelcomeLabel, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click the checkbox next to WelcomeLabel. Under Set text to, click on the dropdown and select value. In the text field, enter Welcome, [[Email]]. Create the third action. To set the value of the LoggedIn variable, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value. Under Configure actions, click on the checkbox next to LoggedIn. Under Set variable to, click on the first dropdown and click on value. In the text field, enter [[Email]]. Click on OK. With the ValidateUser case completed, next we will create the ShowErrorMessage case.
Creating the ShowErrorMessage case

To create the ShowErrorMessage case, in the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowErrorMessage. Create the action. To show the ErrorMessage label, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Show. Under Configure actions, under the LogInDP dynamic panel, click on the checkbox next to ErrorMessage. Click on OK. Next, we will enable the interaction for the NewUserLink.

Enabling interaction for the NewUserLink

When the NewUserLink is clicked, the OnClick event will show the RegistrationLightBoxDP dynamic panel as a lightbox, as shown in the following screenshot: With the LogInDP State1 still open in the design area, click on the NewUserLink at coordinates (0,30). To enable the OnClick event, in the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowLightBox. Now, create the action; to show the RegistrationLightBoxDP, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Show. Under Configure actions, click on the checkbox next to RegistrationLightBoxDP. Next, go to More options, click on the dropdown, and select treat as lightbox. Click on OK. Next, we will activate interactions for the ForgotLink.

Activating interactions for the ForgotLink

When the ForgotLink is clicked, the OnClick event will show the RegistrationLightBoxDP dynamic panel as a lightbox, the RegistrationHeading text will be updated to display Forgot Password?, and the EnterPasswordLabel, as well as the EnterPasswordField, will be hidden. To enable the OnClick event, in the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowForgotLB. In the Case Editor dialog, perform the following steps: Create the first action; to show the RegistrationLightBoxDP, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Show. Under Configure actions, click on the checkbox next to RegistrationLightBoxDP. Next, go to More options, click on the dropdown, and select treat as lightbox. Create the second action; to set the text for the RegistrationHeading, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click on the checkbox next to RegistrationHeading. Under Set text to, click on the dropdown and select value. In the text field, enter Forgot Password?. Create the third action; to hide the EnterPasswordLabel and EnterPasswordField, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Hide. Under Configure actions, under RegistrationLightBoxDP, click on the checkboxes next to EnterPasswordLabel and EnterPasswordField. Click on OK. We have now completed the interactions for State1 of the LogInDP. Next, we will facilitate interactions for the LogOutButton.
Facilitating interactions for the LogOutButton

When the LogOutButton is clicked, the OnClick event will perform the following actions: hide the ErrorMessage on the LogInDP State1, set the text for the PasswordTextField and EmailTextField, set the panel state for the LogInDP to State1, and set the variable value for LoggedIn. To enable the OnClick event, open the LogInDP State2 in the design area. Click on the LogOutButton at coordinates (164,10). In the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type LogOut. In the Case Editor dialog, perform the following steps: Create the first action; to hide the ErrorMessage, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Hide. Under Configure actions, under LogInDP, click on the checkbox next to ErrorMessage. Create the second action; to set the text for the PasswordTextField and EmailTextField, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click the checkbox next to PasswordTextField. Under Set text to, click the dropdown and select value. In the text field, clear any text shown. Under Configure actions, click the checkbox next to EmailTextField. Under Set text to, click on the dropdown and select value. In the text field, enter Email. Create the third action; to set the panel state for the LogInDP dynamic panel, perform the following steps: Under Click to add actions, scroll to the Dynamic Panels drop-down menu and click on Set Panel State. Under Configure actions, click on the checkbox next to LogInDP. Next to Select the state, click on the dropdown and select State1. Create the fourth action. To set the variable value of LoggedIn, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value. Under Configure actions, click on the checkbox next to LoggedIn. Under Set variable to, click on the first dropdown and click on value. In the text field, enter No. Click on OK. We have now completed the interactions for State2 of the LogInDP. Next, we will construct interactions for the RegistrationLightBoxDP.

Constructing interactions for the RegistrationLightBoxDP

When the SubmitButton is clicked, the OnClick event hides the RegistrationLightBoxDP and sets the Email and Password variable values to the text entered in the EnterEmailField and EnterPasswordField. Also, if the text on the RegistrationHeading label is equal to Registration, the LogInDP will be set to State2. We will define these actions by creating two cases: UpdateVariables and ShowLogInState.

Updating Variables and hiding the RegistrationLightBoxDP

In the Widget Manager pane, double-click on the RegistrationLightBoxDP State1 to open it in the design area. To define the UpdateVariables case for the OnClick interaction, click on the SubmitButton at coordinates (25,190). In the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type UpdateVariables. In the Case Editor dialog, perform the following steps: The following screenshot shows the Case Editor with the actions defined: Create the first action; to set the variable values for the Email and Password variables, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value.
Under Configure actions, click on the checkbox next to Email. Under Set variable to, click on the first dropdown and select text on widget. Click on the second dropdown and select EnterEmailField. Under Configure actions, click on the checkbox next to Password. Under Set variable to, click on the first dropdown and select text on widget. Click on the second dropdown and select EnterPasswordField. Create the second action; to hide the RegistrationLightBoxDP, perform the following steps: Under Click to add actions, scroll to the Widgets dropdown, click on the Show/Hide dropdown, and click on Hide. Under Configure actions, click on the checkbox next to RegistrationLightBoxDP. Click on OK. With the UpdateVariables case completed, next we will create the ShowLogInState case.

Creating the ShowLogInState case

To create the ShowLogInState case, in the Widget Interactions and Notes pane, with the Interactions tab selected, click on Add Case…. A Case Editor dialog box will open. In the Case Name field, type ShowLogInState. In the Case Editor dialog, perform the following steps: Click on the Add Condition button to create the first condition. In the Condition Builder dialog box, go to the outlined condition box and perform the following steps: In the first dropdown, select text on widget. In the second dropdown, select RegistrationHeading. In the third dropdown, select equals. In the fourth dropdown, select value. In the fifth dropdown, select Registration. Click on OK. Create the first action; to set the text for the WelcomeLabel, perform the following steps: Under Click to add actions, scroll to the Widgets drop-down menu and click on Set Text. Under Configure actions, click on the checkbox next to WelcomeLabel. Under Set text to, click on the dropdown and select value. In the text field, enter Welcome, [[Email]]. Create the second action; to set the panel state for the LogInDP dynamic panel, perform the following steps: Under Click to add actions, scroll to the Dynamic Panels drop-down menu and click on Set Panel State. Under Configure actions, click on the checkbox next to LogInDP. Next to Select the state, click on the dropdown and select State2. Create the third action; to set the value of the LoggedIn variable, perform the following steps: Under Click to add actions, scroll to the Variables drop-down menu and click on Set Variable Value. Under Configure actions, click on the checkbox next to LoggedIn. Under Set variable to, click on the first dropdown and click on value. In the text field, enter [[Email]]. Click on OK. Under the OnClick event, right-click on the ShowErrorMessage case and click on Toggle IF/ELSE IF. With our Header master updated, we are now ready to refresh data for our Forum repeater.

Summary

We learned how to leverage masters and pages from our community site to create a new blog site. We enhanced the Header master and refined its interactions.

Resources for Article: Further resources on this subject: Home Page Structure [article] Axure RP 6 Prototyping Essentials: Advanced Interactions [article] Common design patterns and how to prototype them [article]
Sending and Syncing Data

Packt
10 Aug 2015
4 min read
This article, by Steven F. Daniel, author of the book Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable. Android Wear comes with a number of APIs that will help to make communicating between the handheld and the wearable a breeze. We will be learning the differences between MessageAPI, which is sometimes referred to as a "fire and forget" type of message; DataLayerAPI, which supports syncing of data between a handheld and a wearable; and NodeAPI, which handles events related to each of the local and connected device nodes. (For more resources related to this topic, see here.)

Creating a wearable send and receive application

In this section, we will take a look at how to create an Android wearable application that will send an image and a message, and display these on our wearable device. In the next sections, we will take a look at the steps required to send data to the Android wearable using the DataAPI, NodeAPI, and MessageAPIs. Firstly, create a new project in Android Studio by following these simple steps: Launch Android Studio, and then click on the File | New Project menu option. Next, enter SendReceiveData for the Application name field. Then, provide the name for the Company Domain field. Now, choose Project location and select where you would like to save your application code. Click on the Next button to proceed to the next step. Next, we will need to specify the form factors for our phone/tablet and Android Wear devices on which our application will run. On this screen, we will need to choose the minimum SDK version for our phone/tablet and Android Wear. Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK. Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK. Click on the Next button to proceed to the next step. In our next step, we will need to add a Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Blank Activity option from the list of activities shown and click on the Next button to proceed to the next step. Next, we need to customize the properties for the Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, layout information, title, and menu resource file. From the Customize the Activity screen, enter MobileActivity for the Activity Name shown and click on the Next button to proceed to the next step in the wizard. In the next step, we will need to add a Blank Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown and click on the Next button to proceed to the next step. Next, we need to customize the properties for the Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for the Activity Name shown and click on the Next button to proceed to the next step in the wizard. Finally, click on the Finish button; the wizard will generate your project and, after a few moments, the Android Studio window will appear with your project displayed.
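Since the rest of the article builds toward messaging between the two modules, here is a rough preview (our own illustrative sketch, not the book's code) of what a "fire and forget" MessageAPI call from the handheld looks like. It assumes a GoogleApiClient that has already been built with Wearable.API and connected, and it must run on a background thread because the blocking await() calls cannot run on the main thread; the /demo/hello path and the payload are arbitrary example values:

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.NodeApi;
import com.google.android.gms.wearable.Wearable;

public class MessageSender {
    // Sends a small payload to every connected wearable node.
    // Must be called off the main thread because await() blocks.
    public static void sendToAllNodes(GoogleApiClient client, byte[] payload) {
        // NodeApi lists the nodes currently reachable from this device.
        NodeApi.GetConnectedNodesResult nodes =
                Wearable.NodeApi.getConnectedNodes(client).await();
        for (Node node : nodes.getNodes()) {
            // MessageApi is "fire and forget": no automatic retry if the node drops.
            Wearable.MessageApi.sendMessage(
                    client, node.getId(), "/demo/hello", payload).await();
        }
    }
}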
Summary

In this article, we learned about three APIs, DataAPI, NodeAPI, and MessageAPI, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to try sending again automatically once the connection is reestablished. Resources for Article: Further resources on this subject: Speeding up Gradle builds for Android [article] Saying Hello to Unity and Android [article] Testing with the Android SDK [article]
Understanding Hadoop Backup and Recovery Needs

Packt
10 Aug 2015
25 min read
In this article by Gaurav Barot, Chintan Mehta, and Amij Patel, authors of the book Hadoop Backup and Recovery Solutions, we will discuss backup and recovery needs. In the present age of information explosion, data is the backbone of business organizations of all sizes. We need a complete data backup and recovery system, and a strategy to ensure that critical data is available and accessible when the organization needs it. Data must be protected against loss, damage, theft, and unauthorized changes. If disaster strikes, data recovery must be swift and smooth so that business does not get impacted. Every organization has its own data backup and recovery needs and priorities, based on the applications and systems it uses. Today's IT organizations face the challenge of implementing reliable backup and recovery solutions in the most efficient, cost-effective manner. To meet this challenge, we need to carefully define our business requirements and recovery objectives before deciding on the right backup and recovery strategies or technologies to deploy. (For more resources related to this topic, see here.) Before jumping into the implementation approach, we first need to know about backup and recovery strategies and how to plan them efficiently.

Understanding the backup and recovery philosophies

Backup and recovery is becoming more challenging and complicated, especially with the explosion of data growth and the increasing need for data security today. Imagine big players such as Facebook, Yahoo! (the first to implement Hadoop), and eBay: how challenging it is for them to handle unprecedented volumes and velocities of unstructured data, something that traditional relational databases can't handle and deliver. To emphasize the importance of backup, let's take a look at a study conducted in 2009. This was the time when Hadoop was still evolving and a handful of bugs remained in it. Yahoo! had about 20,000 nodes running Apache Hadoop in 10 different clusters. HDFS lost only 650 blocks out of 329 million total blocks. Now hold on a second: those blocks were lost due to the bugs found in the Hadoop package. So, imagine what the scenario would be now; I am sure you would bet on losing hardly a block. As a backup manager, your utmost target is to plan, strategize, and execute a foolproof backup strategy capable of retrieving data after any disaster. Simply put, the plan is to protect the files in HDFS against disastrous situations and restore them to their normal state, just like James Bond resurrects after so many blows and probably death-like situations. Coming back to the backup manager's role, the following are the activities of this role: Testing out various case scenarios to forestall any future threats. Building a stable recovery point and setup for backup and recovery situations. Preplanning and daily organization of the backup schedule. Constantly supervising the backup and recovery process and heading off threats, if any. Repairing and constructing solutions for backup processes. The ability to reheal, that is, recover from data threats if they arise (the resurrection power). Data protection, which includes maintaining data replicas for long-term storage. Relocating data from one destination to another. Basically, backup and recovery strategies should cover all the areas mentioned here.
For any system data, application, or configuration, transaction logs are mission critical, though it depends on the datasets, configurations, and applications that are used to design the backup and recovery strategies. Hadoop is all about big data processing. After gathering some exabytes for data processing, the following are the obvious questions that we may come up with: What's the best way to back up data? Do we really need to take a backup of these large chunks of data? Where will we find more storage space if the current storage space runs out? Will we have to maintain distributed systems? What if our backup storage unit gets corrupted? The answer to the preceding questions depends on the situation you may be facing; let's see a few situations. One situation is where you may be dealing with a plethora of data. Hadoop is used for fact-finding semantics and data is in abundance. Here, the span of data is short; it is short lived and the important sources of the data are already backed up. In such a scenario, the policy of not backing up data at all is feasible, as there are already three copies (replicas) on our DataNodes (HDFS). Moreover, since Hadoop is still vulnerable to human error, a backup of configuration files and NameNode metadata (dfs.name.dir) should be created. You may find yourself facing a situation where the data center on which Hadoop runs crashes and the data is not available; this results in a failure to connect with mission-critical data. A possible solution here is to back up Hadoop, like any other cluster.

Replication of data using DistCp

To replicate data, the distcp command writes data to two different clusters. Let's look at the distcp command with a few examples or options. DistCp is a handy tool used for large inter/intra-cluster copying. It basically expands a list of files and directories into input for map tasks, each of which will copy a partition of the files specified in the source list. Let's understand how to use distcp with some basic examples. The most common use case of distcp is intercluster copying. Let's see an example:

bash$ hadoop distcp2 hdfs://ka-16:8020/parth/ghiya hdfs://ka-001:8020/knowarth/parth

This command will expand the namespace under /parth/ghiya on the ka-16 NameNode into a temporary file, partition its contents among a set of map tasks, and start the copy process on each TaskTracker from ka-16 to ka-001. The command used for copying can be generalized as follows:

hadoop distcp2 hftp://namenode-location:50070/basePath hdfs://namenode-location

Here, hftp://namenode-location:50070/basePath is the source and hdfs://namenode-location is the destination. In the preceding command, namenode-location refers to the hostname and 50070 is the NameNode's HTTP server port.

Updating and overwriting using DistCp

The -update option is used when we want to copy files from the source that don't exist on the target or have different contents, which we do not want to erase. The -overwrite option overwrites the target files even if they exist at the source. These options are invoked by simply adding -update or -overwrite to the command. In the example, we used distcp2, which is an advanced version of DistCp. The process will go smoothly even if we use the distcp command.
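For illustration, this is how the two options look on the command line, reusing the cluster names and paths from the earlier example; note that with -update or -overwrite, DistCp copies the contents of the source directory into the target rather than recreating the source directory itself:

# Copy only files that are missing on the target or whose contents differ:
hadoop distcp2 -update hdfs://ka-16:8020/parth/ghiya hdfs://ka-001:8020/knowarth/parth

# Overwrite files on the target even if they already exist:
hadoop distcp2 -overwrite hdfs://ka-16:8020/parth/ghiya hdfs://ka-001:8020/knowarth/parth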
Now, let's look at the two versions of DistCp: the legacy DistCp (or just DistCp) and the new DistCp (DistCp2). During the intercluster copy process, files that were skipped during the copy have all their file attributes (permissions, owner group information, and so on) left unchanged when we copy using legacy DistCp; this is not the case in the new DistCp, where these values are updated even if a file is skipped. Empty root directories among the source inputs were not created in the target folder in legacy DistCp, which is no longer the case in the new DistCp. There is a common misconception that Hadoop prevents data loss and that, therefore, we don't need to back up the data in the Hadoop cluster. Since Hadoop replicates data three times by default, this sounds like a safe statement; however, it is not 100 percent safe. While Hadoop protects from hardware failure on the data nodes—meaning that if one entire node goes down, you will not lose any data—there are other ways in which data loss may occur. Data loss may occur due to various reasons, such as Hadoop being highly susceptible to human error, corrupted data writes, accidental deletions, rack failures, and many such instances. Any of these is likely to cause data loss. Consider an example where a corrupt application can destroy all data replicas: during the process, it will attempt to compute each replica and, on not finding a possible match, it will delete the replica. User deletions are another example of how data can be lost, as Hadoop's trash mechanism is not enabled by default. Also, one of the most complicated and expensive-to-implement aspects of protecting data in Hadoop is the disaster recovery plan. There are many different approaches to this, and determining which approach is right requires a balance between cost, complexity, and recovery time. A real-life scenario is Facebook. The data that Facebook holds has grown exponentially from 15 TB to 30 PB, that is, 3,000 times the Library of Congress. With increasing data, the problem faced was the physical movement of the machines to the new data center, which required manpower and also impacted services for a period of time. Data availability within a short period of time is a requirement for any service; that's when Facebook started exploring Hadoop. Conquering the problem of dealing with such large repositories of data is yet another headache. The reason why Hadoop was invented was to keep the data bound to neighborhoods on commodity servers with reasonable local storage, and to provide maximum availability to data within the neighborhood. So, a data plan is incomplete without data backup and recovery planning. A big data execution using Hadoop demands a focus on the potential to recover from a crisis.

The backup philosophy

We need to determine whether Hadoop, the processes and applications that run on top of it (Pig, Hive, HDFS, and more), and specifically the data stored in HDFS are mission critical. If the data center where Hadoop is running disappeared, would the business stop? Some of the key points that have to be taken into consideration are explained in the sections that follow; by combining these points, we will arrive at the core of the backup philosophy.

Changes since the last backup

Considering the backup philosophy that we need to construct, the first thing we are going to look at is changes. We have a sound application running, and then we add some changes.
In case our system crashes and we need to go back to our last safe state, our backup strategy should have a clause covering the changes that have been made. These changes can be either database changes or configuration changes. Our clause should include the following points in order to construct a sound backup strategy: The changes we made since our last backup. The count of files changed. Ensuring that our changes are tracked. The possibility of bugs in user applications since the last change, which may cause hindrance and make it necessary to go back to the last safe state. After applying new changes to the last backup, if the application doesn't work as expected, then high priority should be given to taking the application back to its last safe state or backup. This ensures that the user is not interrupted while using the application or product.

The rate of new data arrival

The next thing we are going to look at is how many changes we are dealing with. Is our application being updated so much that we are not able to decide what the last stable version was? Data is produced at an ever-increasing rate. Consider Facebook, which alone produces 250 TB of data a day. Data production occurs at an exponential rate. Soon, terms such as zettabytes will become commonplace. Our clause should include the following points in order to construct a sound backup: The rate at which new data is arriving. The need for backing up each and every change. The time factor involved in the backup between two changes. Policies for having reserve backup storage.

The size of the cluster

The size of a cluster is yet another important factor; we will have to select a cluster size that allows us to optimize the environment for our purpose with exceptional results. Recalling the Yahoo! example, Yahoo! has 10 clusters all over the world, covering 20,000 nodes, and it has the maximum number of nodes in its large clusters. Our clause should include the following points in order to construct a sound backup: Selecting the right resources, which will allow us to optimize our environment. The selection of the right resources will vary as per need; say, for instance, users with I/O-intensive workloads will go for more spindles per core. A Hadoop cluster contains four types of roles: NameNode, JobTracker, TaskTracker, and DataNode. Handling the complexities of optimizing a distributed data center.

Priority of the datasets

The next thing we are going to look at is the new datasets that are arriving. With the increase in the rate of new data arrivals, we always face the dilemma of what to back up. Are we tracking all the changes in the backup? Now, if we are backing up all the changes, will our performance be compromised? Our clause should include the following points in order to construct a sound backup: Making the right backup of the dataset. Taking backups at a rate that will not compromise performance.

Selecting the datasets or parts of datasets

The next thing we are going to look at is what exactly is backed up. When we deal with large chunks of data, there's always a thought in our mind: did we miss anything while selecting the datasets, or parts of datasets, that have not been backed up yet?
Our clause should include the following points in order to construct a sound backup: Backup of the necessary configuration files. Backup of files and application changes.

The timeliness of data backups

With such a huge amount of data collected daily (as at Facebook), the time interval between backups is yet another important factor. Do we back up our data daily? Every two days? Every three days? Should we back up small chunks of data daily, or should we back up larger chunks at a later period? Our clause should include the following points in order to construct a sound backup: Dealing with any impacts if the time interval between two backups is large. Monitoring a timely backup strategy and going through it. The frequency of data backups depends on various aspects. Firstly, it depends on the application and usage. If it is I/O intensive, we may need more frequent backups, as no dataset is worth losing. If it is not so I/O intensive, we may keep the frequency low. We can determine the timeliness of data backups from the following points: The amount of data that we need to back up. The rate at which new updates are coming. Determining the window of possible data loss and making it as small as possible. Critical datasets that need to be backed up. Configuration and permission files that need to be backed up.

Reducing the window of possible data loss

The next thing we are going to look at is how to minimize the window of possible data loss. If the gap between our backups is large, what are the chances of data loss? What is our chance of recovering the latest files? Our clause should include the following points in order to construct a sound backup: The potential to recover the latest files in the case of a disaster. Having a low data-loss probability.

Backup consistency

The next thing we are going to look at is backup consistency. The probability of invalid backups should be low or, even better, zero. This is because if invalid backups are not tracked, copies of those invalid backups will be made further down the line, which will again disrupt our backup process. Our clause should include the following points in order to construct a sound backup: Avoid copying data when it's being changed. Possibly, construct a shell script that takes timely backups, as sketched at the end of this section. Ensure that the shell script is bug-free.

Avoiding invalid backups

We are going to continue the discussion on invalid backups. As you saw, HDFS makes three copies of our data for the recovery process. What if the original backup was flawed with errors or bugs? The three copies will be corrupted copies; now, when we recover these flawed copies, the result will indeed be a catastrophe. Our clause should include the following points in order to construct a sound backup: Avoid long intervals between backups. Have the right backup process, probably with an automated shell script. Track unnecessary backups. If our backup clause covers all the preceding points, we are surely on the way to making a good backup strategy. A good backup policy basically covers all these points; so, if a disaster occurs, it always aims to go back to the last stable state. That's all about backups.
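The following is a minimal sketch of the automated backup script mentioned above, assuming a Hadoop 2.x cluster with snapshots enabled on the source directory (hdfs dfsadmin -allowSnapshot /data); the hostnames, paths, and log file are illustrative only, not prescriptive. Taking a snapshot first addresses the consistency point: DistCp then copies from a read-only, point-in-time image rather than from data that may be changing:

#!/bin/sh
# Illustrative nightly HDFS backup sketch; not a production-hardened tool.
DAY=$(date +%Y-%m-%d)

# Take a read-only, point-in-time snapshot so we never copy changing data.
hdfs dfs -createSnapshot /data "backup-$DAY" || exit 1

# Ship the immutable snapshot to a second cluster with DistCp.
hadoop distcp "hdfs://primary-nn:8020/data/.snapshot/backup-$DAY" \
              "hdfs://backup-nn:8020/backups/$DAY" || exit 1

echo "$(date) backup-$DAY OK" >> /var/log/hdfs-backup.log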
Moving on, let's say a disaster occurs and we need to go to the last stable state. Let's have a look at the recovery philosophy and all the points that make a sound recovery strategy.

The recovery philosophy

After a deadly storm, we always try to recover from its after-effects. Similarly, after a disaster, we try to recover from the effects of the disaster. In just one moment, storage capacity, which was a boon, turns into a curse and just another expensive, useless thing. Starting off with the best question: what would be the best recovery philosophy? Well, it's obvious that the best philosophy would be one wherein we never have to perform recovery at all. Also, there may be scenarios where we need to do a manual recovery. Let's look at the possible levels of recovery before moving on to recovery in Hadoop: Recovery to the flawless state. Recovery to the last supervised state. Recovery to a possible past state. Recovery to a sound state. Recovery to a stable state. So, obviously, we want our recovery state to be flawless. But if that's not achievable, we are willing to compromise a little and allow the recovery to go to a possible past state we are aware of. Now, if that's not possible, again we are ready to compromise a little and allow it to go to the last possible sound state. That's how we deal with recovery: first aim for the best, and if that fails, compromise a little. Just like the saying goes, "The bigger the storm, the more work we have to do to recover," here we can also say, "The bigger the disaster, the more intense the recovery plan we have to adopt." So, the recovery philosophy that we construct should cover the following points: An automated system setup that detects a crash and restores the system to the last working state, where the application runs as per expected behavior. The ability to track modified files and copy them. Tracking the sequences on files, just like an auditor trails his audits. Merging files that are copied separately. Multiple version copies to maintain version control. The ability to treat updates without impacting the application's security and protection. Deleting the original copy only after carefully inspecting the changed copy. Treating new updates, but first making sure they are fully functional and will not hinder anything else; if they do, there should be a clause to go back to the last safe state. Coming back to recovery in Hadoop, the first question we may think of is: what happens when the NameNode goes down? When the NameNode goes down, so does the metadata file (the file that stores data about file owners and file permissions, where the file is stored on DataNodes, and more), and there will be no one present to route our read/write file requests to the DataNodes. Our goal will be to recover the metadata file. HDFS provides an efficient way to handle NameNode failures. There are basically two places where we can find the metadata: first, the fsimage file and, second, the edit logs. Our clause should include the following points: Maintain three copies of the NameNode metadata. When we try to recover, we get four options, namely continue, stop, quit, and always; choose wisely. Give preference to saving the safe part of the backups. If there is an ABORT! error, save the safe state. Hadoop provides four recovery modes based on the four options it provides (continue, stop, quit, and always): Continue: This allows you to continue over the bad parts. This option will let you cross over a few stray blocks and continue to try to produce a full recovery. This can be called the prompt when found error mode. Stop: This allows you to stop the recovery process and make an image file of the copy. The part where we stopped won't be recovered, because we are not allowing it to be. In this case, we can say that we have the safe-recovery mode. Quit: This exits the recovery process without making a backup at all.
Here, we can say that we have the no-recovery mode. Always: This goes one step further than continue. Always selects continue by default and thus avoids prompting on stray blocks found further on. This can be called the prompt only once mode. We will look at these in further discussions. Now, you may think that the backup and recovery philosophy is cool, but wasn't Hadoop designed to handle these failures? Well, of course, it was invented for this purpose, but there's always the possibility of a mishap at some level. Are we so overconfident that we refuse to take the precautions that can protect us, blindly entrusting our data to Hadoop? No, certainly we aren't. We are going to take every possible preventive step from our side. In the next section, we look at why we need preventive measures to back up Hadoop.

Knowing the necessity of backing up Hadoop

Change is the fundamental law of nature. There may come a time when Hadoop is upgraded on the present cluster, as we see many system upgrades everywhere. As no upgrade is bug-free, there is a probability that existing applications may not work the way they used to. There may be scenarios where we don't want to lose any data, let alone start HDFS from scratch. This is a scenario where backup is useful, so that a user can go back to a point in time. Looking at the HDFS replication process, the NameNode handles the client request to write a file on a DataNode. The DataNode then replicates the block and writes the block to another DataNode. This DataNode repeats the same process. Thus, we have three copies of the same block. How these DataNodes are selected for placing copies of blocks is another issue, which we are going to cover later in rack awareness. You will see how to place these copies efficiently so as to handle situations such as hardware failure. The bottom line is that when our DataNode is down, there's no need to panic; we still have a copy on a different DataNode. Now, this approach gives us various advantages, such as: Security: This ensures that blocks are stored on two different DataNodes. High write capacity: The client writes on only a single DataNode; the replication factor is handled by the DataNodes. Read options: This denotes better options for where to read from; the NameNode maintains records of all the locations of the copies and their distances from the NameNode. Block circulation: The client writes only a single block; the others are handled through the replication pipeline. During the write operation on a DataNode, it receives data from the client and passes data to the next DataNode simultaneously; thus, our performance factor is not compromised. Data never passes through the NameNode. The NameNode takes the client's request to write data on a DataNode and processes the request by deciding on the division of files into blocks and the replication factor. The following figure shows the replication pipeline, wherein a block of the file is written and three different copies are made at different DataNode locations: After hearing such a foolproof plan and seeing so many advantages, we again arrive at the same question: is there a need for backup in Hadoop? Of course there is. There often exists a common mistaken belief that Hadoop shelters you against data loss, which gives you the freedom to not take backups in your Hadoop cluster. Hadoop, by convention, has a facility to replicate your data three times by default.
Although reassuring, this statement is not safe and does not guarantee foolproof protection against data loss. Hadoop gives you the power to protect your data against hardware failures; in the scenario wherein one disk, node, cluster, or region goes down, data will still be preserved for you. However, there are many scenarios where data loss may occur. Consider an example of a classic human-prone error: the storage locations that the user provides during operations in Hive. If the user provides a location wherein data already exists and they perform a query on the same table, the entire existing data will be deleted, be it of size 1 GB or 1 TB. In the following figure, the client issues a read operation but we have a faulty program. Going through the process, the NameNode is going to consult its metadata file for the location of the DataNode containing the block. But when the block is read from the DataNode, it's not going to match the requirements, so the NameNode will classify that block as an under-replicated block and move on to the next copy of the block. Oops, again we will have the same situation. This way, all the safe copies of the block will be turned into under-replicated blocks, whereby HDFS fails and we need some other backup strategy: When a copy does not match what the NameNode expects, it discards the copy and replaces it with a fresh copy that it has. HDFS replicas are not your one-stop solution for protection against data loss.

The needs for recovery

Now, we need to decide to what level we want to recover. As you saw earlier, we have four modes available, which recover either to a safe copy, the last possible state, or no copy at all. You need to take the appropriate steps based on the needs decided in the disaster recovery plan we defined earlier. We need to look at the following factors: The performance impact (is it compromised?). How large is the data footprint that my recovery method leaves? What is the application downtime? Is there just one backup, or are there incremental backups? Is it easy to implement? What is the average recovery time that the method provides? Based on the preceding aspects, we will decide which modes of recovery we need to implement. The following methods are available in Hadoop: Snapshots: Snapshots simply capture a moment in time and allow you to go back to a possible recovery state. Replication: This involves copying data from one cluster and moving it to another cluster, out of the vicinity of the first cluster, so that if one cluster is faulty, it doesn't have an impact on the other. Manual recovery: Probably the most brutal one: moving data manually from one cluster to another. Clearly, its downsides are a large footprint and long application downtime. API: There's always the option of custom development using the available public API. We will move on to the recovery areas in Hadoop.

Understanding recovery areas

Recovering data after some sort of disaster needs a well-defined business disaster recovery plan. So, the first step is to decide our business requirements, which will define the need for data availability, precision in data, and requirements for the uptime and downtime of the application. Any disaster recovery policy should basically cover areas as per the requirements in the disaster recovery principle. Recovery areas define those portions without which an application won't be able to come back to its normal state.
Armed with proper information, you will be able to decide the priority of which areas need to be recovered. Recovery areas cover the following core components: Datasets. NameNodes. Applications. Database sets in HBase. Let's go back to the Facebook example. Facebook uses a customized version of MySQL for its home page and other interests. But when it comes to Facebook Messenger, Facebook uses the NoSQL database provided with the Hadoop ecosystem (HBase). Looking at it from that point of view, Facebook has both of these in its recovery areas and needs different steps to recover each of them.

Summary

In this article, we went through the backup and recovery philosophy and all the points a good backup philosophy should have. We went through what a recovery philosophy constitutes. We saw the modes available for recovery in Hadoop. Then, we looked at why backup is important even though HDFS provides the replication process. Lastly, we looked at the recovery needs and areas. Quite a journey, wasn't it? Well, hold on tight. These are just your first steps into the Hadoop User Group (HUG). Resources for Article: Further resources on this subject: Cassandra Architecture [article] Oracle GoldenGate 12c — An Overview [article] Backup and Restore Improvements [article]
Using Handlebars with Express

Packt
10 Aug 2015
17 min read
In this article written by Paul Wellens, author of the book Practical Web Development, we cover a brief description of the following topics: Templates, Node.js, Express 4.

Templates

Templates come in many shapes or forms. Traditionally, they are non-executable files with some pre-formatted text, used as the basis of a bazillion documents that can be generated with a computer program. I worked on a project where I had to write a program that would take a Microsoft Word template, containing parameters like $first, $name, $phone, and so on, and generate a specific Word document for every student in a school. Web templating does something very similar. It uses a templating processor that takes data from a source of information, typically a database, and a template, a generic HTML file with some kind of parameters inside. The processor then merges the data and template to generate a bunch of static web pages or dynamically generates HTML on the fly. If you have been using PHP to create dynamic webpages, you have been busy with web templating. Why? Because you have been inserting PHP code inside HTML files in between the <?php and ?> strings. Your templating processor was the Apache web server, which has many additional roles. By the time your browser gets to see the result of your code, it is pure HTML. This makes this an example of server side templating. You could also use Ajax and PHP to transfer data in the JSON format and then have the browser process that data using JavaScript to create the HTML you need. Combine this with using templates and you will have client side templating.

Node.js

What Le Sacre du Printemps by Stravinsky did to the world of classical music, Node.js may have done to the world of web development. At its introduction, it shocked the world. By now, Node.js is considered by many as the coolest thing. Just like Le Sacre is a totally different kind of music—but by now every child who has seen Fantasia has heard it—Node.js is a different way of doing web development. Rather than writing an application and using a web server to serve up your code to a browser, the application and the web server are one and the same. This may sound scary, but you should not worry, as there is an entire community that has developed modules you can obtain using the npm tool. Before showing you an example, I need to point out an extremely important feature of Node.js: the language in which you will write your web server and application is JavaScript. So Node.js gives you server side JavaScript.

Installing Node.js

How to install Node.js will differ depending on your OS, but the result is the same everywhere. It gives you two programs: node and npm.

npm

The node packaging manager (npm) is the tool that you use to look for and install modules. Each time you write code that needs a module, you will have to add a line like this:

var module = require('module');

The module will have to be installed first, or the code will fail. This is how it is done:

npm install module

or

npm -g install module

The latter will attempt to install the module globally, the former in the directory where the command is issued. It will typically install the module in a folder called node_modules.

node

The node program is the command to use to start your Node.js program, for example:

node myprogram.js

node will start and interpret your code. Type Ctrl-C to stop node.
Now create a file myprogram.js containing the following text:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, 'localhost');
console.log('Server running at http://localhost:8080');

So, if you installed Node.js and the required http module, typing node myprogram.js in a terminal window will start up a web server in your console. And, when you type http://localhost:8080 in a browser, you will see the world famous two word program example on your screen. This is the equivalent of getting the It works! thing after testing your Apache web server installation. As a matter of fact, if you go to http://localhost:8080/it/does/not/matterwhat, the same will appear. Not very useful maybe, but it is a web server.

Serving up static content

This does not work in the way we are used to. URLs typically point to a file (or a folder, in which case the server looks for an index.html file), foo.html, or bar.php, and when present, it is served up to the client. So what if we want to do this with Node.js? We will need a module. Several exist to do the job. We will use node-static in our example. But first we need to install it:

npm install node-static

In our app, we will now create not only a web server, but a file server as well. It will serve all the files in the local directory public. It is good to have all the so-called static content together in a separate folder. These are basically all the files that will be served up to and interpreted by the client. As we will now end up with a mix of client code and server code, it is a good practice to separate them. When you use the Express framework, you have an option to have Express create these things for you. So, here is a second, more complete, Node.js example, including all its static content.

hello.js, our Node.js app:

var http = require('http');
var static = require('node-static');
var fileServer = new static.Server('./public');
http.createServer(function (req, res) {
  fileServer.serve(req, res);
}).listen(8080, 'localhost');
console.log('Server running at http://localhost:8080');

hello.html is stored in ./public:

<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <title>Hello world document</title>
  <link href="./styles/hello.css" rel="stylesheet">
</head>
<body>
  <h1>Hello, World</h1>
</body>
</html>

hello.css is stored in public/styles:

body {
  background-color: #FFDEAD;
}
h1 {
  color: teal;
  margin-left: 30px;
}
.bigbutton {
  height: 40px;
  color: white;
  background-color: teal;
  margin-left: 150px;
  margin-top: 50px;
  padding: 15px 15px 25px 15px;
  font-size: 18px;
}

So, if we now visit http://localhost:8080/hello, we will see our, by now too familiar, Hello World message with some basic styling, proving that our file server also delivered the CSS file. You can easily take it one step further and add JavaScript and the jQuery library, putting them in, for example, public/js/hello.js and public/js/jquery.js respectively.

Too many notes

With Node.js, you only install the modules that you need, so it does not include the kitchen sink by default! You will benefit from that as far as performance goes. Back in California, I was a proud product manager of a PC UNIX product, and one of our coolest value-adds was a tool, called kconfig, that would allow people to customize what would be inside the UNIX kernel, so that it would only contain what was needed. This is what Node.js reminds me of. And it is written in C, as was UNIX. Deja vu.
However, if we wanted our Node.js web server to do everything the Apache Web Server does, we would need a lot of modules. Our application code needs to be added to that as well. That means a lot of modules. Like the critics in the movie Amadeus said: too many notes.

Express 4

A good way to get the job done with fewer notes is by using the Express framework. On the expressjs.com website, it is called a minimal and flexible Node.js web application framework, providing a robust set of features for building web applications. This is a good way to describe what Express can do for you. It is minimal, so there is little overhead for the framework itself. It is flexible, so you can add just what you need. It gives a robust set of features, which means you do not have to create them yourself, and they have been tested by an ever growing community. But we need to get back to templating, so all we are going to do here is explain how to get Express, and give one example.

Installing Express

As Express is also a node module, we install it as such. In your project directory for your application, type:

npm install express

You will notice that a folder called express has been created inside node_modules, and inside that one, there is another collection of node modules. These are examples of what is called middleware. In the code example that follows, we assume app.js as the name of our JavaScript file, and app for the variable that you will use in that file for your instance of Express. This is for the sake of brevity. It would be better to use a string that matches your project name. We will now use Express to rewrite the hello.js example. All static resources in the public directory can remain untouched. The only change is in the node app itself:

var express = require('express');
var path = require('path');
var app = express();
app.set('port', process.env.PORT || 3000);
var options = {
  dotfiles: 'ignore',
  extensions: ['htm', 'html'],
  index: false
};
app.use(express.static(path.join(__dirname, 'public'), options));
app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' +
    app.get('port') + '; press Ctrl-C to terminate.');
});

This code uses so-called middleware (static) that is included with Express. There is a lot more available from third parties. Well, compared to our Node.js example, it is about the same number of lines. But it looks a lot cleaner and it does more for us. You no longer need to explicitly include the HTTP module and other such things.

Templating and Express

We need to get back to templating now. Imagine all the JavaScript ecosystem we just described. Yes, we could still put our client JavaScript code in between the <script> tags, but what about the server JavaScript code? There is no such thing as <?javascript ?>! Node.js and Express support several templating languages that allow you to separate layout and content, and have the template system do the work of fetching the content and injecting it into the HTML. The default templating processor for Express appears to be Jade, which uses its own, albeit more compact than HTML, language. Unfortunately, that would mean that you have to learn yet another syntax to produce something. We propose to use handlebars.js. There are two reasons why we have chosen handlebars.js: It uses HTML as the language. It is available on both the client and server side.

Getting the handlebars module for Express

Several Express modules exist for handlebars.
We happen to like the one with the surprising name express-handlebars. So, we install it, as follows:

npm install express-handlebars

Layouts

I almost called this section templating without templates, as our first example will not use a parameter inside the templates. Most websites will consist of several pages, either static or dynamically generated ones. All these pages usually have common parts: a header and footer part, a navigation part or menu, and so on. This is the layout of our site. What distinguishes one page from another, usually, is some part in the body of the page, where the home page has different information than the other pages. With express-handlebars, you can separate layout and content. We will start with a very simple example. Inside your project folder that contains public, create a folder, views, with a subdirectory layouts. Inside the layouts subfolder, create a file called main.handlebars. This is your default layout. Building on top of the previous example, have it say:

<!doctype html>
<html>
<head>
    <title>Handlebars demo</title>
    <link href="./styles/hello.css" rel="stylesheet">
</head>
<body>
    {{{body}}}
</body>
</html>

Notice the {{{body}}} part. This token will be replaced by HTML. Handlebars escapes HTML. If we want our HTML to stay intact, we use {{{ }}} instead of {{ }}. body is a reserved word for handlebars. Create, in the folder views, a file called hello.handlebars with the following content. This will be one (of many) examples of the HTML that {{{body}}} will be replaced by:

<h1>Hello, World</h1>

Let's create a few more. june.handlebars with:

<h1>Hello, June Lake</h1>

And bodie.handlebars containing:

<h1>Hello, Bodie</h1>

Our first handlebars example

Now, create a file, handlehello.js, in the project folder. For convenience, we will keep the relevant code of the previous Express example:

var express = require('express');
var path = require('path');
var app = express();
var exphbs = require('express-handlebars');

app.engine('handlebars', exphbs({defaultLayout: 'main'}));
app.set('view engine', 'handlebars');
app.set('port', process.env.PORT || 3000);

var options = {
    dotfiles: 'ignore',
    etag: false,
    extensions: ['htm', 'html'],
    index: false
};

app.use(express.static(path.join(__dirname, 'public'), options));

app.get('/', function(req, res) {
    res.render('hello');   // this is the important part
});

app.get('/bodie', function(req, res) {
    res.render('bodie');
});

app.get('/june', function(req, res) {
    res.render('june');
});

app.listen(app.get('port'), function () {
    console.log('Hello express started on http://localhost:' +
        app.get('port') + '; press Ctrl-C to terminate.');
});

Everything that worked before still works, but if you type http://localhost:3000/, you will see a page with the layout from main.handlebars and {{{body}}} replaced by, you guessed it, the same Hello World with basic markup that looks the same as our hello.html example. Let's look at the new code. First, of course, we need to add a require statement for our express-handlebars module, giving us an instance of express-handlebars. The next two lines specify what the view engine is for this app and what the extension is that is used for the templates and layouts. We pass one option to express-handlebars, defaultLayout, setting the default layout to be main. This way, we could have different versions of our app with different layouts, for example, one using Bootstrap and another using Foundation.
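Incidentally, express-handlebars also lets an individual res.render call override the default layout, which is one way to serve those different versions side by side. Here is a minimal sketch; the /fancy route and the bootstrap layout file are invented for this example:

// assumes a second layout file exists at views/layouts/bootstrap.handlebars
app.get('/fancy', function(req, res) {
    // this response gets wrapped in bootstrap.handlebars
    // instead of the default main.handlebars
    res.render('hello', { layout: 'bootstrap' });
});

Passing layout: false instead should render the view with no layout at all.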
The res.render calls determine which views need to be rendered, so if you type http://localhost:3000/june, you will get Hello, June Lake, rather than Hello World. But this is not very useful yet, as in this implementation, you still have a separate file for each Hello flavor. Let's create a true template instead.

Templates

In the views folder, create a file, town.handlebars, with the following content:

{{!-- Our first template with tokens --}}
<h1>Hello, {{town}} </h1>

Please note the comment line. This is the syntax for a handlebars comment. You could use HTML comments as well, of course, but the advantage of using handlebars comments is that they will not show up in the final output. Next, add this to your JavaScript file:

app.get('/lee', function(req, res) {
    res.render('town', { town: "Lee Vining"});
});

Now, we have a template that we can use over and over again with a different context, in this example, a different town name. All you have to do is pass a different second argument to the res.render call, and {{town}} in the template will be replaced by the value of town in the object. In general, what is passed as the second argument is referred to as the context.

Helpers

The token can also be replaced by the output of a function. After all, this is JavaScript. In the context of handlebars, we call those helpers. You can write your own, or use some of the cool built-in ones, such as #if and #each.

#if/else

Let us update town.handlebars as follows:

{{#if town}}
<h1>Hello, {{town}} </h1>
{{else}}
<h1>Hello, World </h1>
{{/if}}

This should be self explanatory. If the variable town has a value, use it; if not, then show the world. Note that what comes after #if can only be something that is either true or false, zero or not. The helper does not support a construct such as #if x < y.

#each

A very useful built-in helper is #each, which allows you to walk through an array of things and generate HTML accordingly. This is an example of the code that could be inside your app and the template you could use in your view folder:

app.js code snippet:

var californiapeople = {
    people: [
        {"name": "Adams", "first": "Ansel", "profession": "photographer", "born": "San Francisco"},
        {"name": "Muir", "first": "John", "profession": "naturalist", "born": "Scotland"},
        {"name": "Schwarzenegger", "first": "Arnold", "profession": "governator", "born": "Germany"},
        {"name": "Wellens", "first": "Paul", "profession": "author", "born": "Belgium"}
    ]
};

app.get('/californiapeople', function(req, res) {
    res.render('californiapeople', californiapeople);
});

template (californiapeople.handlebars):

<table class="cooltable">
{{#each people}}
    <tr><td>{{first}}</td><td>{{name}}</td>
    <td>{{profession}}</td></tr>
{{/each}}
</table>

Now we are well on our way to doing some true templating. You can also write your own helpers, which is beyond the scope of an introductory article. However, before we leave you, there is one cool feature of handlebars you need to know about: partials.

Partials

In web development, where you dynamically generate HTML to be part of a web page, it is often the case that you repetitively need to do the same thing, albeit on a different page. There is a cool feature in express-handlebars that allows you to do that very same thing: partials. Partials are templates you can refer to inside a template, using a special syntax, drastically shortening your code that way. The partials are stored in a separate folder. By default, that would be views/partials, but you can even use subfolders.
Let's redo the previous example, but with a partial. So, our template is going to be extremely petite:

{{!-- people.handlebars inside views --}}
{{> peoplepartial }}

Notice the > sign; this is what indicates a partial. Now, here is the familiar looking partial template:

{{!-- peoplepartial.handlebars inside views/partials --}}
<h1>Famous California people </h1>
<table>
{{#each people}}
    <tr><td>{{first}}</td><td>{{name}}</td>
    <td>{{profession}}</td></tr>
{{/each}}
</table>

And, following is the JavaScript code that triggers it:

app.get('/people', function(req, res) {
    res.render('people', californiapeople);
});

So, we give it the same context, but the view that is rendered is ridiculously simplistic, as there is a partial underneath that will take care of everything. Of course, these were all examples to demonstrate how handlebars and Express can work together really well, nothing more than that.

Summary

In this article, we talked about using templates in web development. Then, we zoomed in on using Node.js and Express, and introduced Handlebars.js. Handlebars.js is cool, as it lets you separate logic from layout, and you can use it server-side (which is where we focused) as well as client-side. Moreover, you will still be able to use HTML for your views and layouts, unlike with other templating processors. For those of you new to Node.js, I compared it to what Le Sacre du Printemps was to music. To all of you, I recommend the recording by the Los Angeles Philharmonic and Esa-Pekka Salonen. I had season tickets for this guy and went to his inaugural concert with Mahler's third symphony. PHP had not been written yet, but I had heard this particular performance on the radio while on the road in California, and it was magnificent. Check it out. And, also check out Express and handlebars.

Resources for Article:

Further resources on this subject:
Let's Build with AngularJS and Bootstrap
The Bootstrap grid system
MODx Web Development: Creating Lists

EAV model
Packt
10 Aug 2015
11 min read
In this article by Allan MacGregor, author of the book Magento PHP Developer's Guide - Second Edition, we cover details about EAV models, their usefulness in retrieving data, and the advantages they provide to merchants and developers. EAV stands for entity, attribute, and value, and is probably the most difficult concept for new Magento developers to grasp. While the EAV concept is not unique to Magento, it is rarely implemented on modern systems. Additionally, the Magento implementation is not a simple one. (For more resources related to this topic, see here.)

What is EAV?

In order to understand what EAV is and what its role within Magento is, we need to break down the parts of the EAV model:

- Entity: This represents the data items (objects) inside Magento: products, customers, categories, and orders. Each entity is stored in the database with a unique ID.
- Attribute: These are our object properties. Instead of having one column per attribute on the product table, attributes are stored in separate sets of tables.
- Value: As the name implies, it is simply the value linked to a particular attribute.

This data model is the secret behind Magento's flexibility and power, allowing entities to add and remove new properties without having to make any changes to the code, templates, or the database schema. This model can be seen as a vertical way of growing our database (new attributes mean more rows), while the traditional model involves a horizontal growth pattern (new attributes mean more columns), which would result in a schema redesign every time new attributes are added. The EAV model not only allows for the fast evolution of our database, but is also more effective because it only works with non-empty attributes, avoiding the need to reserve additional space in the database for null values. If you are interested in exploring and learning more about the Magento database structure, I highly recommend visiting www.magereverse.com. Adding a new product attribute is as simple as going to the Magento backend and specifying the new attribute type, be it color, size, brand, or anything else. The opposite is true as well, and we can get rid of unused attributes on our product or customer models. For more information on managing attributes, visit http://www.magentocommerce.com/knowledge-base/entry/how-do-attributes-work-in-magento. The Magento Community Edition currently has eight different types of EAV objects:

- Customer
- Customer Address
- Products
- Product Categories
- Orders
- Invoices
- Credit Memos
- Shipments

The Magento Enterprise Edition has one additional type called RMA item, which is part of the Return Merchandise Authorization (RMA) system. All this flexibility and power is not free; there is a price to pay. Implementing the EAV model results in having our entity data distributed across a large number of tables. For example, just the product model is distributed across around 40 different tables. Other major downsides of EAV are the loss of performance while retrieving large collections of EAV objects and an increase in database query complexity. As the data is more fragmented (stored in more tables), selecting a single record involves several joins. One way Magento works around this downside of EAV is by making use of indexes and flat tables. For example, Magento can save all the product information into the flat_catalog table for easier and faster access.
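To make the vertical versus horizontal idea concrete, here is a toy illustration of the same product stored both ways. This is not Magento's actual schema; the table and column names are invented purely for the example:

-- Traditional (horizontal) model: a new attribute means a new column
CREATE TABLE product_flat (
    entity_id INT PRIMARY KEY,
    sku VARCHAR(64),
    name VARCHAR(255),
    color VARCHAR(32)    -- adding "size" later forces an ALTER TABLE
);

-- EAV (vertical) model: a new attribute is just another row
-- each row holds one (entity, attribute, value) triple
CREATE TABLE product_eav (
    entity_id INT,
    attribute_code VARCHAR(64),
    value VARCHAR(255)
);

INSERT INTO product_eav VALUES (1, 'name', 'Blue T-Shirt');
INSERT INTO product_eav VALUES (1, 'color', 'Blue');
-- "size" requires no schema change at all
INSERT INTO product_eav VALUES (1, 'size', 'M');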
Let's continue using Magento products as our example and manually build the query to retrieve a single product. If you have phpMyAdmin or MySQL Workbench installed on your development environment, you can experiment with the following queries. The tools can be downloaded from the phpMyAdmin website at http://www.phpmyadmin.net/ and the MySQL Workbench website at http://www.mysql.com/products/workbench/. The first table that we need to use is the catalog_product_entity table. We can consider this our main product EAV table, since it contains the main entity records for our products. Let's query the table by running the following SQL query:

SELECT * FROM `catalog_product_entity`;

The table contains the following fields:

- entity_id: This is our product's unique identifier that is used internally by Magento.
- entity_type_id: Magento has several different types of EAV models. Products, customers, and orders are just some of them. Identifying each of these by type allows Magento to retrieve the attributes and values from the appropriate tables.
- attribute_set_id: Product attributes can be grouped locally into attribute sets. Attribute sets allow even further flexibility in the product structure, as products are not forced to use all available attributes.
- type_id: There are several different types of products in Magento: simple, configurable, bundled, downloadable, and grouped products, each with unique settings and functionality.
- sku: This stands for Stock Keeping Unit and is a number or code used to identify each unique product or item for sale in a store. This is a user-defined value.
- has_options: This is used to identify if a product has custom options.
- required_options: This is used to identify if any of the custom options are required.
- created_at: This is the row creation date.
- updated_at: This is the last time the row was modified.

Now we have a basic understanding of the product entity table. Each record represents a single product in our Magento store, but we don't have much information about that product beyond the SKU and the product type. So, where are the attributes stored? And how does Magento know the difference between a product attribute and a customer attribute? For this, we need to take a look at the eav_attribute table by running the following SQL query:

SELECT * FROM `eav_attribute`;

As a result, we will not only see the product attributes, but also the attributes corresponding to the customer model, order model, and so on. Fortunately, we already have a key to filter the attributes from this table. Let's run the following query:

SELECT * FROM `eav_attribute` WHERE entity_type_id = 4;

This query tells the database to only retrieve the attributes where the entity_type_id column is equal to the product entity_type_id (4). Before moving on, let's analyze the most important fields inside the eav_attribute table:

- attribute_id: This is the unique identifier for each attribute and the primary key of the table.
- entity_type_id: This relates each attribute to a specific EAV model type.
- attribute_code: This is the name or key of our attribute and is used to generate the getters and setters for our magic methods.
- backend_model: This manages loading and storing data into the database.
- backend_type: This specifies the type of value stored in the backend (database).
- backend_table: This is used to specify if the attribute should be stored in a special table instead of the default EAV table.
- frontend_model: This handles the rendering of the attribute element in a web browser.
- frontend_input: Similar to the frontend model, the frontend input specifies the type of input field the web browser should render.
- frontend_label: This is the label/name of the attribute as it should be rendered by the browser.
- source_model: This is used to populate an attribute with possible values. Magento comes with several predefined source models for countries, yes or no values, regions, and so on.

Retrieving the data

At this point, we have successfully retrieved a product entity and the specific attributes that apply to that entity. Now it's time to start retrieving the actual values. In order to simplify the example (and the query) a little, we will only try to retrieve the name attribute of our products. How do we know which table our attribute values are stored in? Well, thankfully, Magento follows a naming convention for these tables. If we inspect our database structure, we will notice that there are several tables using the catalog_product_entity prefix:

- catalog_product_entity
- catalog_product_entity_datetime
- catalog_product_entity_decimal
- catalog_product_entity_int
- catalog_product_entity_text
- catalog_product_entity_varchar
- catalog_product_entity_gallery
- catalog_product_entity_media_gallery
- catalog_product_entity_tier_price

Wait! How do we know which is the right table to query for our name attribute values? If you were paying attention, I already gave you the answer. Remember that the eav_attribute table had a column called backend_type? Magento EAV stores each attribute in a different table based on the backend type of that attribute. If we want to confirm the backend type of our name attribute, we can do so by running the following code:

SELECT * FROM `eav_attribute` WHERE `entity_type_id` = 4 AND `attribute_code` = 'name';

As a result, we should see that the backend type is varchar and that the values for this attribute are stored in the catalog_product_entity_varchar table. Let's inspect this table. The catalog_product_entity_varchar table is formed by only six columns:

- value_id: This is the attribute value's unique identifier and the primary key.
- entity_type_id: This is the entity type ID to which this value belongs.
- attribute_id: This is the foreign key that relates the value to our eav_attribute table.
- store_id: This is the foreign key matching an attribute value with a store view.
- entity_id: This is the foreign key relating to the corresponding entity table, in this case, catalog_product_entity.
- value: This is the actual value that we want to retrieve.

Depending on the attribute configuration, we can have it as a global value, meaning it applies across all store views, or as a value per store view. Now that we finally have all the tables that we need to retrieve the product information, we can build our query:

SELECT p.entity_id AS product_id, var.value AS product_name, p.sku AS product_sku
FROM catalog_product_entity p, eav_attribute eav, catalog_product_entity_varchar var
WHERE p.entity_type_id = eav.entity_type_id
    AND var.entity_id = p.entity_id
    AND eav.attribute_code = 'name'
    AND eav.attribute_id = var.attribute_id

From our query, we should see a result set with three columns: product_id, product_name, and product_sku. Let's step back for a second: just to get product names and SKUs with raw SQL, we had to write a five-line query, and we only retrieved two values from our products, coming from one single EAV value table. If we wanted to retrieve a numeric field such as price, or a text value like the product description, we would need to join additional value tables.
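To get a feel for how quickly this grows, here is a sketch of the same query extended to also fetch the price attribute, which has a decimal backend type and therefore lives in catalog_product_entity_decimal. The attribute codes are real, but treat the query as illustrative rather than copy-paste ready:

SELECT p.entity_id AS product_id,
       var.value   AS product_name,
       dec_t.value AS product_price,
       p.sku       AS product_sku
FROM catalog_product_entity p,
     eav_attribute eav_name,
     eav_attribute eav_price,
     catalog_product_entity_varchar var,
     catalog_product_entity_decimal dec_t
WHERE p.entity_type_id = eav_name.entity_type_id
    AND var.entity_id = p.entity_id
    AND eav_name.attribute_code = 'name'
    AND eav_name.attribute_id = var.attribute_id
    AND p.entity_type_id = eav_price.entity_type_id
    AND dec_t.entity_id = p.entity_id
    AND eav_price.attribute_code = 'price'
    AND eav_price.attribute_id = dec_t.attribute_id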
If we didn't have an ORM in place, maintaining Magento would be almost impossible. Fortunately, we do have an ORM, and most likely, you will never need to deal with raw SQL to work with Magento. That said, let's see how we can retrieve the same product information by using the Magento ORM. Our first step is going to be to instantiate a product collection:

$collection = Mage::getModel('catalog/product')->getCollection();

Then we will specifically tell Magento to select the name attribute:

$collection->addAttributeToSelect('name');

Then, we will ask it to sort the collection by name:

$collection->setOrder('name', 'asc');

Finally, we will tell Magento to load the collection:

$collection->load();

The end result is a collection of all the products in the store, sorted by name. We can inspect the actual SQL query by running the following code:

echo $collection->getSelect()->__toString();

In just three lines of code, we are telling Magento to grab all the products in the store, to specifically select the name, and finally to order the products by name. The last line, $collection->getSelect()->__toString(), allows us to see the actual query that Magento is executing on our behalf. The actual query being generated by Magento is as follows:

SELECT `e`.*, IF(at_name.value_id > 0, at_name.value, at_name_default.value) AS `name`
FROM `catalog_product_entity` AS `e`
LEFT JOIN `catalog_product_entity_varchar` AS `at_name_default`
    ON (`at_name_default`.`entity_id` = `e`.`entity_id`)
    AND (`at_name_default`.`attribute_id` = '65')
    AND `at_name_default`.`store_id` = 0
LEFT JOIN `catalog_product_entity_varchar` AS `at_name`
    ON (`at_name`.`entity_id` = `e`.`entity_id`)
    AND (`at_name`.`attribute_id` = '65')
    AND (`at_name`.`store_id` = 1)
ORDER BY `name` ASC

As we can see, the ORM and the EAV models are wonderful tools that not only put a lot of power and flexibility in the hands of developers, but also do it in a way that is comprehensive and easy to use.

Summary

In this article, we learned about EAV models and how they are structured to provide Magento with the data flexibility and extensibility that both merchants and developers can take advantage of.

Resources for Article:

Further resources on this subject:
Creating a Shipping Module [article]
Preparing and Configuring Your Magento Website [article]
Optimizing Magento Performance — Using HHVM [article]

Hands-on with Prezi Mechanics
Packt
10 Aug 2015
8 min read
In this article by J.J. Sylvia IV, author of the book Mastering Prezi for Business Presentations - Second Edition, we will see how to edit lines and use styled symbols. We will also look at the grouping feature and get a brief introduction to the Prezi text editor. (For more resources related to this topic, see here.)

Editing lines

When editing lines or arrows, you can change them from being straight to curved by dragging the center point in any direction. This is extremely useful when creating the line drawings we saw earlier. It's also useful to get arrows pointing at various objects on your canvas.

Styled symbols

If you're on a tight deadline, or creating drawings with shapes simply isn't for you, then the styles available in Prezi may be of more interest to you. These are common symbols that Prezi has created in a few different styles that can be easily inserted into any of your presentations. You can select these from the same Symbols & shapes… option in the Insert menu where we found the symbols. You'll see several different styles to choose from on the right-hand side of your screen. Each of these categories has similar symbols, but styled differently. There is a wide variety of symbols available, ranging from people to social media logos. You can pick a style that best matches your theme or the atmosphere you've created for your presentation. Instead of creating your own person from shapes, you can select from a variety of people symbols available. Although these symbols can be very handy, you should be aware that you can't edit them as part of your presentation. If you decide to use one, note that it will work as it is: there are no new hairstyles for these symbols.

Highlighter

The highlighter tool is extremely useful for pointing out key pieces of information, such as an interesting fact. To use it, navigate to the Insert menu and select the Highlighter option. Then, just drag the cursor across the text you'd like to highlight. Once you've done this, the highlighter marks become objects in their own right, so you can click on them to change their size or position just as you would do for a shape. To change the color of your highlighter, you will need to go into the Theme Wizard and edit the RGB values. We'll cover how to do this later when we discuss branding.

Grouping

Grouping is a great feature that allows you to move or edit several different elements of your presentation at once. This can be especially useful if you're trying to reorganize the layout of your Prezi after it's been created, or to add animations to several elements at once. Let's go back to the drawing we created earlier to see how this might work. The first way to group items is to hold down the Ctrl key (Command on Mac OS) and left-click on each element you want to group individually. In this case, I need to click on each individual line that makes up the flat top hair in the preceding image. This might be necessary if I only want to group the hair, for example. Another method for grouping is to hold down the Shift key while dragging your mouse to select multiple items at once. In the preceding screenshot, I've selected my entire person at once. Now, I can easily rotate, resize, or move the entire person at once, without having to move each individual line or shape. If you select a group of objects, move them, and then realize that a piece is missing because it didn't get selected, just press the Ctrl+Z (Command+Z on Mac OS) keys on your keyboard to undo the move.
Then, broaden your selection and try again. Alternatively, you can hold down the Shift key and simply click on the piece you missed to add it to the group. If we want to keep these elements grouped together, instead of having to reselect them each time we decide to make a change, we can click on the Group button that appears with this change. Now these items will stay grouped unless we click on the new Ungroup button, now located in the same place where the Group button previously was. You can also use frames to group material together. If you already created frames as part of your layout, this might make the grouping process even easier.

Prezi text editor

Over the years, the Prezi text editor has evolved to be quite robust, and it's now possible to easily do all of your text editing directly within Prezi.

Spell checker

When you spell something incorrectly, Prezi will underline the word it doesn't recognize with a red line, just as you would see in Microsoft Word or any other text editor. To correct the word, simply right-click on it (or Command + click on Mac OS) and select the word you meant to type from the suggestions, as shown in the following screenshot.

The text drag-apart feature

So a colleague of yours has just e-mailed you the text that they want to appear in the Prezi you're designing for them? That's great news, as it'll help you understand the flow of the presentation. What's frustrating, though, is that you'd have to copy and paste every single line or paragraph across to put it in the right place on your canvas. At least, that used to be the case before Prezi introduced the drag-apart feature in the text editor. This means you can now easily drag a selection of text anywhere on your canvas without having to rely on the copy and paste options. Let's see how we can easily change the text we spellchecked previously, as shown in the following screenshot. In order to drag your text apart, simply highlight the area you require, hold the mouse button down, and then drag the text anywhere on your canvas. Once you have separated your text, you can then edit the separate parts as you would edit any other individual object on your canvas. In this example, we can change the size of the company name and leave the other text as it is, which we couldn't do within a single textbox.

Building Prezis for colleagues

If you've kindly offered to build a Prezi for one of your colleagues, ask them to supply the text for it in Word format. You'll be able to run a spellcheck on it from there before you copy and paste it into Prezi. Any bad spellings you miss will also get highlighted on your Prezi canvas, but it's good to use both options as a safety net.

Font colors

Other than dragging text apart to make it stand out more on its own, you might want to highlight certain words so that they jump out at your audience even more. The great news is that you can now highlight individual lines of text or single words and change their color. To do so, just highlight a word by clicking and dragging your mouse across it. Then, click on the color picker at the top of the textbox to see the color menu, as shown in the following screenshot. Select any of the colors available in the palette to change the color of that piece of text. Nothing else in the textbox will be affected apart from the text you have selected. This gives you much greater freedom to use colored text in your Prezi design, and doesn't leave you restricted as in older versions of the software.
Choose the right color

To make good use of this feature, we recommend that you use a color that contrasts completely with the rest of your design. For example, if your design and corporate colors are blue, we suggest you use red or purple to highlight key words. Also, once you pick a color, stick to it throughout the presentation so that your audience knows when they are seeing a key piece of information.

Bullet points and indents

Bullets and indents make it much easier to put together your business presentations and help to give the audience some short, simple information as text in the same format they're used to seeing in other presentations. This can be done by simply selecting the main body of text and clicking on the bullet point icon at the top of the textbox. This is a really simple feature, but a useful one nonetheless. We'd obviously like to point out that too much text in any presentation is a bad thing. Keep it short and to the point. Also, remember that too many bullets can kill a presentation.

Summary

In this article, we discussed the basic mechanics of Prezi. Learning to combine these tools in creative ways will help you move from a Prezi novice to a master. Shapes can be used creatively to create content and drawings, and can be grouped together for easy movement and editing. Prezi also features basic text editing, as explained in this article.

Resources for Article:

Further resources on this subject:
Turning your PowerPoint into a Prezi [Article]
The Fastest Way to Go from an Idea to a Prezi [Article]
Using Prezi - The Online Presentation Software Tool [Article]

Controls and Widgets
Packt
10 Aug 2015
25 min read
In this article by Chip Lambert and Shreerang Patwardhan, authors of the book Mastering jQuery Mobile, we will take our Civic Center application to the next level, and in the process of doing so, we will explore different widgets. We will explore the touch events provided by the jQuery Mobile framework further and then take a look at how this framework interacts with third-party plugins. We will be covering the following widgets and topics in this article:

- Collapsible widget
- Listview widget
- Range slider widget
- Radio button widget
- Touch events
- Third-party plugins
- HammerJs
- FastClick
- Accessibility

(For more resources related to this topic, see here.)

Widgets

We already made use of widgets as part of the Civic Center application. "Which? Where? When did that happen? What did I miss?" Don't panic, as you have missed nothing at all. All the components that we use as part of the jQuery Mobile framework are widgets. The page, buttons, and toolbars are all widgets. So what do we understand about widgets from their usage so far? One thing is pretty evident: widgets are feature-rich, and they have a lot of things that are customizable and that can be tweaked as per the requirements of the design. These customizable things are pretty much the methods and events that these small plugins offer to developers. So, all in all: widgets are feature-rich, stateful plugins that have a complete lifecycle, along with methods and events. We will now explore a few widgets, as discussed before, and we will start off with the collapsible widget. A collapsible widget, more popularly known as an accordion control, is used to display and style a cluster of related content together so that it is easily accessible to the user. Let's see this collapsible widget in action. Pull up the index.html file. We will be adding the collapsible widget to the facilities page. You can jump directly to the content div of the facilities page. We will replace the simple-looking, unordered list and add the collapsible widget in its place. Add the following code in place of the <ul>...<li></li>...</ul> portion:

<div data-role="collapsibleset">
    <div data-role="collapsible">
        <h3>Banquet Halls</h3>
        <p>List of banquet halls will go here</p>
    </div>
    <div data-role="collapsible">
        <h3>Sports Arena</h3>
        <p>List of sports arenas will go here</p>
    </div>
    <div data-role="collapsible">
        <h3>Conference Rooms</h3>
        <p>List of conference rooms will come here</p>
    </div>
    <div data-role="collapsible">
        <h3>Ballrooms</h3>
        <p>List of ballrooms will come here</p>
    </div>
</div>

That was pretty simple. As you must have noticed, we are creating a group of collapsibles defined by a div with data-role="collapsibleset". Inside this div, we have multiple div elements, each with a data-role of "collapsible". These data roles instruct the framework to style the div elements as collapsibles. Let's break an individual collapsible down further. Each collapsible div has to have a heading tag (h1-h6), which acts as the title for that collapsible. This heading can be followed by any HTML structure that is required as per your application's design. In our application, we added a paragraph tag with some dummy text for now. We will soon be replacing this text with another widget: listview. Before we proceed to look at how we will be doing this, let's see what the facilities page looks like right now. Now let's take a look at another widget that we will include in our project: the listview widget. The listview widget is a very important widget from the mobile website standpoint.
The listview widget is highly customizable and can play an important role in the navigation system of your web application as well. In our application, we will include listviews within the collapsible div elements that we have just created. Each collapsible will hold the relevant list items, which can be linked to a detailed page for each item. Without further discussion, let's take a look at the following code. We have replaced the contents of the first collapsible item within the paragraph tag with the code to include the listview widget. We will break up the code and discuss the minute details later:

<div data-role="collapsible">
    <h3>Banquet Halls</h3>
    <p>
        <span>We have 3 huge banquet halls named after 3 of the most celebrated chefs from across the world.</span>
        <ul data-role="listview" data-inset="true">
            <li>
                <a href="#">Gordon Ramsay</a>
            </li>
            <li>
                <a href="#">Anthony Bourdain</a>
            </li>
            <li>
                <a href="#">Sanjeev Kapoor</a>
            </li>
        </ul>
    </p>
</div>

That was pretty simple, right? We replaced the dummy text from the paragraph tag with a span that has some details concerning what that collapsible list is about, and then we have an unordered list with data-role="listview" and a property called data-inset="true". We have seen several data-roles before, and this one is no different. This data-role attribute informs the framework to style the unordered list items as tappable buttons, while the data-inset property informs the framework to apply the inset appearance to the list items. Without this property, the list items would stretch from edge to edge on the mobile device. Try setting the data-inset property to false, or removing the property altogether, and you will see the results for yourself. Another thing worth noticing in the preceding code is that we have included an anchor tag within the li tags. This anchor tag informs the framework to add a right arrow icon on the extreme right of that list item. Again, this icon is customizable, along with its position and other styling attributes. Right now, our facilities page should appear as seen in the following image. We will now add similar listview widgets within the remaining three collapsible items. The content for the next collapsible item, titled Sports Arena, should be as follows. Once added, this collapsible item, when expanded, should look as seen in the screenshot that follows the code:

<div data-role="collapsible">
    <h3>Sports Arena</h3>
    <p>
        <span>We have 3 huge sports arenas named after 3 of the most celebrated sports personalities from across the world.</span>
        <ul data-role="listview" data-inset="true">
            <li>
                <a href="#">Sachin Tendulkar</a>
            </li>
            <li>
                <a href="#">Roger Federer</a>
            </li>
            <li>
                <a href="#">Usain Bolt</a>
            </li>
        </ul>
    </p>
</div>

The code for the listview widget that should be included in the next collapsible item, titled Conference Rooms, follows. Once added, this collapsible item, when expanded, should look as seen in the image that follows the code:

<div data-role="collapsible">
    <h3>Conference Rooms</h3>
    <p>
        <span>
            We have 3 huge conference rooms named after the 3 largest technology companies.
        </span>
        <ul data-role="listview" data-inset="true">
            <li>
                <a href="#">Google</a>
            </li>
            <li>
                <a href="#">Twitter</a>
            </li>
            <li>
                <a href="#">Facebook</a>
            </li>
        </ul>
    </p>
</div>

The final collapsible item, Ballrooms, should hold the following code to include its share of the listview items:

<div data-role="collapsible">
    <h3>Ballrooms</h3>
    <p>
        <span>
            We have 3 huge ballrooms named after 3 different dance styles from across the world.
        </span>
        <ul data-role="listview" data-inset="true">
            <li>
                <a href="#">Ballet</a>
            </li>
            <li>
                <a href="#">Kathak</a>
            </li>
            <li>
                <a href="#">Paso Doble</a>
            </li>
        </ul>
    </p>
</div>

After adding these listview widgets, our facilities page should look as seen in the following image. The facilities page now looks much better than it did earlier, and we now understand two more very important widgets available in jQuery Mobile: the collapsible widget and the listview widget. We will now explore two form widgets: the slider widget and the radio buttons widget. For this, we will be enhancing our catering page. Let's build a simple tool that will help the visitors of this site estimate the food expense based on the number of guests and the type of cuisine that they choose. Let's get started then. First, we will add the required HTML to include the slider widget and the radio buttons widget. Scroll down to the content div of the catering page, where we have the paragraph tag containing some text about the Civic Center's catering services. Add the following code after the paragraph tag:

<form>
    <label style="font-weight: bold; padding: 15px 0px;" for="slider">Number of guests</label>
    <input type="range" name="slider" id="slider" data-highlight="true" min="50" max="1000" value="50">
    <fieldset data-role="controlgroup" id="cuisine-choices">
        <legend style="font-weight: bold; padding: 15px 0px;">Choose your cuisine</legend>
        <input type="radio" name="cuisine-choice" id="cuisine-choice-cont" value="15" checked="checked" />
        <label for="cuisine-choice-cont">Continental</label>
        <input type="radio" name="cuisine-choice" id="cuisine-choice-mex" value="12" />
        <label for="cuisine-choice-mex">Mexican</label>
        <input type="radio" name="cuisine-choice" id="cuisine-choice-ind" value="14" />
        <label for="cuisine-choice-ind">Indian</label>
    </fieldset>
    <p>
        The approximate cost will be: <span style="font-weight: bold;" id="totalCost"></span>
    </p>
</form>

That is not much code, but we are adding and initializing two new form widgets here. Let's take a look at the code in detail:

<label style="font-weight: bold; padding: 15px 0px;" for="slider">Number of guests</label>
<input type="range" name="slider" id="slider" data-highlight="true" min="50" max="1000" value="50">

We are initializing our first form widget here: the slider widget. The slider widget is an input element of the type range, which accepts a minimum value, a maximum value, and a default value. We will be using this slider to accept the number of guests. Since the Civic Center can cater to a maximum of 1,000 people, we will set the maximum limit to 1,000, and since we expect at least 50 guests, we set a minimum value of 50.
Since the minimum number of guests that we cater for is 50, we set the input's default value to 50. We also set the data-highlight attribute value to true, which informs the framework that the selected area on the slider should be highlighted. Next comes the group of radio buttons. The most important attribute to be considered here is the data-role="controlgroup" set on the fieldset element. Adding this data-role combines the radio buttons into one single group, giving the user a visual indication that one radio button out of the whole lot needs to be selected. The value assigned to each of the radio inputs indicates the cost per person for that particular cuisine. This value will help us calculate the final dollar amount for the selected number of guests and the type of cuisine. Whenever you are using the form widgets, make sure you have the form elements in the hierarchy required by the jQuery Mobile framework. When the elements are in the required hierarchy, the framework can apply the required styles. At the end of the previous code snippet, we have a paragraph tag where we will populate the approximate cost of catering for the selected number of guests and the type of cuisine. The catering page should now look as seen in the following image. Right now, we only have the HTML widgets in place. When you drag the slider or select different radio buttons, you will only see the UI interactions of these widgets and the UI treatments that the framework applies to them. The total cost, however, will not be populated yet. We will need to write some JavaScript logic to determine this value, and we will take a look at this in a minute. Before moving to the JavaScript part, make sure you have all the code that is needed.
In jQuery Mobile, pages are requested and injected into the same DOM as the user navigates from one page to another and so the DOM ready event is as useful as it executes only for the first page. Now we need an event that should execute when every page loads, and $(document).pagecontainershow is the one. The pagecontainershow element is triggered on the toPage after the transition animation has completed. The pagecontainershow element is triggered on the pagecontainer element and not on the actual page. In the function, we initialize the guests and the cost variables to 50 and 35 respectively, as the minimum number of guests we can have is 50 and the "Continental" cuisine is selected by default, which has a value of 35. We will be calculating the estimated cost when the user changes the number of guests or selects a different radio button. This brings us to the next part of our code. We need to get the value of the number of guests as soon as the user stops sliding the slider. jQuery Mobile provides us with the slidestop event for this very purpose. As soon as the user stops sliding, we get the value of the slider and then call the costCal function, which returns a value that is the number of guests multiplied by the cost of the selected cuisine per person. We then display this value in the paragraph at the bottom for the user to get an estimated cost. We will discuss some more about the touch events that are available as part of the jQuery Mobile framework in the next section. When the user selects a different radio button, we retrieve the value of the selected radio button, call the costCal function again, and update the value displayed in the paragraph at the bottom of our page. If you have the code correct and your functions are all working fine, you should see something similar to the following image: Input with touch We will take a look at a couple of touch events, which are tap and taphold. The tap event is triggered after a quick touch; whereas the taphold event is triggered after a sustained, long press touch. The jQuery Mobile tap event is the gesture equivalent of the standard click event that is triggered on the release of the touch gesture. The following snippet of code should help you incorporate the tap event when you need to use it in your application: $(".selector").on("tap", function(){    console.log("tap event is triggered"); }); The jQuery Mobile taphold event triggers after a sustained, complete touch event, which is more commonly known as the long press event. The taphold event fires when the user taps and holds for a minimum of 750 milliseconds. You can also change the default value, but we will come to that in a minute. First, let's see how the taphold event is used: $(".selector").on("taphold", function(){    console.log("taphold event is triggered"); }); Now to change the default value for the long press event, we need to set the value for the following piece of code: $.event.special.tap.tapholdThreshold Working with plugins A number of times, we will come across scenarios where the capabilities of the framework are just not sufficient for all the requirements of your project. In such scenarios, we have to make use of third-party plugins in our project. We will be looking at two very interesting plugins in the course of this article, but before that, you need to understand what jQuery plugins exactly are. A jQuery plugin is simply a new method that has been used to extend jQuery's prototype object. 
When we include a jQuery plugin as part of our code, its new methods become available for use within your application. When selecting jQuery plugins for your jQuery Mobile web application, make sure that the plugin is optimized for mobile devices and incorporates touch events as well, based on your requirements. The first plugin that we are going to look at is called FastClick, developed by FT Labs. This is an open source plugin and so can be used as part of your application. FastClick is a simple, easy-to-use library designed to eliminate the 300 ms delay between a physical tap and the firing of the click event on mobile browsers. Wait! What are we talking about? What is this 300 ms delay between tap and click? What exactly are we discussing? Sure. We understand the confusion. Let's explain this 300 ms delay issue. Click events have a 300 ms delay on touch devices, which makes web applications feel laggy on a mobile device and doesn't give users a native-like feel. If you go to a site that isn't mobile-optimized, it starts zoomed out. You then have to either pinch and zoom, or double tap some content so that it becomes readable. The double tap is a performance killer, because with every tap we have to wait to see whether it might be a double tap, and this wait is 300 ms. Here is how it plays out:

1. touchstart
2. touchend
3. Wait 300 ms in case of another tap
4. click

This pause of 300 ms applies to click events in JavaScript, and also to other click-based interactions such as links and form controls. Most mobile web browsers out there have this 300 ms delay on click events, but a few modern browsers, such as Chrome and Firefox for Android and iOS, are now removing it. However, if you are supporting older Android and iOS versions, with older mobile browsers, you might want to consider including the FastClick plugin in your application, which helps resolve this problem. Let's take a look at how we can use this plugin in any web application. First, you need to download the plugin files, or clone their GitHub repository from https://github.com/ftlabs/fastclick. Once you have done that, include a reference to the plugin's JavaScript file in your application:

<script type="application/javascript" src="path/fastclick.js"></script>

Make sure that the script is loaded prior to instantiating FastClick on any element of the page. FastClick recommends that you instantiate the plugin on the body element itself. We can do this using the following piece of code:

$(function() {
    FastClick.attach(document.body);
});

That is it! Your application is now free of the 300 ms click delay issue and will work as smoothly as a native application. We have just provided you with an introduction to the FastClick plugin. There are several more features that this plugin provides. Make sure you visit their website, https://github.com/ftlabs/fastclick, for more details on what the plugin has to offer. Another important plugin that we will look at is HammerJs. HammerJs, again, is an open source library that helps recognize gestures made by touch, mouse, and pointerEvents. Now, you would say that the jQuery Mobile framework already takes care of this, so why do we need a third-party plugin? True, jQuery Mobile supports a variety of touch events, such as tap, tap and hold, and swipe, as well as the regular mouse events, but what if in our application we want to make use of touch gestures such as pan, pinch, rotate, and so on, which are not supported by jQuery Mobile by default?
This is where HammerJs comes into the picture and plays nicely along with jQuery Mobile. Including HammerJs in your web application code is extremely simple and straightforward, like the FastClick plugin. You need to download the plugin files and then add a reference to the plugin's JavaScript file:

<script type="application/javascript" src="path/hammer.js"></script>

Once you have included the plugin, you need to create a new instance of the Hammer object and then start using the plugin for all the touch gestures you need to support:

var hammerPan = new Hammer(element_name, options);
hammerPan.on('pan', function(){
    console.log("Inside Pan event");
});

By default, Hammer adds a set of events: tap, double tap, swipe, pan, press, pinch, and rotate. The pinch and rotate recognizers are disabled by default, but can be turned on as and when required. HammerJs offers a lot of features that you might want to explore. Make sure you visit their website, http://hammerjs.github.io/, to understand the different features the library has to offer and how you can integrate this plugin within your existing or new jQuery Mobile projects.

Accessibility

Most of us today cannot imagine our lives without the Internet and our smartphones. Some will even argue that the Internet is the single largest revolutionary invention of all time, one that has touched numerous lives across the globe. At the click of a mouse or the touch of a fingertip, the world is now at your disposal, provided you can use the mouse, see the screen, and hear the audio. Impairments can make it difficult for people to access the Internet. This makes us wonder how people with disabilities use the Internet, about their frustration in doing so, and about the efforts that must be made to make websites accessible to all. Though estimates vary, most studies have revealed that about 15% of the world's population have some kind of disability. Not all of these people would have an issue with accessing the web, but let's assume 5% of them would face a problem in doing so. This 5% is still a considerable number of users, who cannot be ignored by businesses on the web, and efforts must be made in the right direction to make the web accessible to these users with disabilities. The jQuery Mobile framework comes with built-in support for accessibility; it is built with accessibility and universal access in mind. Any application that is built using jQuery Mobile is accessible via a screen reader as well. When you make use of the different jQuery Mobile widgets in your application, unknowingly, you are also adding support for web accessibility into your application. The jQuery Mobile framework adds all the necessary aria attributes to the elements in the DOM. Let's take a look at how the DOM looks for our facilities page. Look at the highlighted Events button in the top right corner and its corresponding HTML (also highlighted) in the developer tools. You will notice that there are a few attributes added to the anchor tag that start with aria-. We did not add any of these aria- attributes when we wrote the code for the Events button; the jQuery Mobile library takes care of these things for you. The accessibility implementation is an ongoing process, and the awesome developers of jQuery Mobile are working towards improving the support with every new release. We spoke about aria- attributes, but what do they represent? WAI-ARIA stands for Web Accessibility Initiative - Accessible Rich Internet Applications.
This is a technical specification published by the World Wide Web Consortium (W3C) that specifies how to increase the accessibility of web pages. ARIA specifies the roles, properties, and states of a web page that make it accessible to all users. Accessibility is an extremely vast topic, and covering every detail of it here is not possible. However, there is excellent material available on the Internet on this subject, and we encourage you to read and understand it. Try to implement accessibility in your current or next project, even if it is not based on jQuery Mobile. Web accessibility is extremely important and should be considered especially when you are building web applications that will be consumed by a huge consumer base, such as e-commerce websites.

Summary

In this article, we made use of some of the available widgets from the jQuery Mobile framework, and we built some interactivity into our existing Civic Center application. The widgets that we used included the range slider, the collapsible widget, the listview widget, and the radio button widget. We evaluated and looked at how to use two third-party plugins, FastClick and HammerJs, and we concluded the article by taking a look at the concept of web accessibility.

Resources for Article:

Further resources on this subject:
Creating Mobile Dashboards [article]
Speeding up Gradle builds for Android [article]
Saying Hello to Unity and Android [article]

NLTK for hackers
Packt
07 Aug 2015
9 min read
In this article written by Nitin Hardeniya, author of the book NLTK Essentials, we will learn that "Life is short, we need Python" is the mantra I follow and truly believe in. As fresh graduates, we learned and worked mostly with C/C++/Java. While these languages have amazing features, Python has a charm of its own. The day I started using Python, I loved it. I really did. The big coincidence here is that I finally ended up working with Python during my initial projects on the job. I started to love the kind of data structures, libraries, and ecosystem Python has for beginners as well as for expert programmers.

(For more resources related to this topic, see here.)

Python as a language has advanced very fast and spread widely. If you are a machine learning or natural language processing enthusiast, then Python is 'the' go-to language these days. Python has some amazing ways of dealing with strings. It has a very easy and elegant coding style, and most importantly, a long list of open libraries. I can go on and on about Python and my love for it. But here I want to talk very specifically about NLTK (Natural Language Toolkit), one of the most popular Python libraries for natural language processing.

NLTK is simply awesome, and in my opinion, it's the best way to learn and implement some of the most complex NLP concepts. NLTK has a variety of generic text preprocessing tools, such as tokenization, stop word removal, and stemming, and at the same time has some very NLP-specific tools, such as part of speech tagging, chunking, named entity recognition, and dependency parsing. NLTK provides some of the easiest solutions to all the above stages of NLP, and that's why it is the most preferred library for any text processing or text mining application. NLTK not only provides some pretrained models that can be applied directly to your dataset, it also provides ways to customize and build your own taggers, tokenizers, and so on. NLTK is a big library that has many tools available for an NLP developer. I have provided a cheat-sheet of some of the most common steps and their solutions using NLTK. In our book, NLTK Essentials, I have tried to give you enough information to deal with all these processing steps using NLTK.

To show you the power of NLTK, let's try to develop a very easy application that finds the topics in unstructured text and presents them as a word cloud.

Word Cloud

Instead of going further into the theoretical aspects of natural language processing, let's start with a quick dive into NLTK. I am going to start with some basic example use cases of NLTK. There is a good chance that you have already done something similar. First, I will give a typical Python programmer approach and then move on to NLTK for a much more efficient, robust, and clean solution.

We will start analyzing some example text content:

>>> import urllib2
>>> # urllib2 is used to download the html content of the web link
>>> response = urllib2.urlopen('http://python.org/')
>>> # You can read the entire content of a file using the read() method
>>> html = response.read()
>>> print len(html)
47020

For the current example, I have taken the content from Python's home page: https://www.python.org/. We don't have any clue about the kind of topics that are discussed in this URL, so let's say that we want to start an exploratory data analysis (EDA). Typically in a text domain, EDA can have many meanings, but we will go with a simple case of finding which kinds of terms dominate the document. What are the topics? How frequent are they?
The process will involve some level of preprocessing. We will try to do this in a pure Python way first, and then we will do it using NLTK.

Let's start with cleaning the html tags. One way to do this is to select just the tokens, including numbers and characters. Anybody who has worked with regular expressions should be able to convert an html string into a list of tokens:

>>> # split the string on whitespace
>>> tokens = [tok for tok in html.split()]
>>> print "Total no of tokens :" + str(len(tokens))
>>> # first 100 tokens
>>> print tokens[0:100]
Total no of tokens :2860
['<!doctype', 'html>', '<!--[if', 'lt', 'IE', '7]>', '<html', 'class="no-js', 'ie6', 'lt-ie7', 'lt-ie8', 'lt-ie9">', '<![endif]-->', '<!--[if', 'IE', '7]>', '<html', 'class="no-js', 'ie7', 'lt-ie8', 'lt-ie9">', '<![endif]-->', ''type="text/css"', 'media="not', 'print,', 'braille,' ...]

As you can see, there is an excess of html tags and other unwanted characters when we use the preceding method. A cleaner version of the same task will look something like this:

>>> import re
>>> # using the split function https://docs.python.org/2/library/re.html
>>> tokens = re.split('\W+', html)
>>> print len(tokens)
>>> print tokens[0:100]
5787
['', 'doctype', 'html', 'if', 'lt', 'IE', '7', 'html', 'class', 'no', 'js', 'ie6', 'lt', 'ie7', 'lt', 'ie8', 'lt', 'ie9', 'endif', 'if', 'IE', '7', 'html', 'class', 'no', 'js', 'ie7', 'lt', 'ie8', 'lt', 'ie9', 'endif', 'if', 'IE', '8', 'msapplication', 'tooltip', 'content', 'The', 'official', 'home', 'of', 'the', 'Python', 'Programming', 'Language', 'meta', 'name', 'apple' ...]

This looks much cleaner now. But still, you can do more; I leave it to you to try to remove as much noise as you can. You can use word length as a criterion and remove words that have a length of one; this will remove elements such as 7, 8, and so on, which are just noise in this case.

Now let's go to NLTK for the same task. There is a function called clean_html() that can do all the work we were looking for:

>>> import nltk
>>> # http://www.nltk.org/api/nltk.html#nltk.util.clean_html
>>> clean = nltk.clean_html(html)
>>> # clean will have the entire string with all the html noise removed
>>> tokens = [tok for tok in clean.split()]
>>> print tokens[:100]
['Welcome', 'to', 'Python.org', 'Skip', 'to', 'content', '&#9660;', 'Close', 'Python', 'PSF', 'Docs', 'PyPI', 'Jobs', 'Community', '&#9650;', 'The', 'Python', 'Network', '&equiv;', 'Menu', 'Arts', 'Business' ...]

Cool, right? This definitely is much cleaner and easier to do.

No analysis in any EDA can start without a frequency distribution. Let's try to get one. First, let's do it the Python way, then I will tell you the NLTK recipe:

>>> import operator
>>> freq_dis = {}
>>> for tok in tokens:
>>>     if tok in freq_dis:
>>>         freq_dis[tok] += 1
>>>     else:
>>>         freq_dis[tok] = 1
>>> # We want to sort this dictionary on values ( freq in this case )
>>> sorted_freq_dist = sorted(freq_dis.items(), key=operator.itemgetter(1), reverse=True)
>>> print sorted_freq_dist[:25]
[('Python', 55), ('>>>', 23), ('and', 21), ('to', 18), (',', 18), ('the', 14), ('of', 13), ('for', 12), ('a', 11), ('Events', 11), ('News', 11), ('is', 10), ('2014-', 10), ('More', 9), ('#', 9), ('3', 9), ('=', 8), ('in', 8), ('with', 8), ('Community', 7), ('The', 7), ('Docs', 6), ('Software', 6), (':', 6), ('3:', 5), ('that', 5), ('sum', 5)]

Naturally, as this is Python's home page, Python and the >>> interpreter prompt are the most common terms, which also gives a sense of the website. A better and more efficient approach is to use NLTK's FreqDist() function.
For this, we will take a look at the same code we developed before:

>>> import nltk
>>> Freq_dist_nltk = nltk.FreqDist(tokens)
>>> print Freq_dist_nltk
>>> for k, v in Freq_dist_nltk.items():
>>>     print str(k) + ':' + str(v)
<FreqDist: 'Python': 55, '>>>': 23, 'and': 21, ',': 18, 'to': 18, 'the': 14, 'of': 13, 'for': 12, 'Events': 11, 'News': 11, ...>
Python:55
>>>:23
and:21
,:18
to:18
the:14
of:13
for:12
Events:11
News:11

Let's now do some more funky things. Let's plot this:

>>> Freq_dist_nltk.plot(50, cumulative=False)
>>> # below is the plot for the frequency distributions

We can see that the frequency drops off quickly and the curve goes into a long tail. Still, there is some noise; there are words such as the, of, for, and =. These are useless words, and there is a terminology for such words: stop words, such as the, a, and an. Articles and pronouns are generally present in most documents; hence, they are not discriminative enough to be informative. In most NLP and information retrieval tasks, people generally remove stop words. Let's go back again to our running example:

>>> stopwords = [word.strip().lower() for word in open("PATH/english.stop.txt")]
>>> clean_tokens = [tok for tok in tokens if len(tok.lower()) > 1 and (tok.lower() not in stopwords)]
>>> Freq_dist_nltk = nltk.FreqDist(clean_tokens)
>>> Freq_dist_nltk.plot(50, cumulative=False)

This looks much cleaner now! After finishing this much, you should be able to get something like the following using a word cloud; a sketch of one way to generate it programmatically is shown below. Please go to http://www.wordle.net/advanced for more word clouds.
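As a quick illustration, here is a minimal sketch of how the cleaned tokens could be turned into a word cloud image, as an alternative to the Wordle site. It assumes the third-party wordcloud and matplotlib packages are installed; they are not part of NLTK:

>>> # assumption: pip install wordcloud matplotlib
>>> from wordcloud import WordCloud
>>> import matplotlib.pyplot as plt
>>> # join the cleaned tokens back into one string for the generator
>>> wc = WordCloud(width=800, height=600).generate(' '.join(clean_tokens))
>>> plt.imshow(wc)
>>> plt.axis("off")
>>> plt.show()

The more frequent a token is in clean_tokens, the larger it is drawn in the resulting image.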
Summary

To summarize, this article was intended to give you a brief introduction to natural language processing. The book does assume some background in NLP and programming in Python, but we have tried to give a very quick head start to Python and NLP.

Resources for Article:
Further resources on this subject:
Hadoop Monitoring and its aspects [Article]
Big Data Analysis (R and Hadoop) [Article]
SciPy for Signal Processing [Article]

Bootstrap in a Box

Packt
07 Aug 2015
6 min read
In this article written by Snig Bhaumik, author of the book Bootstrap Essentials, we explain the concept of Bootstrap, responsive design patterns, navigation patterns, and the different components that are included in Bootstrap.

(For more resources related to this topic, see here.)

Responsive design patterns

Here are a few established and well-adopted patterns in responsive web design:

Fluid design: This is the most popular and easiest option for responsive design. In this pattern, a multi-column layout on a larger screen renders as a single column on a smaller screen, in exactly the same sequence.

Column drop: In this pattern too, the page gets rendered in a single column; however, the order of the blocks gets altered. That means that if a content block is visible first on a larger screen, it might be rendered second or third on a smaller screen.

Layout shifter: This is a complex but powerful pattern where the whole layout of the screen contents gets altered in the case of a smaller screen. This means that you need to develop different page layouts for large, medium, and small screens.

Navigation patterns

You should take care of the following things while designing a responsive web page. These are essentially the major navigational elements that you would concentrate on while developing a mobile-friendly and responsive website:

Menu bar
Navigation/app bar
Footer
Main container shell
Images
Tabs
HTML forms and elements
Alerts and popups
Embedded audios and videos, and so on

You can see that there are lots of elements and aspects you need to take care of to create a fully responsive design. While all of these can be achieved using various features and technologies in CSS3, it is of course not an easy problem to solve without a framework that could help you do so. Precisely, you need a frontend framework that takes care of all the pains of technical responsive design implementation and frees you up to focus on your brand and application design.

Now, we introduce Bootstrap, which will help you design and develop a responsive web design in a much more optimized and efficient way.

Introducing Bootstrap

Simply put, Bootstrap is a frontend framework for faster and easier web development in the new standard of the mobile-first philosophy. It uses HTML, CSS, and JavaScript. In August 2010, Twitter released Bootstrap as open source. There are quite a few similar frontend frameworks available in the industry, but Bootstrap is arguably the most popular framework of the lot. This is evident from the fact that Bootstrap has been the most starred project on GitHub since 2012.

By now, you should be in a position to fathom why and where we need to use Bootstrap for web development; however, just to recap, here are the points in short:

The mobile-first approach
A responsive design
Automatic browser support and handling
Easy to adapt and get going

What Bootstrap includes

The following diagram demonstrates the overall structure of Bootstrap:

CSS

Bootstrap comes with fundamental HTML elements styled, global CSS classes, classes for advanced grid patterns, and lots of enhanced and extended CSS classes.
For example, this is how the html global element is configured in the Bootstrap CSS:

html {
 font-family: sans-serif;
 -webkit-text-size-adjust: 100%;
 -ms-text-size-adjust: 100%;
}

This is how a standard hr HTML element is styled:

hr {
 height: 0;
 -webkit-box-sizing: content-box;
 -moz-box-sizing: content-box;
 box-sizing: content-box;
}

Here is an example of the new classes introduced in Bootstrap:

.glyphicon {
 position: relative;
 top: 1px;
 display: inline-block;
 font-family: 'Glyphicons Halflings';
 font-style: normal;
 font-weight: normal;
 line-height: 1;
 -webkit-font-smoothing: antialiased;
 -moz-osx-font-smoothing: grayscale;
}

Components

Bootstrap offers a rich set of reusable and built-in components, such as breadcrumbs, progress bars, alerts, and navigation bars. The components are technically custom CSS classes specially crafted for a specific purpose. For example, if you want to create a breadcrumb in your page, you simply add a DIV tag to your HTML using Bootstrap's breadcrumb class:

<ol class="breadcrumb">
 <li><a href="#">Home</a></li>
 <li><a href="#">The Store</a></li>
 <li class="active">Offer Zone</li>
</ol>

In the background (stylesheet), this Bootstrap class is used to create your breadcrumb:

.breadcrumb {
 padding: 8px 15px;
 margin-bottom: 20px;
 list-style: none;
 background-color: #f5f5f5;
 border-radius: 4px;
}
.breadcrumb > li {
 display: inline-block;
}
.breadcrumb > li + li:before {
 padding: 0 5px;
 color: #ccc;
 content: "/\00a0";
}
.breadcrumb > .active {
 color: #777;
}

Please note that these code blocks are simply snippets.

JavaScript

The Bootstrap framework comes with a number of ready-to-use JavaScript plugins. Thus, when you need to create popup windows, tabs, carousels, tooltips, and so on, you just use one of the prepackaged JavaScript plugins. For example, if you need to create a tab control in your page, you use this:

<div role="tabpanel">
 <ul class="nav nav-tabs" role="tablist">
  <li role="presentation" class="active"><a href="#recent" aria-controls="recent" role="tab" data-toggle="tab">Recent Orders</a></li>
  <li role="presentation"><a href="#all" aria-controls="all" role="tab" data-toggle="tab">All Orders</a></li>
  <li role="presentation"><a href="#redeem" aria-controls="redeem" role="tab" data-toggle="tab">Redemptions</a></li>
 </ul>

 <div class="tab-content">
  <div role="tabpanel" class="tab-pane active" id="recent">Recent Orders</div>
  <div role="tabpanel" class="tab-pane" id="all">All Orders</div>
  <div role="tabpanel" class="tab-pane" id="redeem">Redemption History</div>
 </div>
</div>

To activate (open) a tab, you write this JavaScript code:

$('#profileTab li:eq(1) a').tab('show');

As you can guess from the syntax of this JavaScript line, the Bootstrap JS plugins are built on top of jQuery. Thus, the JS code you write for Bootstrap is also based on jQuery.

Customization

Even though Bootstrap offers most (if not all) standard features and functionalities for responsive web design, there might be several cases when you would want to customize and extend the framework. One of the most basic requirements for customization would be to deploy your own branding and color combinations (themes) instead of the Bootstrap default ones. There can be several such use cases where you would want to change the default behavior of the framework. Bootstrap offers very easy and stable ways to customize the platform; a small sketch follows below.
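For instance, a simple way to re-theme a component is to load your own stylesheet after the Bootstrap one and override only the rules you want to change. The file name and the colors here are assumptions for illustration; the .breadcrumb selectors come from the Bootstrap CSS shown earlier:

<link rel="stylesheet" href="css/bootstrap.min.css">
<!-- custom overrides must come after bootstrap.min.css -->
<link rel="stylesheet" href="css/my-theme.css">

/* css/my-theme.css: brand the breadcrumb without touching Bootstrap's files */
.breadcrumb {
 background-color: #2c3e50;
 border-radius: 0;
}
.breadcrumb > li + li:before {
 color: #ecf0f1;
}

Because the rules loaded last win at equal specificity, Bootstrap's own files stay untouched and can be safely upgraded later.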
When you use the Bootstrap CSS, all the global and fundamental HTML elements automatically become responsive and will behave properly on whichever client device the web page is browsed. The built-in components are also designed to be responsive and, as the developer, you shouldn't be worried about how these advanced components will behave on different devices and client agents; a short illustration follows below.
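As a quick sketch of that responsiveness, the grid classes below (standard Bootstrap 3 classes; the content strings are placeholders) render two columns side by side on medium and larger screens and automatically stack them into a single column on phones, with no extra code from you:

<div class="container">
 <div class="row">
  <div class="col-md-6 col-xs-12">Facilities</div>
  <div class="col-md-6 col-xs-12">Events</div>
 </div>
</div>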
Summary

In this article, we discussed the basics of Bootstrap, along with a brief explanation of the design patterns and the navigation patterns.

Resources for Article:
Further resources on this subject:
Deep Customization of Bootstrap [article]
The Bootstrap grid system [article]
Creating a Responsive Magento Theme with Bootstrap 3 [article]


The Camera API

Packt
07 Aug 2015
4 min read
In this article by Purusothaman Ramanujam, the author of PhoneGap Beginner's Guide Third Edition, we will look at the Camera API. The Camera API provides access to the device's camera application using the Camera plugin identified by the cordova-plugin-camera key. With this plugin installed, an app can take a picture or gain access to a media file stored in the photo library and albums that the user created on the device. The Camera API exposes the following two methods defined in the navigator.camera object:

getPicture: This opens the default camera application or allows the user to browse the media library, depending on the options specified in the configuration object that the method accepts as an argument

cleanup: This cleans up any intermediate photo file available in the temporary storage location (supported only on iOS)

(For more resources related to this topic, see here.)

As arguments, the getPicture method accepts a success handler, a failure handler, and, optionally, an object used to specify several camera options through its properties, as follows:

quality: This is a number between 0 and 100 used to specify the quality of the saved image.

destinationType: This is a number used to define the format of the value returned in the success handler. The possible values are stored in the following Camera.DestinationType pseudo constants:

DATA_URL (0): This indicates that the getPicture method will return the image as a Base64-encoded string
FILE_URI (1): This indicates that the method will return the file URI
NATIVE_URI (2): This indicates that the method will return a platform-dependent file URI (for example, assets-library:// on iOS or content:// on Android)

sourceType: This is a number used to specify where the getPicture method can access an image. The possible values are stored in the Camera.PictureSourceType pseudo constants:

PHOTOLIBRARY (0): This indicates that the method will get an image from the device's library
CAMERA (1): This indicates that the method will grab a picture from the camera
SAVEDPHOTOALBUM (2): This indicates that the user will be prompted to select an album before picking an image

allowEdit: This is a Boolean value (the value is true by default) used to indicate that the user can make small edits to the image before confirming the selection; it works only on iOS.

encodingType: This is a number used to specify the encoding of the returned file. The possible values are stored in the Camera.EncodingType pseudo constants: JPEG (0) and PNG (1).

targetWidth and targetHeight: These are the width and height, in pixels, to which you want the captured image to be scaled; it's possible to specify only one of the two options. When both are specified, the image will be scaled to the value that results in the smallest aspect ratio (the aspect ratio of an image describes the proportional relationship between its width and height).

mediaType: This is a number used to specify what kind of media files have to be returned when the getPicture method is called using the Camera.PictureSourceType.PHOTOLIBRARY or Camera.PictureSourceType.SAVEDPHOTOALBUM pseudo constants as sourceType. The possible values are stored in the Camera.MediaType object as pseudo constants and are PICTURE (0), VIDEO (1), and ALLMEDIA (2).

correctOrientation: This is a Boolean value that forces the device camera to correct the device orientation during the capture.

cameraDirection: This is a number used to specify which device camera has to be used during the capture. The values are stored in the Camera.Direction object as pseudo constants and are BACK (0) and FRONT (1).

popoverOptions: This is an object supported on iOS to specify the anchor element location and arrow direction of the popover used on iPad when selecting images from the library or an album.

saveToPhotoAlbum: This is a Boolean value (the value is false by default) used in order to save the captured image in the device's default photo album.

The success handler receives an argument that contains the URI to the file, or the data stored in the file as a Base64-encoded string, depending on the value stored in the destinationType property of the options object. The failure handler receives a string containing the device's native code error message as an argument.

Similarly, the cleanup method accepts a success handler and a failure handler. The only difference between the two is that the success handler doesn't receive any argument. The cleanup method is supported only on iOS and can be used when the sourceType property value is Camera.PictureSourceType.CAMERA and the destinationType property value is Camera.DestinationType.FILE_URI.
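Putting these options together, here is a minimal sketch of a getPicture call. The handler bodies and the option values are assumptions chosen for illustration; the method, the pseudo constants, and the option names are the ones described above:

function onSuccess(imageURI) {
    // with FILE_URI, the argument is a path that can feed an <img> element
    console.log('Picture saved at: ' + imageURI);
}

function onFail(message) {
    console.log('Camera failed: ' + message);
}

navigator.camera.getPicture(onSuccess, onFail, {
    quality: 75,
    destinationType: Camera.DestinationType.FILE_URI,
    sourceType: Camera.PictureSourceType.CAMERA,
    encodingType: Camera.EncodingType.JPEG,
    targetWidth: 640,
    targetHeight: 480,
    correctOrientation: true,
    saveToPhotoAlbum: false
});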
Summary

In this article, we looked at the various properties available with the Camera API.

Resources for Article:
Further resources on this subject:
Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article]
Using Location Data with PhoneGap [article]
iPhone JavaScript: Installing Frameworks [article]

Storage Ergonomics

Packt
07 Aug 2015
19 min read
In this article by Saurabh Grover, author of the book Designing Hyper-V Solutions, we will be discussing the last of the basics to get you equipped to create and manage a simple Hyper-V structure. No server environment, physical or virtual, is complete without a clear consideration of, and consensus over, the underlying storage.

In this article, you will learn about the details of virtual storage, how to differentiate one type from another, and how to convert one to the other and vice versa. We will also see how Windows Server 2012 R2 removes dependencies on raw device mappings by way of pass-through or iSCSI LUNs, which were previously required for guest clustering. VHDX can now be shared and delivers better results than pass-through disks. There are more merits to VHDX than this, as it allows you to extend its size even while the virtual machine is alive.

Previously, Windows Server 2012 added a very interesting facet for storage virtualization in Hyper-V when it introduced the virtual SAN, which adds a virtual host bus adapter (HBA) capability to a virtual machine. This allows a VM to directly view the fibre channel SAN, which in turn allows FC LUN accessibility to VMs and provides you with one more alternative for shared storage for guest clustering.

Windows Server 2012 also introduced the ability to utilize the SMI-S capability, which was initially tested on System Center VMM 2012. Windows 2012 R2 carries the torch forward, with the addition of new capabilities. We will discuss this feature briefly in this article.

In this article, you will cover the following:

Two types of virtual disks, namely VHD and VHDX
Merits of using VHDX from Windows 2012 R2 onwards
Virtual SAN storage
Implementing guest clustering using shared VHDX
Getting an insight into SMI-S

(For more resources related to this topic, see here.)

Virtual storage

A virtual machine is a replica of a physical machine in all rights with respect to its building components: regardless of the fact that it is emulated, it resembles, and delivers the same performance as, a physical machine. Every computer needs storage to load the OS and applications from, and this condition applies to virtual machines as well. If a VM is serving as an independent server for a role such as a domain controller or file server, where the server needs to maintain additional storage apart from the OS, that additional storage can be extended for domain user access without any performance degradation.

Virtual machines can benefit from multiple forms of storage, namely VHD/VHDX, which are file-based storage; iSCSI LUNs; pass-through LUNs, which are raw device mappings; and, of late, virtual-fibre-channel-assigned LUNs. There have been enhancements to each of these, and all of these options have a straightforward implementation procedure. However, before you make a selection, you should identify the use case according to your design strategy and planned expenditure. In the following section, we will look at the storage choices more closely.

VHD and VHDX

VHD is the old flag bearer for Microsoft virtualization, ever since the days of Virtual PC and Virtual Server. The same was enhanced and employed in early Hyper-V releases. However, as a file-based storage that gets mounted as normal storage for a virtual machine, VHD had its limitations. VHDX, a new feature addition to Windows Server 2012, was built upon the limitations of its predecessor and provides greater storage capacity, support for large sector disks, and better protection against corruption.
In the current release of Windows Server 2012 R2, VHDX has been bundled with more ammo. VHDX packed a volley of feature enhancements when it was initially launched, and with Windows Server 2012 R2, Microsoft only made it better. If we compare the older, friendlier version of VHD with VHDX, we can draw the following inferences:

Size factor: VHD had an upper size limit of 2 TB, while VHDX gives you a humongous maximum capacity of 64 TB.

Large disk support: With the storage industry progressing towards 4 KB sector disks from the 512 bytes sector, there are two offerings from the disk alignment perspective for applications that may still depend on the older sector format: native 4 KB disks and 512e (or 512 byte emulation) disks. The operating system, depending on whether it supports native 4 KB disks or not, will either write 4 KB chunks of data or inject 512 bytes of data into a 4 KB sector. The process of injecting 512 bytes into a 4 KB sector is called RMW, or Read-Modify-Write. VHDs are generically supported on 512e disks. Windows Server 2012 and R2 both support native 4 KB disks. However, the VHD driver has a limitation: it cannot open VHD files on physical 4 KB disks. This limitation is worked around by making the VHD 4 KB aligned and RMW ready, but if you are migrating from an older Hyper-V platform, you will need to convert it accordingly. VHDX, on the other hand, is the "superkid". It can be used on all disk forms, namely 512, 512e, and native 4 KB disks as well, without any RMW dependency.

Data corruption safety: In the event of power outages or failures, the possibility of data corruption is reduced with VHDX. Metadata inside the VHDX is updated via a logging process that ensures that the allocations inside the VHDX are committed successfully.

Offloaded data transfers (ODX): With Windows Server 2012 Hyper-V supporting this feature, data transfer and the moving and sizing of virtual disks can be achieved at the drop of a hat, without host server intervention. The basic prerequisite for utilizing this feature is to host the virtual machines on ODX-capable hardware. Thereafter, Windows Server 2012 self-detects and enables the feature. Another important clause is that the virtual disks (VHDX) should be attached to the SCSI controller, not the IDE one.

TRIM/UNMAP: Termed by Microsoft in its documentation as efficiency in representing data, this feature works in tandem with thin provisioning. It adds the ability to allow the underlying storage to reclaim space and keep the disk optimally small.

Shared VHDX: This is the most interesting feature in the collection released with Windows Server 2012 R2. It made guest clustering (failover clustering in virtual machines) in Hyper-V a lot simpler. With Windows Server 2012, you could set up a guest cluster using a virtual fibre channel or an iSCSI LUN. However, the downside was that the LUN was exposed to the user of the virtual machine. Shared VHDX proves to be the ideal shared storage. It gives you the benefit of storage abstraction, flexibility, and faster deployment of guest clusters, and it can be stored on an SMB share or a cluster-shared volume (CSV).

Now that we know the merits of using VHDX over VHD, it is important to realize that either of the formats can be converted into the other (a sketch of this follows below), and that both can be used under the various types of virtual disks, allowing users to decide on a trade-off between performance and space utilization.
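As a quick illustration, the conversion between the two formats is a one-liner in PowerShell. The paths here are assumptions; Convert-VHD infers the target format from the file extension and is part of the Hyper-V module:

# Convert a legacy VHD to the newer VHDX format (the disk must not be attached to a running VM)
Convert-VHD -Path "E:\Hyper-V\Virtual Hard Disks\legacy.vhd" -DestinationPath "E:\Hyper-V\Virtual Hard Disks\legacy.vhdx"

# And back again, if an older host ever needs the file
Convert-VHD -Path "E:\Hyper-V\Virtual Hard Disks\legacy.vhdx" -DestinationPath "E:\Hyper-V\Virtual Hard Disks\legacy.vhd"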
Virtual disk types

Beyond the two formats of virtual hard disks, let's talk about the different types of virtual hard disks and their utility as per the virtualization design. There are three types of virtual hard disks, namely dynamically expanding, fixed-size, and differencing virtual hard disks:

Dynamically expanding: Also called a dynamic virtual hard disk, this is the default type. It gets created when you create a new VM or a new VHD/VHDX. This is Hyper-V's take on thin provisioning. The VHD/VHDX file will start off at a small size and gradually grow up to the maximum defined size for the file as and when chunks of data get appended or created inside the OSE (short for operating system environment) hosted by the virtual disk. This disk type is quite beneficial, as it prevents storage overhead and utilizes only as much as required, rather than committing the entire block. However, due to the nature of this virtual storage, as it grows in size, the actual file gets written in fragments across the Hyper-V CSV or LUN (physical storage). Hence, it affects the performance of the disk I/O operations of the VM.

Fixed size: As the name indicates, this virtual disk type commits the same block size on the physical storage as its defined size. In other words, if you have specified a fixed size of 1 TB, it will create a 1 TB VHDX file in the storage. The creation of a fixed-size disk takes a considerable amount of time, commits space on the underlying storage, and does not allow SAN thin provisioning to reclaim it, somewhat like whitespace in a database. The advantage of using this type is that it delivers amazing read performance, and heavy workloads from SQL and Exchange can benefit from it.

Differencing: This is the last of the lot, but quite handy as an option when it comes to quick deployment of virtual machines. It is by far an unsuitable option unless employed for VMs with a short lifespan, namely pooled VDI (short for virtual desktop infrastructure) or lab testing. The idea behind the design is to have a generic virtual operating system environment (VOSE) in a shut down state at a shared location. The VHDX of the VOSE is used as a parent or root, and thereafter, multiple VMs can be spawned with differencing or child virtual disks that use the generalized OS from the parent and append changes or modifications to the child disk. So, the parent stays unaltered and serves as a generic image. It does not grow in size; on the contrary, the child disk keeps growing as and when data is added to the particular VM. Unless used for short-lived VMs, long-running VMs could enter an outage state or may soon be performance-stricken due to the unpredictable growth pattern of a differencing disk. Hence, these should be avoided for server virtual machines without even a second thought.

Virtual disk operations

Now we will apply all of the knowledge gained about virtual hard disks, and check out what actions and customizations we can perform on them.

Creating virtual hard disks

This goal can be achieved in different ways:

You can create a new VHD when you are creating a new VM, using the New Virtual Machine Wizard. It picks up VHDX as the default option.

You can also launch the New Virtual Hard Disk Wizard from a virtual machine's settings.

This can be achieved with PowerShell cmdlets as well, namely New-VHD; a sketch of its use follows after this list.

You may employ the Disk Management snap-in to create a new VHD as well.
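Here is a small sketch of the cmdlet route for each of the three disk types. The paths and sizes are assumptions; the parameters are standard to the New-VHD cmdlet in the Hyper-V module:

# Dynamically expanding VHDX (the default type)
New-VHD -Path "E:\Hyper-V\Virtual Hard Disks\dynamic.vhdx" -SizeBytes 100GB -Dynamic

# Fixed-size disk; expect the command to take a while, as the space is committed upfront
New-VHD -Path "E:\Hyper-V\Virtual Hard Disks\fixed.vhdx" -SizeBytes 100GB -Fixed

# Differencing disk pointing to a generalized parent image
New-VHD -Path "E:\Hyper-V\Virtual Hard Disks\child.vhdx" -ParentPath "E:\Hyper-V\Virtual Hard Disks\parent.vhdx" -Differencing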
The steps to create a VHD here are pretty simple:

In the Disk Management snap-in, select the Action menu and select Create VHD, like this:

Figure 5-1: Disk Management – Create VHD

This opens the Create and Attach Virtual Hard Disk applet. Specify the location to save the VHD at, and fill in Virtual hard disk format and Virtual hard disk type as depicted in figure 5-2:

Figure 5-2: Disk Management – Create and Attach Virtual Hard Disk

The most obvious way to create a new VHD/VHDX for a VM is by launching the New Virtual Hard Disk Wizard from the Actions pane in the Hyper-V Manager console. Click on New and then select the Hard Disk option. It will take you to the following set of screens:

On the Before You Begin screen, click on Next, as shown in this screenshot:

Figure 5-3: New Virtual Hard Disk Wizard – Create VHD

The next screen is Choose Disk Format, as shown in figure 5-4. Select the relevant virtual hard disk format, namely VHD or VHDX, and click on Next.

Figure 5-4: New Virtual Hard Disk Wizard – Virtual Hard Disk Format

On the Choose Disk Type screen, select the relevant virtual hard disk type and click on Next, as shown in the following screenshot:

Figure 5-5: New Virtual Hard Disk Wizard – Virtual Hard Disk Type

The next screen, as shown in figure 5-6, is Specify Name and Location. Update the Name and Location fields to store the virtual hard disk and click on Next.

Figure 5-6: New Virtual Hard Disk Wizard – File Location

The Configure Disk screen, shown in figure 5-7, is an interesting one. If needs be, you can convert or copy the content of a physical storage (local, LUN, or something else) to the new virtual hard disk. Similarly, you can copy the content from an older VHD file to the Windows Server 2012 or R2 VHDX format. Then click on Next.

Figure 5-7: New Virtual Hard Disk Wizard – Configure Disk

On the Summary screen, as shown in the following screenshot, click on Finish to create the virtual hard disk:

Figure 5-8: New Virtual Hard Disk Wizard – Summary

Editing virtual hard disks

There may be one or more reasons for you to feel the need to modify a previously created virtual hard disk to suit a purpose. There are many available options that you may put to use, given a particular virtual disk type. Before you edit a VHDX, it's a good practice to inspect the VHDX or VHD. The Inspect Disk option can be invoked from two locations: from the VM settings under the IDE or SCSI controller, or from the Actions pane of the Hyper-V Manager console. Also, don't forget how to do this via PowerShell:

Get-VHD -Path "E:\Hyper-V\Virtual hard disks\1.vhdx"

You may now proceed with editing a virtual disk. Again, the Edit Disk option can be invoked in exactly the same fashion as Inspect Disk. When you edit a VHDX, you are presented with four options, as shown in figure 5-9. It may sound obvious, but not all the options apply to all the disk types:

Compact: This operation is used to reduce or compact the size of a virtual hard disk, though the preset capacity remains the same. A dynamic disk, or differencing disk, grows as data elements are added, though deletion of the content does not automatically reclaim the storage capacity. Hence, a manual compact operation becomes imperative to reduce the file size. A PowerShell cmdlet can also do this trick, as follows:

Optimize-VHD

Convert: This is an interesting one, and it almost makes you change your faith. As the name indicates, this operation allows you to convert one virtual disk type to another and vice versa.
You can also create a new virtual disk of the desired format and type at your preferred location. The PowerShell construct used to help you achieve the same goal is as follows:

Convert-VHD

Expand: This operation comes in handy, similar to extending a LUN. You end up increasing the size of a virtual hard disk, which happens visibly fast for a dynamic disk and a bit slower for its fixed-size cousins. After this action, you have to perform the follow-up action inside the virtual machine to increase the volume size from disk management. Now, for the PowerShell code:

Resize-VHD

Merge: This operation is specific to one disk type: differencing virtual disks. It allows two different actions. You can either merge the differencing disk with the original parent, or create a new merged VHD out of all the contributing VHDs, namely the parent and the child or differencing disks. The latter is the preferred way of doing it, as in all probability there will be more than one differencing disk chained to a parent. In PowerShell, the corresponding cmdlet is this:

Merge-VHD

Figure 5-9: Edit Virtual Hard Disk Wizard – Choose Action

Pass-through disks

As the name indicates, these are physical LUNs or hard drives passed on from the Hyper-V hosts, and they can be assigned to a virtual machine as a standard disk. A once popular method on older Hyper-V platforms, this allowed the VM to harness the full potential of the raw device by bypassing the Hyper-V host filesystem and also not getting restricted by the 2 TB limit of VHDs. A lot has changed over the years, as Hyper-V has matured into a superior virtualization platform and introduced VHDX, which went past the size limitation and, with Windows Server 2012 R2, can be used as a shared storage for Hyper-V guest clusters.

There are, however, demerits to this virtual storage. When you employ a pass-through disk, the virtual machine configuration file is stored separately; hence, snapshotting becomes unavailable to this setup. You would not be able to utilize the dynamic disk's or differencing disk's abilities here either. Another challenge of using this form of virtual storage is that, when using a VSS-based backup, the VSS writer ignores pass-through and iSCSI LUNs. Hence, a complex backup plan has to be implemented, involving running backups within the VM and on the virtualization host separately.

The following are the steps, along with a few snapshots, that show you how to set up a pass-through disk:

Present a LUN to the Hyper-V host.

Confirm the LUN in Disk Management and ensure that it stays in the Offline state and as Not Initialized.

Figure 5-10: Hyper-V Host Disk Management

In Hyper-V Manager, right-click on the VM you wish to assign the pass-through disk to and select Settings.

Figure 5-11: VM Settings – Pass-through disk placement

Select SCSI Controller (or IDE in the case of a Gen-1 VM) and then select the Physical hard disk option, as shown in the preceding screenshot. In the drop-down menu, you will see the raw device or LUN you wish to assign. Select the appropriate option and click on OK.

Check Disk Management within the virtual machine to confirm that the disk has visibility.

Figure 5-12: VM Disk Management – Pass-through Assignment

Bring it online and initialize it.
Figure 5-13: VM Disk Management – Pass-through Initialization

As always, the preceding path can be chalked out with the help of a PowerShell cmdlet:

Add-VMHardDiskDrive -VMName VM5 –ControllerType SCSI –ControllerNumber 0 –ControllerLocation 2 –DiskNumber 3

Virtual fibre channel

Let's move on to the next big offering in the Windows Server 2012 and R2 Hyper-V Server. There was pretty much a clamor for direct FC connectivity to virtual machines, as raw device mapping was supported only via iSCSI LUNs (with some major drawbacks), not with FC. Also, needless to say, FC is faster. Enterprises with high-performance workloads relying on the FC SAN refrained from virtualizing or migrating to the cloud.

Windows Server 2012 introduced the virtual fibre channel SAN ability in Hyper-V, which extended HBA (short for host bus adapter) abilities to a virtual machine, granting it a WWN (short for World Wide Name) and allowing access to a fibre channel SAN over a virtual SAN. The fundamental principle behind the virtual SAN is the same as that of the Hyper-V virtual switch, wherein you create a virtual SAN that hooks up to the SAN fabric over the physical HBA of the Hyper-V host. The virtual machine gets a new piece of synthetic hardware for the last leg. It is called a virtual host bus adapter, or vHBA, which gets its own set of WWNs, namely a WWNN (node name) and a WWPN (port name). The WWN is to the FC protocol what the MAC address is to Ethernet. Once the WWNs are identified at the fabric and the virtual SAN, the storage admins can set up zoning and present the LUN to the specific virtual machine.

The concept is straightforward, but there are prerequisites that you will need to ensure are in place before you can get down to the nitty-gritty of the setup:

One or more Windows Server 2012 or R2 Hyper-V hosts.

Hosts should have one or more FC HBAs with the latest drivers, and these should support the virtual fibre channel and NPIV. NPIV may be disabled at the HBA level (refer to the vendor documentation prior to deployment). It can be enabled using command-line utilities or GUI-based ones such as OneCommand Manager, SANsurfer, and so on.

NPIV should be enabled on the SAN fabric or the actual ports.

Storage arrays are transparent to NPIV, but they should support devices that present LUNs.

Supported guest operating systems for the virtual SAN are Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2.

The virtual fibre channel does not allow boot from SAN, unlike pass-through disks.

We are now done with the prerequisites! Now, let's look at two important aspects of SAN infrastructure, namely NPIV and MPIO.

N_Port ID virtualization (NPIV)

An ANSI T11 standard extension, this feature allows virtualization of the N_Port (WWPN) of an HBA, allowing multiple FC initiators to share a single HBA port. The concept is popular and is widely accepted and promoted by different vendors. Windows Server 2012 and R2 Hyper-V utilize this feature to the fullest, wherein each virtual machine partaking in the virtual SAN gets assigned a unique WWPN and access to the SAN over a physical HBA spawning its own N_Port. Zoning follows next, wherein the fabric can have the zone directed to the VM WWPN. This attribute leads to a very small footprint and, thereby, easier management and lower operational and capital expenditure.
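For completeness, here is a rough sketch of how a virtual SAN and a vHBA can be wired up with PowerShell. The SAN name and VM name are assumptions for illustration; New-VMSan and Add-VMFibreChannelAdapter are part of the Hyper-V module, though the exact parameters you use will depend on how the host's FC initiator ports are enumerated:

# Create a virtual SAN bound to the host's fibre channel initiator port(s)
New-VMSan -Name "ProductionSAN" -HostBusAdapter (Get-InitiatorPort | Where-Object {$_.ConnectionType -eq "Fibre Channel"})

# Give a VM its own vHBA attached to that virtual SAN; the WWNs are auto-generated
Add-VMFibreChannelAdapter -VMName VM5 -SanName "ProductionSAN"

# Read back the WWPNs so that the storage team can configure zoning
Get-VMFibreChannelAdapter -VMName VM5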
Summary

It is going to be quite a realization that we have covered almost all the basic attributes and aspects required for a simple Windows Server 2012 R2 Hyper-V infrastructure setup. If we review the contents, we will notice this: we started off in this article by understanding and defining the purpose of virtual storage, and what the available options are for storage to be used with a virtual machine. We reviewed the various virtual hard disk types and formats, and the associated operations that may be required to customize a particular type or modify it accordingly. We recounted how the VHDX format is superior to its predecessor, VHD, and which features were added with the latest Windows Server releases, namely 2012 and 2012 R2. We discussed shared VHDX and how it can be used as an alternative to the old-school iSCSI or FC LUN as shared storage for Windows guest clustering. Pass-through disks are on their way out, and we all know the reason why. The advent of the virtual fibre channel with Windows Server 2012 has opened the doors for virtualization of high-performance workloads that rely heavily on FC connectivity, which until now was reason enough to decline consolidation of these workloads.

Resources for Article:
Further resources on this subject:
Hyper-V Basics [article]
Getting Started with Hyper-V Architecture and Components [article]
Hyper-V building blocks for creating your Microsoft virtualization platform [article]

Detecting Touchscreen Gestures

Packt
06 Aug 2015
18 min read
In this article by Kyle Mew, author of the book Android 5 Programming by Example, we will learn how to:

Add a GestureDetector to a view
Add an OnTouchListener and an OnGestureListener
Detect and refine fling gestures
Use the DDMS Logcat to observe the MotionEvent class
Edit the Logcat filter configuration
Simplify code with a SimpleOnGestureListener
Add a GestureDetector to an Activity
Edit the Manifest to control launch behavior
Hide UI elements
Create a splash screen
Lock screen orientation

(For more resources related to this topic, see here.)

Adding a GestureDetector to a view

Together, view.GestureDetector and view.View.OnTouchListener are all that are required to provide our ImageView with gesture functionality. The listener contains an onTouch() callback that relays each MotionEvent to the detector. We are going to program the large ImageView so that it can display a small gallery of related pictures that can be accessed by swiping left or right on the image. There are two steps to this task: before we implement our gesture detector, we need to provide the data for it to work on.

Adding the gallery data

As this app is for demonstration and learning purposes, and so that we can progress as quickly as possible, we will only provide extra images for one or two of the ancient sites in the project. Here is how it's done:

Open the Ancient Britain project.

Open the MainData.java file.

Add the following arrays:

static Integer[] hengeArray = {R.drawable.henge_large, R.drawable.henge_2, R.drawable.henge_3, R.drawable.henge_4};
static Integer[] horseArray = {};
static Integer[] wallArray = {R.drawable.wall_large, R.drawable.wall_2};
static Integer[] skaraArray = {};
static Integer[] towerArray = {};
static Integer[][] galleryArray = {hengeArray, horseArray, wallArray, skaraArray, towerArray};

Either download the project files from the Packt website or find four of your own images (around 640 x 480 px). Name them henge_2, henge_3, henge_4, and wall_2 and place them in your res/drawable directory.

This is all very straightforward, and the code that will accompany it allows you to have individual arrays of any length. This is all we need to add to our gallery data. Now, we need to code our GestureDetector and OnTouchListener.

Adding the GestureDetector

Along with the OnTouchListener that we will define for our ImageView, the GestureDetector has its own listeners. Here we will use GestureDetector.OnGestureListener to detect a fling gesture and collect the MotionEvents that describe it. Follow these steps to program your ImageView to respond to fling gestures:

Open the DetailActivity.java file.
Declare the following class fields:

private static final int MIN_DISTANCE = 150;
private static final int OFF_PATH = 100;
private static final int VELOCITY_THRESHOLD = 75;
private GestureDetector detector;
View.OnTouchListener listener;
private int ImageIndex;

In the onCreate() method, assign both the detector and listener like this:

detector = new GestureDetector(this, new GalleryGestureDetector());
listener = new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return detector.onTouchEvent(event);
    }
};

Beneath this, add the following line:

ImageIndex = 0;

Beneath the line detailImage = (ImageView) findViewById(R.id.detail_image);, add the following line:

detailImage.setOnTouchListener(listener);

Create the following inner class:

class GalleryGestureDetector implements GestureDetector.OnGestureListener {

}

Before dealing with the errors this generates, add the following field to the class:

private int item;
{
    item = MainActivity.currentItem;
}

Click anywhere on the line registering the error and press Alt + Enter. Then select Implement Methods, making sure that you have the Copy JavaDoc and Insert @Override boxes checked.

Complete the onDown() method like this:

@Override
public boolean onDown(MotionEvent e) {
    return true;
}

Fill in the onShowPress() method:

@Override
public void onShowPress(MotionEvent e) {
    detailImage.setElevation(4);
}

Then fill in the onFling() method:

@Override
public boolean onFling(MotionEvent event1, MotionEvent event2, float velocityX, float velocityY) {
    if (Math.abs(event1.getY() - event2.getY()) > OFF_PATH)
        return false;
    if (MainData.galleryArray[item].length != 0) {
        // Swipe left
        if (event1.getX() - event2.getX() > MIN_DISTANCE && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
            ImageIndex++;
            if (ImageIndex == MainData.galleryArray[item].length)
                ImageIndex = 0;
            detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
        } else {
            // Swipe right
            if (event2.getX() - event1.getX() > MIN_DISTANCE && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
                ImageIndex--;
                if (ImageIndex < 0)
                    ImageIndex = MainData.galleryArray[item].length - 1;
                detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
            }
        }
    }
    detailImage.setElevation(0);
    return true;
}

Test the project on an emulator or handset.

The process of gesture detection in the preceding code begins when the OnTouchListener listener's onTouch() method is called. It then passes that MotionEvent to our gesture detector class, GalleryGestureDetector, which monitors motion events, sometimes stringing them together and timing them, until one of the recognized gestures is detected. At this point, we can enter our own code to control how our app responds, as we did here with the onDown(), onShowPress(), and onFling() callbacks. It is worth taking a quick look at these methods in turn.

It may seem, at first glance, that the onDown() method is redundant; after all, it's the fling gesture that we are trying to catch. In fact, overriding the onDown() method and returning true from it is essential in all gesture detections, as all gestures begin with an onDown() event.

The purpose of the onShowPress() method may also appear unclear, as it seems to do little more than onDown(). As the JavaDoc states, this method is handy for adding some form of feedback to the user, acknowledging that their touch has been received. The Material Design guidelines strongly recommend such feedback, and here we have raised the view's elevation slightly.
Without including our own code, the onFling() method will recognize almost any movement across the bounding view that ends in the user's finger being raised, regardless of direction or speed. We do not want very small or very slow motions to result in action; furthermore, we want to be able to differentiate between vertical and horizontal movement as well as between left and right swipes. The MIN_DISTANCE and OFF_PATH constants are in pixels and VELOCITY_THRESHOLD is in pixels per second. These values will need tweaking according to the target device and personal preference.

The first MotionEvent argument in onFling() refers to the preceding onDown() event and, like any MotionEvent, its coordinates are available through its getX() and getY() methods. The MotionEvent class contains dozens of useful methods for querying various event properties: for example, getDownTime(), which returns the time in milliseconds since the current onDown() event.

In this example, we used GestureDetector.OnGestureListener to capture our gesture. However, the GestureDetector has three such nested classes, the other two being SimpleOnGestureListener and OnDoubleTapListener. SimpleOnGestureListener provides a more convenient way to detect gestures, as we only need to implement those methods that relate to the gestures we are interested in capturing. We will shortly edit our Activity so that it implements the SimpleOnGestureListener instead, allowing us to tidy our code and remove the four callbacks that we do not need.

The reason for taking this detour, rather than applying the simple listener to begin with, was to see all of the gestures available to us through a gesture listener and to demonstrate how useful JavaDoc comments can be, particularly if we are new to the framework. For example, take a look at the following screenshot:

Another very handy tool is the Dalvik Debug Monitor Server (DDMS), which allows us to see what is going on inside our apps while they are running. The workings of our gesture listener are a good place to do this, as most of its methods operate invisibly.

Viewing gesture activity with DDMS

To view the workings of our OnGestureListener with DDMS, we need to first create a tag to identify our messages and then a filter to view them. The following steps demonstrate how to do this:

Open the DetailActivity.java file.

Declare the following constant:

private static final String DEBUG_TAG = "tag";

Add the following line inside the onDown() method:

Log.d(DEBUG_TAG, "onDown");

Add the line Log.d(DEBUG_TAG, "onShowPress"); to the onShowPress() method and do the same for each of our OnGestureListener methods.

Add the following lines to the appropriate clauses in onFling():

Log.d(DEBUG_TAG, "left");
Log.d(DEBUG_TAG, "right");

Open the Android DDMS pane from the Android tab at the bottom of the window or by pressing Alt + 6. If the logcat is not visible, it can be opened with the icon to the right of the top-right drop-down menu.

Click on this drop-down menu and select Edit Filter Configuration.

Complete the dialog as shown in the following screenshot:

You can now run the project on a handset or emulator and view, in the Logcat, which gestures are being triggered and how.
Your output should resemble the one here:

02-17 14:39:00.990 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:01.039 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onSingleTapUp
02-17 14:39:03.503 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:03.601 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onShowPress
02-17 14:39:04.101 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onLongPress
02-17 14:39:10.484 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:10.541 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onScroll
02-17 14:39:11.091 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onScroll
02-17 14:39:11.232 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onFling
02-17 14:39:11.680 1430-1430/com.example.kyle.ancientbritain D/tag﹕ right

DDMS is an invaluable tool when it comes to debugging our apps and seeing what is going on beneath the hood. Once a Log tag has been defined in the code, we can then create a filter for it so that we see only the messages we are interested in. The Log class contains several methods to report information based on its level of importance. We used Log.d, which stands for debug. All these methods work with the same two parameters: Log.[method](String tag, String message). The full list of these methods is as follows:

Log.v: Verbose
Log.d: Debug
Log.i: Information
Log.w: Warning
Log.e: Error
Log.wtf: Unexpected error

It is worth noting that most debug messages will be ignored during packaging for distribution, except for the verbose messages; thus, it is essential to remove these before your final build.

Having seen a little more of the inner workings of our gesture detector and listener, we can now strip our code of unused methods by implementing GestureDetector.SimpleOnGestureListener.

Implementing a SimpleOnGestureListener

It is very simple to convert our gesture detector from one class of listener to another. All we need to do is change the class declaration and delete the unwanted methods. To do this, perform the following steps:

Open the DetailActivity file.

Change the class declaration for our gesture detector class to the following:

class GalleryGestureDetector extends GestureDetector.SimpleOnGestureListener {

Delete the onShowPress(), onSingleTapUp(), onScroll(), and onLongPress() methods.

This is all you need to do to switch to the SimpleOnGestureListener. We have now successfully constructed and edited a gesture detector to allow the user to browse a series of images.

You will have noticed that there is no onDoubleTap() method in the gesture listener. Double-taps can, in fact, be handled with the third GestureDetector listener, OnDoubleTapListener, which operates in a very similar way to the other two. However, Google, in its UI guidelines, recommends that a long press should be used instead whenever possible.

Before moving on to multitouch events, we will take a look at how to attach a GestureDetector listener to an entire Activity by adding a splash screen to our project. In the process, we will also see how to create a full-screen Activity and how to edit the Manifest file so that our app launches with the splash screen.

Adding a GestureDetector to an Activity

The method we have employed so far allows us to attach a GestureDetector listener to any view or views, and this, of course, applies to ViewGroups such as Layouts.
There are times when we may want to detect gestures applied to the whole screen. For this purpose, we will create a splash screen that can be dismissed with a long press.

There are two things we need to do before implementing the gesture detector: creating a layout and editing the Manifest file so that the app launches with our splash screen.

Designing the splash screen layout

The main difference between processing gestures for a whole Activity and for an individual widget is that we do not need an OnTouchListener, as we can override the Activity's own onTouchEvent(). Here is how it is done:

Create a new Blank Activity from the Project Explorer context menu called SplashActivity.java.

The Activity wizard should have created an associated XML layout called activity_splash.xml. Open this and view it using the Text tab.

Remove all the padding properties from the root layout so that it looks similar to this:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.kyle.ancientbritain.SplashActivity">

Here we will need an image to act as the background for our splash screen. If you have not downloaded the project files from the Packt website, find an image, roughly of the size and aspect of your target device's screen, upload it to the project drawable folder, and call it splash. The file I used is 480 x 800 px.

Remove the TextView that the wizard placed inside the layout and replace it with this ImageView:

<ImageView
    android:id="@+id/splash_image"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/splash"/>

Create a TextView beneath this, such as the following:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:layout_centerHorizontal="true"
    android:layout_marginBottom="40dp"
    android:gravity="center_horizontal"
    android:textAppearance="?android:attr/textAppearanceLarge"
    android:textColor="#fffcfcbd"/>

Add the following text property:

android:text="Welcome to <b>Ancient Britain</b>\npress and hold\nanywhere on the screen\nto start"

To save time adding string resources to the strings.xml file, enter a hardcoded string such as the preceding one and heed the warning from the editor to have the string extracted for you like this:

There is nothing in this layout that we have not encountered before. We removed all the padding so that our splash image will fill the layout; however, you will see from the preview that this does not appear to be the case. We will deal with this next in our Java code, but we need to edit our Manifest first so that the app gets launched with our SplashActivity.

Editing the Manifest

It is very simple to configure the AndroidManifest file so that an app will get launched with whichever Activity we choose; the way it does so is with an intent. While we are editing the Manifest, we will also configure the display to fill the screen. Simply follow these steps:

Open the res/values-v21/styles.xml file and add the following style:

<style name="SplashTheme" parent="android:Theme.Material.NoActionBar.Fullscreen">
</style>

Open the AndroidManifest.xml file.

Cut-and-paste the <intent-filter> element from MainActivity to SplashActivity.
First, though, we need to edit our Manifest so that the app gets launched with our SplashActivity.

Editing the Manifest

It is very simple to configure the AndroidManifest file so that an app will get launched with whichever Activity we choose; the way it does so is with an intent. While we are editing the Manifest, we will also configure the display to fill the screen. Simply follow these steps:

1. Open the res/values-v21/styles.xml file and add the following style:

   <style name="SplashTheme" parent="android:Theme.Material.NoActionBar.Fullscreen">
   </style>

2. Open the AndroidManifest.xml file.
3. Cut-and-paste the <intent-filter> element from MainActivity to SplashActivity.
4. Include the following properties so that the entire <activity> node looks similar to this:

   <activity
       android:name=".SplashActivity"
       android:theme="@style/SplashTheme"
       android:screenOrientation="portrait"
       android:configChanges="orientation|screenSize"
       android:label="Old UK" >
       <intent-filter>
           <action android:name="android.intent.action.MAIN" />
           <category android:name="android.intent.category.LAUNCHER" />
       </intent-filter>
   </activity>

We have encountered themes and styles before, and here we took advantage of a built-in theme designed for full-screen activities. In many cases, we might have designed a landscape layout here but, as is often the case with splash screens, we locked the orientation with the android:screenOrientation property.

The android:configChanges line is not actually needed here, but it is included as it is useful to know about. Configuring any attribute such as this prevents the system from automatically reloading the Activity whenever the device is rotated or the screen size is changed. Instead of the Activity restarting, the onConfigurationChanged() method is called. This was not needed here, as the screen size and orientation were taken care of in the previous lines of code; the line was only included as a point of interest.

Finally, we changed the value of android:label. You may have noticed that, depending on the screen size of the device you are using, the name of our app is not displayed in full on the home screen or apps drawer. In such cases, when you want to use a shortened name for your app, it can be inserted here.

With everything else in place, we can get on with adding our gesture detector. This is not dissimilar to the way we did it before but, this time, we will apply the detector to the whole screen and listen for a long press, rather than a fling.

Adding the GestureDetector

Along with implementing a gesture detector for the entire Activity here, we will also take the final step in configuring our splash screen so that the image fills the screen but maintains its aspect ratio. Follow these steps to complete the app splash screen:

1. Open the SplashActivity file.
2. Declare a GestureDetector as we did in the earlier exercise:

   private GestureDetector detector;

3. In the onCreate() method, assign and configure our splash image and gesture detector like this:

   ImageView imageView = (ImageView) findViewById(R.id.splash_image);
   imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
   detector = new GestureDetector(this, new SplashListener());

4. Now, override the Activity's onTouchEvent() like this:

   @Override
   public boolean onTouchEvent(MotionEvent event) {
       this.detector.onTouchEvent(event);
       return super.onTouchEvent(event);
   }

5. Create the following SimpleOnGestureListener class:

   private class SplashListener extends GestureDetector.SimpleOnGestureListener {
       @Override
       public boolean onDown(MotionEvent e) {
           return true;
       }

       @Override
       public void onLongPress(MotionEvent e) {
           startActivity(new Intent(getApplicationContext(), MainActivity.class));
       }
   }

6. Build and run the app on your phone or an emulator.

The way a gesture detector is implemented across an entire Activity should be familiar by this point, as should the capturing of the long press event. The ImageView.setScaleType(ImageView.ScaleType) method is essential here; it is a very useful method in general. The CENTER_CROP constant scales the image to fill the view while maintaining the aspect ratio, cropping the edges when necessary.
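Put together, the whole SplashActivity might look something like the following sketch; the imports, the onCreate() boilerplate, and the plain Activity base class are assumptions drawn from the wizard's template rather than code shown in the article:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.widget.ImageView;

public class SplashActivity extends Activity {

    private GestureDetector detector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_splash);

        // Scale the splash image to fill the screen, cropping rather than distorting
        ImageView imageView = (ImageView) findViewById(R.id.splash_image);
        imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);

        detector = new GestureDetector(this, new SplashListener());
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Forward every touch event on the Activity to our detector
        this.detector.onTouchEvent(event);
        return super.onTouchEvent(event);
    }

    private class SplashListener extends GestureDetector.SimpleOnGestureListener {

        @Override
        public boolean onDown(MotionEvent e) {
            return true;
        }

        @Override
        public void onLongPress(MotionEvent e) {
            // A long press anywhere on the screen dismisses the splash screen
            startActivity(new Intent(getApplicationContext(), MainActivity.class));
        }
    }
}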
There are several similar ScaleTypes, such as CENTER_INSIDE, which scales the image to the maximum size possible without cropping it, and CENTER, which does not scale the image at all. The beauty of CENTER_CROP is that we don't have to design a separate image for every possible aspect ratio on the numerous devices our apps will end up running on. Provided that we make allowances for very wide or very narrow screens by not including essential information too close to the edges, we only need to provide a handful of images of varying pixel densities to maintain image quality on large, high-resolution devices. The scale type of an ImageView can also be set from within XML, with android:scaleType="centerCrop", for example.

You may have wondered why we did not use the built-in Full-Screen Activity from the wizard; we could easily have done so. The template code the wizard creates for a Full-Screen Activity provides far more features than we needed for this exercise. Nevertheless, the template is worth taking a look at, especially if you want a full screen that brings the status bar and other components into view when the user interacts with the Activity.

That brings us to the end of this article. Not only have we seen how to make our apps interact with touch events and gestures, but also how to send debug messages to the IDE and how to make a Full-Screen Activity.

Summary

We began this article by adding a GestureDetector to our project. We then edited it so that we could filter out meaningful touch events (swipe right and left, in this case). We went on to see how the SimpleOnGestureListener can save us a lot of time when we are only interested in catching a subset of the recognized gestures. We also saw how to use DDMS to pass debug messages during runtime and how, through a combination of XML and Java, the status and action bars can be hidden and the entire screen filled with a single view or view group.

Resources for Article:

Further resources on this subject:
- Speeding up Gradle builds for Android [Article]
- Saying Hello to Unity and Android [Article]
- Testing with the Android SDK [Article]


Rundown Example

Packt
06 Aug 2015
10 min read
In this article by Miguel Oliveira, author of the book Microsoft System Center Orchestrator 2012 R2 Essentials, we will get started on the creation process. We will be guided through how to address and connect all the pieces together in order to successfully create a Runbook. (For more resources related to this topic, see here.)

Runbook for Active Directory User Account Provisioning

Now, for this Runbook, we've been challenged by our HR department to come up with a solution that allows them to create new user accounts for recently joined employees. The request was specifically drawn up with the target for HR to be able to:

- Provide the first and last name
- Provide the department name
- Get that user added to the proper department group and get all the information of the user
- Send the newly created account to the IT department to provide a machine, a phone, and an e-mail address

With these requirements at the back of our heads, let's see which activities we need to get into our Runbook. I'll place these in steps for this example, so it's easy to follow:

- Data input: We definitely need an activity to allow HR to feed the information into the Runbook. For this, we can use the Initialize Data activity (Runbook Control category), or we could work with a monitored file and read the data from a line, or even from a SharePoint list. To keep it simple for now, let's use Initialize Data.
- Data processing: The idea here is to retrieve the Department given by HR and process it to retrieve the group (the Get Group activity from the Active Directory category) and include our user (the Add User To Group activity from the Active Directory category) in the group we've retrieved; in between, we'll need to create the user account (the Create User activity from the Active Directory category) and generate a password (the Generate Random Text activity from the Utilities category).
- Data output: At the very end of all this, we send an e-mail (the Send Email activity from the Email category) back to HR with the account information and the status of its creation, and also inform our IT department (for security reasons) about the account that has been created.

We're also going to watch closely for errors with a few activities that will show us whether an error occurs. Let's see the look of this Runbook from a structured point of view (actually, almost how it's going to look in the end), and we'll detail the activities and options within them step by step from there. Here's the Runbook structured with the activities properly linked between them, allowing the data bus to flow and transport the published data from the beginning to the end.

As described in the steps, we start with an Initialize Data activity in which we're going to request some inputs from the person executing the Runbook. To create a user, we'll need the First Name, Last Name, and Department. For that, we'll fill in this information in the Fetch User Details activity seen in the previous screenshot. For the sake of avoiding errors, the HR department should have a proper list of departments that we know will translate into a proper group in the upcoming activities.

After filling in the information, the processing begins and, with it, our automation process that will find the group for that department, create our user account, set a password, force a password change on first login, add the user to the group, and enable the account.
For that, we'll start with the Get Group activity, in which we'll fill in the following: set up the proper configuration in the Get Group Properties window for the Active Directory Domain in which you want this to execute and, in the Filters options, set a filter so that the Sam Account Name of the group equals the Department filled in by the HR department.

Now we'll set up another prerequisite for creating the account: the password! For this, we'll take the Generate Random Text activity and configure its parameters. These values should be set to accommodate your existing security policy and the minimum password requirements for your domain.
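Orchestrator handles all of this inside the Generate Random Text activity, so no code is needed in the Runbook itself. Purely as an illustration of what such a generator does under the hood, here is a minimal Java sketch (every name in it is our own, not part of Orchestrator) that produces a password containing at least one upper-case letter, one lower-case letter, one digit, and one symbol:

import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PasswordSketch {

    private static final String UPPER = "ABCDEFGHJKLMNPQRSTUVWXYZ";
    private static final String LOWER = "abcdefghjkmnpqrstuvwxyz";
    private static final String DIGITS = "23456789";
    private static final String SYMBOLS = "!$%&*+-?";

    public static String generate(int length) {
        SecureRandom random = new SecureRandom();
        List<Character> chars = new ArrayList<>();

        // Guarantee at least one character from each class, as a typical
        // minimum-complexity policy would demand
        String[] classes = {UPPER, LOWER, DIGITS, SYMBOLS};
        for (String cls : classes) {
            chars.add(cls.charAt(random.nextInt(cls.length())));
        }

        // Fill the remainder from the union of all classes
        String all = UPPER + LOWER + DIGITS + SYMBOLS;
        for (int i = chars.size(); i < length; i++) {
            chars.add(all.charAt(random.nextInt(all.length())));
        }

        // Shuffle so the mandatory characters do not always sit at the start
        Collections.shuffle(chars, random);
        StringBuilder sb = new StringBuilder(length);
        for (char c : chars) {
            sb.append(c);
        }
        return sb.toString();
    }
}

A call such as generate(12) would then satisfy a typical 12-character minimum-length policy.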
These previous activities are all we need to have the necessary values to proceed with the account creation using the Create User activity. These are the parameters to fill in; all of them are actually retrieved from the Published Data of the previous activities. As the list is long, we'll detail it here for your better understanding. Everything between {} is Published Data:

- Common Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
- Department: {Display Name from "Get Group"}
- Display Name: {First Name from "Fetch User Details"} {Last Name from "Fetch User Details"}
- First Name: {First Name from "Fetch User Details"}
- Last Name: {Last Name from "Fetch User Details"}
- Password: {Random text from "Generate Random Text"}
- User Must Change Password: True
- SAM Account Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}
- User Principal Name: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.local
- Email: {First Name from "Fetch User Details"}.{Last Name from "Fetch User Details"}@test.com
- Manager: {Managed By from "Get Group"}

As said previously, most of the data comes from the Published Data, and we've created subscriptions in all these fields to retrieve it. The only fields that hold something other than plain Published Data are User Must Change Password, User Principal Name (UPN), and Email. User Must Change Password is a Boolean field that will display only Yes or No, and in the UPN and Email fields we've appended the domain information (@test.local and @test.com) to the Published Data.

Depending on the Create User activity's output, it will trigger a different activity. For now, let's assume that the activity returns success on execution; this will make the Runbook follow the smart link that goes on to the Get User activity. The Get User activity will retrieve all the information concerning the newly created user account, which will be useful for the next activities down the line. In order to retrieve the proper information, we'll need to configure the Filters area within the activity: add a filter, selecting Sam Account Name, with Relation set to Equals and Value set to the subscribed Sam Account Name data that comes out of the Create User activity.

From here, we'll link to the Add User To Group activity (here renamed to Add User to Department), and within that activity we're going to specify the group and the user so that the activity can add the user to the group. It should look exactly like the screenshot that follows.

We'll once again assume that everything is running as expected and prepare our next activity, which is to enable the user account; for this one, we'll use the Enable User activity. Once again, we'll get the information out of the Published Data and feed it into the activity.

After this activity is completed, we're going to log the execution and information output into the platform with the Send Platform Event activity, so we can see any necessary information available from the execution. To get the Details text box expanded, right-click on it and select Expand… from the menu; you can then format and include the data that you feel is most important to see.

Then we'll send an e-mail to the HR team with the account creation details so that they can communicate them to the newly arrived employee, and another e-mail to the IT department with only the account name and the department (plus the group name) for security reasons. Let's go point by point on the configuration of the HR e-mail. In the Details section, we've set the following:

- Subject: Account {Sam Account Name from "Get User"} Created
- Recipients: to: hr.dept@test.com
- Message: the message body, as given in the following screenshot

The Message option also lets you choose the Priority of the message (high, normal, or low) and set the necessary SMTP authentication parameters (account, password, and domain) so that you can send the message through your e-mail service. If you have an application e-mail service relay, you can leave the SMTP authentication without any configuration. In the Connect option, you'll find the place to configure the e-mail address that you want the user to see and the SMTP connection (server, port, and SSL) through which you'll send your messages.

Now, our Send Email IT activity will be more or less the same, with the exception of the destination and the message itself.

By now you've got the idea and you're pumped to create new Runbooks, but we still have to do some error control on some of these tasks; although they're chained, if one fails, everything fails. So, for this Runbook, we'll create error control on the two tasks that, if we observe well, are more or less the only two that can fail. One is the Create User activity, which can fail because the user account already exists or because of some issue with privileges on its creation. The other is Add User To Department, which might fail to add the user to the group for some reason. For this, we'll create two Send Event Log Message notification activities, which we'll rename to User Account Error and Group Error respectively. If we look into the User Account Error activity, we'll set something more or less like the following:

- Computer: This is the computer whose Windows Event Viewer we're going to write the event into. In this case, we'll concentrate on our Management Server, but you might have a logging server for this.
- Message: The message that gets logged into the Windows Event Viewer. Here, we can subscribe to the error data coming out of the last activity executed.
- Severity: This is usually Error. You can set Information or Warning if you are deploying these activities to keep track of each given step.

For our Group Error properties, the philosophy will be the same.
Now that we are all set, we'll need to work our smart links so that they direct the Runbook execution flow into the following activity, depending on the previous activity's output (success or error). In the end, your Runbook should look much like the structure shown at the start of this walkthrough, now with the error-control branches in place.

That's it for the Runbook for Active Directory User Account Provisioning. We'll now speed up a little on the other Runbooks, as you'll have a much clearer understanding after this first sample.

Summary

We've seen one of the Runbook samples. These Runbooks should serve as the base for real-case scenarios in your environment, help with the creative process, and give a better understanding of the configuration necessary in each activity in order to proceed successfully.

Resources for Article:

Further resources on this subject:
- Unpacking System Center 2012 Orchestrator [article]
- Working with VMware Infrastructure [article]
- Unboxing Docker [article]