Creating Classes

Packt
11 Jul 2016
17 min read
In this article by William Sherif and Stephen Whittle, authors of the book Unreal Engine 4 Scripting with C++ Cookbook, we will discuss how to create C++ classes and structs that integrate well with the UE4 Blueprints editor. These classes are graduated versions of regular C++ classes, and are called UCLASSes.

A UCLASS is just a C++ class with a whole lot of UE4 macro decoration on top. The macros generate additional C++ header code that enables integration with the UE4 Editor itself. Using UCLASS is a great practice. The UCLASS macro, if configured correctly, can make your UCLASS Blueprintable. The advantage of making your UCLASS Blueprintable is that your custom C++ objects can have properties (UPROPERTY) that are visually editable in Blueprints, with handy UI widgets such as text fields, sliders, and model selection boxes. You can also have functions (UFUNCTION) that are callable from within a Blueprints diagram. Both of these are shown in the following images: on the left, two UPROPERTY-decorated class members (a UTexture reference and an FColor) show up for editing in a C++ class's Blueprint; on the right, a C++ function GetName, marked as a BlueprintCallable UFUNCTION, shows up as callable from a Blueprints diagram.

Code generated by the UCLASS macro is placed in a ClassName.generated.h file, which must be the last #include in your UCLASS header file, ClassName.h.

The following are the topics that we will cover in this article:
- Making a UCLASS – deriving from UObject
- Creating a user-editable UPROPERTY
- Accessing a UPROPERTY from Blueprints
- Specifying a UCLASS as the type of a UPROPERTY
- Creating a Blueprint from your custom UCLASS

Making a UCLASS – Deriving from UObject
When coding with C++, you can have your own code that compiles and runs as native C++ code, with appropriate calls to new and delete to create and destroy your custom objects. Native C++ code is perfectly acceptable in your UE4 project, as long as your new and delete calls are appropriately paired so that no leaks are present in your C++ code.

You can, however, also declare custom C++ classes that behave like UE4 classes, by declaring your custom C++ objects as UCLASSes. UCLASSes use UE4's smart pointers and memory management routines for allocation and deallocation according to smart pointer rules, can be loaded and read by the UE4 Editor, and can optionally be accessed from Blueprints. Note that when you use the UCLASS macro, your UCLASS object's creation and destruction must be completely managed by UE4: you must use ConstructObject to create an instance of your object (not the C++ native keyword new), and call UObject::ConditionalBeginDestroy() to destroy the object (not the C++ native keyword delete).

Getting ready
In this recipe, we will outline how to write a C++ class that uses the UCLASS macro to enable managed memory allocation and deallocation, as well as to permit access from the UE4 Editor and Blueprints. You need a UE4 project into which you can add new code to use this recipe.

How to do it...
1. From your running project, select File | Add C++ Class inside the UE4 Editor.
2. In the Add C++ Class dialog that appears, go to the upper-right side of the window and tick the Show All Classes checkbox. (Screenshot: creating a UCLASS by choosing to derive from the Object parent class; UObject is the root of the UE4 hierarchy.)
You must tick the Show All Classes checkbox in the upper-right corner of this dialog for the Object class to appear in the list view.
3. Select Object (at the top of the hierarchy) as the parent class to inherit from, and then click on Next. Note that although Object is written in the dialog box, the C++ class you will be deriving from is actually UObject, with a leading uppercase U. This is the naming convention of UE4:
- UCLASSes deriving from UObject (on a branch other than Actor) must be named with a leading U.
- UCLASSes deriving from Actor must be named with a leading A.
- C++ classes (that are not UCLASSes) deriving from nothing have no naming convention, but can be named with a leading F (for example, FAssetData) if preferred.

Direct derivatives of UObject are not level-placeable, even if they contain visual representation elements such as UStaticMeshes. If you want to place your object inside a UE4 level, you must at least derive from the Actor class, or from beneath it in the inheritance hierarchy. This article's example code is not placeable in the level, but you can create and use Blueprints based on the C++ classes that we write in this article in the UE4 Editor.
4. Name your new Object derivative something appropriate for the object type you are creating. I call mine UserProfile. This comes out as UUserProfile in the naming of the class in the C++ file that UE4 generates, so that the UE4 convention (C++ UCLASS names preceded with a leading U) is followed. We will use the C++ object that we've created to store the Name and Email of a user that plays our game.
5. Go to Visual Studio, and ensure your class file has the following form:

```cpp
#pragma once

#include "Object.h" // For deriving from UObject
#include "UserProfile.generated.h" // Generated code

// UCLASS macro options set this C++ class to be
// Blueprintable within the UE4 Editor
UCLASS( Blueprintable )
class CHAPTER2_API UUserProfile : public UObject
{
  GENERATED_BODY()
};
```

6. Compile and run your project. You can now use your custom UCLASS object inside Visual Studio and inside the UE4 Editor. See the following recipes for more details on what you can do with it.

How it works…
UE4 generates and manages a significant amount of code for your custom UCLASS. This code is generated as a result of the use of the UE4 macros such as UPROPERTY, UFUNCTION, and the UCLASS macro itself. The generated code is put into UserProfile.generated.h. You must #include the UCLASSNAME.generated.h file in the UCLASSNAME.h file for compilation to succeed, and it must be the last #include in the list of includes in UCLASSNAME.h.

Right:

```cpp
#pragma once

#include "Object.h"
#include "Texture.h"
// CORRECT: .generated.h is the last #include
#include "UserProfile.generated.h"
```

Wrong:

```cpp
#pragma once

#include "Object.h"
#include "UserProfile.generated.h"
// WRONG: no includes after the .generated.h file
#include "Texture.h"
```

The error that occurs when a UCLASSNAME.generated.h file is not included last in a list of includes is as follows:

```
>> #include found after .generated.h file - the .generated.h file should always be the last #include in a header
```
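Since UE4 manages the lifetime of UCLASS instances, here is a minimal sketch of creating and destroying a UUserProfile at runtime. It assumes a UE4 version where ConstructObject<> is available (later engine versions replace it with NewObject<>); the surrounding context is hypothetical:

```cpp
// Minimal sketch: UE4-managed creation and destruction of a UCLASS instance.
// Assumes a UE4 4.x engine where ConstructObject<> exists.
UUserProfile* Profile = ConstructObject<UUserProfile>(
    UUserProfile::StaticClass() ); // not the C++ keyword new

if ( Profile )
{
    // ... use the object ...
    Profile->ConditionalBeginDestroy(); // not the C++ keyword delete
    Profile = nullptr;                  // UE4's GC reclaims the memory
}
```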
There's more…
There are a bunch of keywords that modify the way a UCLASS behaves. A UCLASS can be marked as follows:
- Blueprintable: This means that you want to be able to construct a Blueprint from the class inside the UE4 Editor's Class Viewer (when you right-click, Create Blueprint Class… becomes available). Without the Blueprintable keyword, the Create Blueprint Class… option is not available for your UCLASS, even if you can find it within the Class Viewer and right-click on it.
- BlueprintType: Using this keyword means that the UCLASS is usable as a variable from another Blueprint. Any UCLASS that has BlueprintType specified can be added as a variable to your Blueprint class diagram's list of variables, from the Variables group in the left-hand panel of any Blueprint's EventGraph. If NotBlueprintType is specified, then you cannot use this type as a variable in a Blueprints diagram, and right-clicking the UCLASS name in the Class Viewer will not show Create Blueprint Class… in its context menu.

You may be unsure whether to declare your C++ class as a UCLASS or not. It is really up to you. If you like smart pointers, you may find that UCLASSes not only make for safer code, but also make the entire code base more coherent and more consistent.

See also
- To add additional programmable UPROPERTYs to the Blueprints diagrams, see the next section, Creating a user-editable UPROPERTY.

Creating a user-editable UPROPERTY
Each UCLASS that you declare can have any number of UPROPERTYs declared within it. Each UPROPERTY can be a visually editable field, or a Blueprints-accessible data member of the UCLASS. There are a number of qualifiers that we can add to each UPROPERTY that change the way it behaves within the UE4 Editor, such as EditAnywhere (which screens the UPROPERTY can be changed from) and BlueprintReadWrite (specifying that Blueprints can both read and write the variable, in addition to the C++ code being allowed to do so).

Getting ready
To use this recipe, you should have a C++ project into which you can add C++ code. In addition, you should have completed the preceding recipe, Making a UCLASS – Deriving from UObject.

How to do it...
1. Add members to your UCLASS declaration as follows:

```cpp
UCLASS( Blueprintable )
class CHAPTER2_API UUserProfile : public UObject
{
  GENERATED_BODY()
public:
  UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
  float Armor;
  UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
  float HpMax;
};
```

2. Create a Blueprint of your UObject class derivative, and open the Blueprint in the UE4 Editor by double-clicking it in the object browser.
3. You can now specify default values in Blueprints for these new UPROPERTY fields.
4. Specify per-instance values by dragging and dropping a few instances of the Blueprint class into your level, and editing the values on each placed object (by double-clicking on it).
How it works…
The parameters passed to the UPROPERTY() macro specify a couple of important pieces of information about the variable. In the preceding example, we specified the following:
- EditAnywhere: The property can be edited either directly on the Blueprint, or on each instance of the UClass object as placed in the game level. Contrast this with the following:
  - EditDefaultsOnly: The Blueprint's value is editable, but it is not editable on a per-instance basis.
  - EditInstanceOnly: The property is editable on the game-level instances of the UClass object, and not on the base Blueprint itself.
- BlueprintReadWrite: The property is both readable and writeable from Blueprints diagrams. UPROPERTYs with BlueprintReadWrite must be public members, otherwise compilation will fail. Contrast this with the following:
  - BlueprintReadOnly: The property must be set from C++ and cannot be changed from Blueprints.
- Category: You should always specify a Category for your UPROPERTY(). The Category determines which submenu the UPROPERTY() appears under in the property editor. All UPROPERTYs specified under Category=Stats appear in the same Stats area of the Blueprints editor.

See also
- A complete UPROPERTY listing is located at https://docs.unrealengine.com/latest/INT/Programming/UnrealArchitecture/Reference/Properties/Specifiers/index.html.

Accessing a UPROPERTY from Blueprints
Accessing a UPROPERTY from Blueprints is fairly simple. The member must be exposed as a UPROPERTY on the member variable that you want to access from your Blueprints diagram. You must qualify the UPROPERTY in your macro declaration as being either BlueprintReadOnly or BlueprintReadWrite, to specify whether you want the variable to be readable (only) from Blueprints, or also writeable from Blueprints. You can also use the special value BlueprintDefaultsOnly to indicate that you only want the default value (before the game starts) to be editable from the Blueprints editor; BlueprintDefaultsOnly indicates that the data member cannot be edited from Blueprints at runtime.

How to do it...
1. Create a UObject-derivative class specifying both Blueprintable and BlueprintType, such as the following:

```cpp
UCLASS( Blueprintable, BlueprintType )
class CHAPTER2_API UUserProfile : public UObject
{
  GENERATED_BODY()
public:
  UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
  FString Name;
};
```

The BlueprintType declaration in the UCLASS macro is required to use the UCLASS as a type within a Blueprints diagram.
2. Within the UE4 Editor, derive a Blueprint class from the C++ class, as shown in Creating a Blueprint from your custom UCLASS.
3. Create an instance of your Blueprint-derived class in the UE4 Editor by dragging an instance from the Content Browser into the main game world area. It should appear as a round white sphere in the game world, unless you've specified a model mesh for it.
4. In a Blueprints diagram that allows function calls (such as the Level Blueprint, accessible via Blueprints | Open Level Blueprint), try printing the Name property of your instance, as seen in the following screenshot.

Navigating Blueprints diagrams is easy: right-click + drag to pan a Blueprints diagram; Alt + right-click + drag to zoom.

How it works…
UPROPERTY members have Get/Set methods automatically written for them by UE4. They must not be declared as private variables within the UCLASS, however. If they are not declared as public or protected members, you will get a compiler error of the form:

```
>> BlueprintReadWrite should not be used on private members
```
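The article's introduction mentioned a GetName function marked BlueprintCallable, but its code is not listed here, so the following is a hedged sketch of what such a UFUNCTION could look like on the same class. Since UObject already defines a GetName() member, the sketch uses a hypothetical GetProfileName to avoid hiding it:

```cpp
UCLASS( Blueprintable, BlueprintType )
class CHAPTER2_API UUserProfile : public UObject
{
  GENERATED_BODY()
public:
  UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Stats)
  FString Name;

  // Appears as a callable node in any Blueprints diagram
  // (function name is our placeholder, not from the original text)
  UFUNCTION(BlueprintCallable, Category = Stats)
  FString GetProfileName() const { return Name; }
};
```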
Specifying a UCLASS as the type of a UPROPERTY
So, you've constructed some custom UCLASS intended for use inside of UE4. But how do you instantiate it? Objects in UE4 are reference-counted and memory-managed, so you should not allocate them directly using the C++ keyword new. Instead, you'll have to use a function called ConstructObject to instantiate your UObject derivative. ConstructObject doesn't just take the C++ class of the object you are creating; it also requires a Blueprint class derivative of the C++ class (a UClass* reference). A UClass* reference is just a pointer to a Blueprint.

How do we instantiate an instance of a particular Blueprint from C++ code? C++ code does not, and should not, know concrete UCLASS names, since these names are created and edited in the UE4 Editor, which you can only access after compilation. We need a way to somehow hand the name of the Blueprint class to instantiate back to the C++ code. The way we do this is by having the UE4 programmer select the UClass that the C++ code is to use from a simple drop-down menu listing all the Blueprints available (derived from a particular C++ class) inside the UE4 Editor. To do this, we simply have to provide a user-editable UPROPERTY with a TSubclassOf<C++ClassName>-typed variable. Alternatively, you can use FStringClassReference to achieve the same objective. This makes selecting the UCLASS in the C++ code exactly like selecting a Texture to use. UCLASSes should be considered as resources to the C++ code, and their names should never be hard-coded into the code base.

Getting ready
In your UE4 code, you're often going to need to refer to different UCLASSes in the project. For example, say you need to know the UCLASS of the player object so that you can use SpawnObject in your code on it. Specifying a UCLASS from C++ code is extremely awkward, because the C++ code is not supposed to know about the concrete instances of the derived UCLASSes that were created in the Blueprints editor at all. Just as we don't want to bake specific asset names into the C++ code, we don't want to hard-code derived Blueprint class names into the C++ code. So, we use a C++ variable (for example, UClassOfPlayer), and select its value from a Blueprints dialog in the UE4 Editor. You can do so using a TSubclassOf member or an FStringClassReference member, as shown in the following screenshot.

How to do it...
1. Navigate to the C++ class that you'd like to add the UCLASS reference member to. For example, equipping a class derivative with the UCLASS of the player is fairly easy.
2. From inside a UCLASS, use code of the following form to declare a UPROPERTY that allows selection of a UClass (Blueprint class) that derives from UObject in the hierarchy:

```cpp
UCLASS()
class CHAPTER2_API UUserProfile : public UObject
{
  GENERATED_BODY()

  // Displays any UClasses deriving from UObject
  // in a drop-down menu in Blueprints
  UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = Unit)
  TSubclassOf<UObject> UClassOfPlayer;

  // Displays string names of UCLASSes that derive from
  // the GameMode C++ base class
  UPROPERTY( EditAnywhere, meta=(MetaClass="GameMode"), Category = Unit )
  FStringClassReference UClassGameMode;
};
```

3. Blueprint the C++ class, and then open that Blueprint.
4. Click on the drop-down menu beside your UClassOfPlayer member.
5. Select the appropriate UClassOfPlayer from the drop-down menu of listed UClasses.

How it works…
TSubclassOf
The TSubclassOf< > member allows you to specify a UClass name using a drop-down menu inside the UE4 Editor when editing any Blueprints that have TSubclassOf< > members.

FStringClassReference
The MetaClass tag refers to the base C++ class from which you expect the UClassName to derive. This limits the drop-down menu's contents to only the Blueprints derived from that C++ class. You can leave the MetaClass tag out if you wish to display all the Blueprints in the project.
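To show how these references get used, here is a hedged sketch of instantiating whatever Blueprint class the designer picked; ConstructObject is the call named by this article (later engine versions use NewObject), and the helper function name is hypothetical:

```cpp
// Hedged sketch (function name is our placeholder): create an instance of
// the Blueprint class the designer selected in the UClassOfPlayer drop-down.
UObject* UUserProfile::CreatePlayerFromSelection()
{
  if ( !*UClassOfPlayer ) // TSubclassOf<> yields a UClass* (null if unset)
  {
    return nullptr;
  }

  // An FStringClassReference stores a path string and is resolved similarly:
  // UClass* GameModeClass = UClassGameMode.TryLoadClass<UObject>();

  return ConstructObject<UObject>( *UClassOfPlayer );
}
```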
Creating a Blueprint from your custom UCLASS
Blueprinting is just the process of deriving a Blueprint class from your C++ object. Creating Blueprint-derived classes from your UE4 objects allows you to edit the custom UPROPERTYs visually inside the editor. This avoids hardcoding any resources into your C++ code. In addition, in order for your C++ class to be placeable within the level, it must be Blueprinted first; but this is only possible if the C++ class underlying the Blueprint is an Actor class derivative.

There is a way to load resources (such as textures) using FStringAssetReferences and StaticLoadObject. These pathways to loading resources (by hardcoding path strings into your C++ code) are generally discouraged, however. Providing an editable value in a UPROPERTY(), and loading from a properly concretely-typed asset reference, is a much better practice.

Getting ready
You need to have a constructed UCLASS that you'd like to derive a Blueprint class from (see the section Making a UCLASS – Deriving from UObject earlier in this article) in order to follow this recipe. You must also have marked your UCLASS as Blueprintable in the UCLASS macro for Blueprinting to be possible inside the engine. Any UObject-derived class with the meta keyword Blueprintable in the UCLASS macro declaration will be Blueprintable.

How to do it…
1. To Blueprint your UserProfile class, first ensure that the UCLASS has the Blueprintable tag in the UCLASS macro. This should look as follows:

```cpp
UCLASS( Blueprintable )
class CHAPTER2_API UUserProfile : public UObject
```

2. Compile and run your code.
3. Find the UserProfile C++ class in the Class Viewer (Window | Developer Tools | Class Viewer). Since the previously created UCLASS does not derive from Actor, you must turn off Filters | Actors Only in the Class Viewer (checked by default) to find your custom UCLASS.

Turn off the Actors Only check mark to display all the classes in the Class Viewer. If you don't do this, your custom C++ class may not show! Keep in mind that you can use the small search box inside the Class Viewer to find the UserProfile class easily by starting to type it.

4. Find your UserProfile class in the Class Viewer, right-click on it, and create a Blueprint from it by selecting Create Blueprint…
5. Name your Blueprint. Some prefer to prefix the Blueprint class name with BP_. You may choose to follow this convention or not; just be sure to be consistent.
6. Double-click on your new Blueprint as it appears in the Content Browser, and take a look at it. You will be able to edit the Name and Email fields for each UserProfile Blueprint instance you create.

How it works…
Any C++ class you create that has the Blueprintable tag in its UCLASS macro can be Blueprinted within the UE4 Editor. A Blueprint allows you to customize properties on the C++ class in the visual GUI interface of UE4.

Summary
The UE4 code is, typically, very easy to write and manage once you know the patterns. The code we write to derive from another UCLASS, or to create a UPROPERTY or UFUNCTION, is very consistent. This article provided recipes for common UE4 coding tasks revolving around basic UCLASS derivation, property and reference declaration, construction, destruction, and general functionality.

Auditing Mobile Applications

Packt
08 Jul 2016
48 min read
In this article by Prashant Verma and Akshay Dikshit, authors of the book Mobile Device Exploitation Cookbook, we will cover the following topics:
- Auditing Android apps using static analysis
- Auditing Android apps using a dynamic analyzer
- Using Drozer to find vulnerabilities in Android applications
- Auditing iOS applications using static analysis
- Auditing iOS applications using a dynamic analyzer
- Examining iOS App Data storage and Keychain security vulnerabilities
- Finding vulnerabilities in WAP-based mobile apps
- Finding client-side injection
- Insecure encryption in mobile apps
- Discovering data leakage sources
- Other application-based attacks in mobile devices
- Launching intent injection in Android

Mobile applications, like web applications, may have vulnerabilities. These vulnerabilities are in most cases the result of bad programming practices or insecure coding techniques, or of purposefully injected bad code. For users and organizations, it is important to know how vulnerable their applications are: should they fix the vulnerabilities, or keep/stop using the applications? To address this dilemma, mobile applications need to be audited with the goal of uncovering vulnerabilities.

Mobile applications (Android, iOS, or other platforms) can be analyzed using static or dynamic techniques. Static analysis is conducted by employing certain text- or string-based searches across decompiled source code. Dynamic analysis is conducted at runtime, and vulnerabilities are uncovered in simulated fashion. Dynamic analysis is difficult compared to static analysis. In this article, we will employ both static and dynamic analysis to audit Android and iOS applications. We will also learn various other audit techniques, including Drozer framework usage, WAP-based application audits, and typical mobile-specific vulnerability discovery.

Auditing Android apps using static analysis
Static analysis is the most commonly and easily applied analysis method in source code audits. Static by definition means something that is constant. Static analysis is conducted on static code, that is, raw or decompiled source code or compiled (object) code, but the analysis is conducted without the runtime. In most cases, static analysis becomes code analysis via static string searches. A very common scenario is to figure out vulnerable or insecure code patterns and find the same in the entire application code.

Getting ready
For conducting static analysis of Android applications, we need at least one Android application and a static code scanner. Pick any Android application of your choice and use any static analyzer tool of your choice. In this recipe, we use Insecure Bank, which is a vulnerable Android application for Android security enthusiasts. We will also use ScriptDroid, a static analysis script. Both Insecure Bank and ScriptDroid are coded by Android security researcher Dinesh Shetty.

How to do it...
Perform the following steps:
1. Download the latest version of the Insecure Bank application from GitHub.
2. Decompress or unzip the .apk file and note the path of the unzipped application (see the note right after the script for one way to obtain the .java sources that the script expects).
3. Create a ScriptDroid.bat file by using the following code:

```bat
@ECHO OFF
SET /P Filelocation=Please Enter Location:
mkdir %Filelocation%OUTPUT

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type "%Filelocation%*.java" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt

:: Code to check for insecure usage of SharedPreferences
grep -H -i -n -C2 -e "putString" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"
grep -H -i -n -C2 -e "MODE_PRIVATE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Modeprivate.txt"
grep -H -i -n -C2 -e "MODE_WORLD_READABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldreadable.txt"
grep -H -i -n -C2 -e "MODE_WORLD_WRITEABLE" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Worldwritable.txt"
grep -H -i -n -C2 -e "addPreferencesFromResource" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\verify_sharedpreferences.txt"

:: Code to check for possible TapJacking attack
grep -H -i -n -e filterTouchesWhenObscured="true" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
grep -H -i -n -e "<Button" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\tapjackings.txt"
grep -H -i -n -v filterTouchesWhenObscured="true" "%Filelocation%OUTPUT\tapjackings.txt" >> "%Filelocation%OUTPUT\Temp_tapjacking.txt"
del %Filelocation%OUTPUT\Temp_tapjacking.txt

:: Code to check usage of external storage card for storing information
grep -H -i -n -e "WRITE_EXTERNAL_STORAGE" "%Filelocation%..\..\..\..\AndroidManifest.xml" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "getExternalStorageDirectory()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "sdcard" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\SdcardStorage.txt"

:: Code to check for possible javascript injection
grep -H -i -n -e "addJavascriptInterface()" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -e "setJavaScriptEnabled(true)" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_probableXss.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_probableXss.txt" >> "%Filelocation%OUTPUT\probableXss.txt"
del %Filelocation%OUTPUT\Temp_probableXss.txt

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_weakencryption.txt" >> "%Filelocation%OUTPUT\Weakencryption.txt"
del %Filelocation%OUTPUT\Temp_weakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -C3 "http://" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "HttpURLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "URLConnection" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -C3 -e "URL" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -e "TrustAllSSLSocket-Factory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "AllTrustSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "NonValidatingSSLSocketFactory" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_OtherUrlConnection.txt" >> "%Filelocation%OUTPUT\OtherUrlConnections.txt"
del %Filelocation%OUTPUT\Temp_OtherUrlConnection.txt
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_overhttp.txt" >> "%Filelocation%OUTPUT\UnencryptedTransport.txt"
del %Filelocation%OUTPUT\Temp_overhttp.txt

:: Code to check for Autocomplete ON
grep -H -i -n -e "<Input" "%Filelocation%..\..\..\..\res\layout\*.xml" >> "%Filelocation%OUTPUT\Temp_autocomp.txt"
grep -H -i -n -v "textNoSuggestions" "%Filelocation%OUTPUT\Temp_autocomp.txt" >> "%Filelocation%OUTPUT\AutocompleteOn.txt"
del %Filelocation%OUTPUT\Temp_autocomp.txt

:: Code to check presence of possible SQL Content
grep -H -i -n -e "rawQuery" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "compileStatement" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "db" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "sqlite" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "database" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "insert" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "delete" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "select" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "table" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "cursor" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -v "import" "%Filelocation%OUTPUT\Temp_sqlcontent.txt" >> "%Filelocation%OUTPUT\Sqlcontents.txt"
del %Filelocation%OUTPUT\Temp_sqlcontent.txt

:: Code to check for Logging mechanism
grep -H -i -n -F "Log." "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for Information in Toast messages
grep -H -i -n -e "Toast.makeText" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Toast.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Toast.txt" >> "%Filelocation%OUTPUT\Toast_content.txt"
del %Filelocation%OUTPUT\Temp_Toast.txt

:: Code to check for Debugging status
grep -H -i -n -e "android:debuggable" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone|mdn|did|IMSI|uuid" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -H -i -n -e "getLastKnownLocation()|requestLocationUpdates()|getLatitude()|getLongitude()|LOCATION" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for possible Intent Injection
grep -H -i -n -C3 -e "Action.getIntent(" "%Filelocation%*.java" >> "%Filelocation%OUTPUT\IntentValidation.txt"
```
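A note on step 2: unzipping an .apk yields compiled DEX bytecode, not the .java sources that the script greps. One common way to obtain Java sources (our suggestion; the original recipe does not name a tool) is the open source decompiler jadx; the .apk filename below is illustrative:

```sh
# Decompile the APK so *.java files exist for ScriptDroid to scan
jadx -d InsecureBankv2_src InsecureBankv2.apk
```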
How it works...
Go to the command prompt and navigate to the path where ScriptDroid is placed. Run the .bat file; it prompts you to input the path of the application on which you wish to perform static analysis. In our case, we provide it with the path of the Insecure Bank application, precisely the path where the Java files are stored. If everything worked correctly, the screen should look like the following:

The script generates a folder by the name OUTPUT in the path where the Java files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion.

The combination of ScriptDroid and Insecure Bank gives a very nice view of various Android vulnerabilities; usually the same is not possible with live apps. Consider the following points, for instance:
- Weakencryption.txt lists the instances of Base64 encoding used for passwords in the Insecure Bank application
- Logging.txt contains the list of insecure log functions used in the application
- SdcardStorage.txt contains the code snippets pertaining to data storage on SD cards

Details like these from static analysis are eye-openers in letting us know of the vulnerabilities in our application, without even running the application.

There's more...
The current recipe used just ScriptDroid, but there are many other options available. You can either choose to write your own script or use one of the free or commercial tools. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also
- https://github.com/dineshshetty/Android-InsecureBankv2
- Auditing iOS application using static analysis
Auditing Android apps using a dynamic analyzer
Dynamic analysis is another technique applied in source code audits. Dynamic analysis is conducted at runtime: the application is run or simulated, and the flaws or vulnerabilities are discovered while the application is running. Dynamic analysis can be tricky, especially in the case of mobile platforms. As opposed to static analysis, dynamic analysis has certain requirements, such as an analyzer environment that provides the runtime or a simulation of the real runtime.

Dynamic analysis can be employed to find vulnerabilities in Android applications which are difficult to find via static analysis. A static analysis may let you know that a password is going to be stored; dynamic analysis reads the memory and reveals the password stored at runtime. Dynamic analysis is also helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. Some Android applications employ obfuscation to prevent attackers from reading the code; dynamic analysis changes the whole game in such cases, by revealing the hardcoded data being sent out in requests, which is otherwise not readable in static analysis.

Getting ready
For conducting dynamic analysis of Android applications, we need at least one Android application and a dynamic code analyzer tool. Pick any Android application of your choice and use any dynamic analyzer tool of your choice. Dynamic analyzer tools can be classified under two categories:
- Tools which run from computers and connect to an Android device or emulator (to conduct dynamic analysis)
- Tools that run on the Android device itself

For this recipe, we choose a tool belonging to the latter category.

How to do it...
Perform the following steps for conducting dynamic analysis:
1. Have an Android device with the applications (to be analyzed dynamically) installed.
2. Go to the Play Store and download Andrubis. Andrubis is a tool from iSecLabs which runs on Android devices and conducts static, dynamic, and URL analysis on the installed applications. We will use it for dynamic analysis only in this recipe.
3. Open the Andrubis application on your Android device. It displays the applications installed on the Android device and analyzes them.

How it works...
Open the analysis of the application of your interest. Andrubis computes an overall malice score (out of 10) for each application and displays a color icon on its main screen to flag vulnerable applications. We selected an orange-colored application to make more sense with this recipe. This is how the application summary and score are shown in Andrubis:

Let us navigate to the Dynamic Analysis tab and check the results:

The results are interesting for this application. Notice that all the files that the application under dynamic analysis is going to write are listed. In our case, one preferences.xml is located. Though the fact that the application is going to create a preferences file could have been found in static analysis as well, dynamic analysis additionally confirmed that such a file is indeed created. It also confirms that the code snippet found in static analysis about the creation of a preferences file is not dormant code, but a file that is actually going to be created. Further, go ahead and read the created file and find any sensitive data present there. Who knows, luck may strike and give you a key to hidden treasure.

Notice that the first screen has a hyperlink, View full report in browser. Tap on it and notice that the detailed dynamic analysis is presented for your further analysis. This also lets you understand what the tool tried and what response it got. This is shown in the following screenshot:

There's more...
The current recipe used a dynamic analyzer belonging to the latter category. There are many other tools available in the former category. Since this is the Android platform, many of them are open source tools. DroidBox can be tried for dynamic analysis. It looks for file operations (read/write), network data traffic, SMS, permissions, broadcast receivers, and so on, among other checks. Hooker is another tool that can intercept and modify API calls initiated by the application. This is very useful in dynamic analysis. Try hooking and tampering with data in API calls.

See also
- https://play.google.com/store/apps/details?id=org.iseclab.andrubis
- https://code.google.com/p/droidbox/
- https://github.com/AndroidHooker/hooker
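To read the preferences.xml file that the dynamic run revealed, you can copy it off the device. A minimal sketch, assuming a debuggable target app (or a rooted device); the package name is a placeholder:

```sh
# For a debuggable app, run-as can read its private data directory
adb shell run-as com.example.targetapp cat /data/data/com.example.targetapp/shared_prefs/preferences.xml > preferences.xml

# On a rooted device or emulator, a direct pull also works
adb pull /data/data/com.example.targetapp/shared_prefs/preferences.xml
```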
Using Drozer to find vulnerabilities in Android applications
Drozer is a mobile security audit and attack framework maintained by MWR InfoSecurity. It is a must-have tool in the tester's armory. Drozer's agent (an installed Android application) interacts with other Android applications via IPC (Inter-Process Communication). It allows fingerprinting of application package-related information and of the application's attack surface, and attempts to exploit them. Drozer is an attack framework, and advanced exploits can be conducted from it. Here, we use Drozer to find vulnerabilities in our applications.

Getting ready
Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and follow the installation instructions mentioned in the user guide. Install the Drozer console agent and start a session as mentioned in the user guide. If your installation is correct, you should get the Drozer command prompt (dz>). You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application.

How to do it...
Every pentest starts with fingerprinting. Let us use Drozer for the same. The Drozer User Guide is very helpful for referring to the commands. The following command can be used to obtain information about an Android application package:

run app.package.info -a <package name>

We used the same to extract the information from the GoatDroid application and found the following results. Notice that apart from the general information about the application, User Permissions are also listed by Drozer.

Further, let us analyze the attack surface. Drozer's attack surface lists the exposed activities, broadcast receivers, content providers, and services. The unintentionally exposed ones may be a critical security risk and may provide you access to privileged content. Drozer has the following command to analyze the attack surface:

run app.package.attacksurface <package name>

We used the same to obtain the attack surface of the Herd Financial application of GoatDroid, and the results can be seen in the following screenshot. Notice that one Activity and one Content Provider are exposed. We chose to attack the content provider to obtain the data stored locally. We used the following Drozer command to analyze the content provider of the same application:

run app.provider.info -a <package name>

This gave us the details of the exposed content provider, which we used in another Drozer command:

run scanner.provider.finduris -a <package name>

We could successfully query the content provider. Lastly, we would be interested in stealing the data stored by this content provider. This is possible via another Drozer command:

run app.provider.query content://<content provider details>/

The entire sequence of events is shown in the following screenshot:

How it works...
ADB is used to establish a connection between the Drozer Python server (present on the computer) and the Drozer agent (.apk file installed on the emulator or Android device). The Drozer console is initialized to run the various commands we saw. The Drozer agent utilizes the Android OS feature of IPC to take over the role of the target application and run the various commands as the original application.

There's more...
Drozer not only allows users to obtain the attack surface and steal data via content providers or launch intent injection attacks; it goes way beyond that. It can be used to fuzz the application and to cause local injection attacks by providing a way to inject payloads. Drozer can also be used to run various built-in exploits, and can be utilized to attack Android applications via custom-developed exploits. Further, it can also run in Infrastructure mode, allowing remote connections and remote attacks.

See also
- Launching intent injection in Android
- https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
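Putting the recipe together, a hedged end-to-end console session might look like the following; the package name is a placeholder for whatever Drozer's own output reports, and the final content URI comes from the finduris scan:

```
dz> run app.package.info -a com.example.herdfinancial
dz> run app.package.attacksurface com.example.herdfinancial
dz> run app.provider.info -a com.example.herdfinancial
dz> run scanner.provider.finduris -a com.example.herdfinancial
dz> run app.provider.query content://<uri reported by finduris>/
```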
Auditing iOS application using static analysis
Static analysis in source code reviews is an easier technique, and employing static string searches makes it convenient to use. Static analysis is conducted on the raw or decompiled source code or on the compiled (object) code, but the analysis is conducted outside of runtime. Usually, static analysis figures out vulnerable or insecure code patterns.

Getting ready
For conducting static analysis of iOS applications, we need at least one iOS application and a static code scanner. Pick any iOS application of your choice and use any static analyzer tool of your choice. We will use iOS-ScriptDroid, a static analysis script developed by Android security researcher Dinesh Shetty.

How to do it...
1. Keep the decompressed iOS application files ready and note the path of the folder containing the .m files.
2. Create an iOS-ScriptDroid.bat file by using the following code:

```bat
ECHO Running ScriptDroid ...
@ECHO OFF
SET /P Filelocation=Please Enter Location:
:: SET Filelocation=Location of the folder containing all the .m files eg: C:\sourcecode\project iOS\xyz\
mkdir %Filelocation%OUTPUT

:: Code to check for Sensitive Information storage in Phone memory
grep -H -i -n -C2 -e "NSFile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"
grep -H -i -n -e "writeToFile " "%Filelocation%*.m" >> "%Filelocation%OUTPUT\phonememory.txt"

:: Code to check for possible Buffer overflow
grep -H -i -n -e "strcat(|strcpy(|strncat(|strncpy(|sprintf(|vsprintf(|gets(" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BufferOverflow.txt"

:: Code to check for usage of URL Schemes
grep -H -i -n -C2 "openUrl|handleOpenURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\URLSchemes.txt"

:: Code to check for possible javascript injection
grep -H -i -n -e "webview" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\probableXss.txt"

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\tweakencryption.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\tweakencryption.txt" >> "%Filelocation%OUTPUT\weakencryption.txt"
del %Filelocation%OUTPUT\tweakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -e "http://" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\overhttp.txt"
grep -H -i -n -e "NSURL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "URL" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "writeToUrl" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "NSURLConnection" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "CFStream" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "NSStreamin" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "setAllowsAnyHTTPSCertificate|kCFStreamSSLAllowsExpiredRoots|kCFStreamSSLAllowsExpiredCertificates" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "kCFStreamSSLAllowsAnyRoot|continueWithoutCredentialForAuthenticationChallenge" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\BypassSSLvalidations.txt"
:: TODO: add check for "didFailWithError"

:: Code to check presence of possible SQL Content
grep -H -i -F -e "db" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "database" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "insert" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "delete" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "select" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "table" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "cursor" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_prepare" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_compile" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\sqlcontent.txt"

:: Code to check for presence of keychain usage source code
grep -H -i -n -e "kSecAttr|SFHFKey" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Keychain.txt"

:: Code to check for Logging mechanism
grep -H -i -n -F "NSLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "XLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"
grep -H -i -n -F "ZNLog" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Logging.txt"

:: Code to check for presence of password in source code
grep -H -i -n -e "password|pwd" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\password.txt"

:: Code to check for Debugging status
grep -H -i -n -e "#ifdef DEBUG" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers (needs more work)
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone|mdn|did|IMSI|uuid" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%OUTPUT\Device_Identifier.txt"
del %Filelocation%OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -H -i -n -e "CLLocationManager|startUpdatingLocation|locationManager|didUpdateToLocation|CLLocationDegrees|CLLocation|CLLocationDistance|startMonitoringSignificantLocationChanges" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\LocationInfo.txt"

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%*.m" >> "%Filelocation%OUTPUT\Temp_comment.txt"
type "%Filelocation%*.m" | gawk "/\/\*/,/\*\//" >> "%Filelocation%OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%OUTPUT\Temp_comment.txt" >> "%Filelocation%OUTPUT\SinglelineComments.txt"
del %Filelocation%OUTPUT\Temp_comment.txt
```

How it works...
Go to the command prompt and navigate to the path where iOS-ScriptDroid is placed. Run the batch file; it prompts you to input the path of the application for which you wish to perform static analysis. In our case, we arbitrarily chose an application and input the path of its implementation (.m) files.

The script generates a folder by the name OUTPUT in the path where the .m files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion.

iOS-ScriptDroid gives first-hand information on the various iOS application vulnerabilities present in the current application. For instance, here are a few which are specific to the iOS platform:
- BufferOverflow.txt contains usages of harmful functions with missing buffer limits, such as strcat, strcpy, and so on, found in the application.
- URL schemes, if implemented in an insecure manner, may result in access-related vulnerabilities. Usage of URL schemes is listed in URLSchemes.txt.

These are useful vulnerability details to know in iOS applications via static analysis.
There's more...
The current recipe used just iOS-ScriptDroid, but there are many other options available. You can either choose to write your own script or use one of the free or commercial tools available. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also
- Auditing Android apps using static analysis

Auditing iOS application using a dynamic analyzer
Dynamic analysis is the runtime analysis of an application. The application is run or simulated to discover flaws during runtime. Dynamic analysis can be tricky, especially in the case of mobile platforms. Dynamic analysis is helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. In applications that use custom encryption to prevent attackers from reading the data, dynamic analysis is useful in revealing the encrypted data, which can be reverse-engineered. Note that since iOS applications cannot be decompiled to the full extent, dynamic analysis becomes even more important in finding sensitive data which could have been hardcoded.

Getting ready
For conducting dynamic analysis of iOS applications, we need at least one iOS application and a dynamic code analyzer tool. Pick any iOS application of your choice and use any dynamic analyzer tool of your choice. In this recipe, let us use the open source tool Snoop-it. We will use an iOS app that locks files, which can only be unlocked and viewed using a PIN, pattern, or secret question and answer. Let us see if we can analyze this app and find a security flaw in it using Snoop-it. Please note that Snoop-it only works on jailbroken devices. To install Snoop-it on your iDevice, visit https://code.google.com/p/snoop-it/wiki/GettingStarted?tm=6. We have downloaded Locker Lite from the App Store onto our device for analysis.

How to do it...
Perform the following steps to conduct dynamic analysis of iOS applications:
1. Open the Snoop-it app by tapping on its icon, and navigate to Settings. Here you will see the URL through which the interface can be accessed from your machine. Please note the URL, for we will be using it soon. We have disabled authentication for our ease.
2. Now, on the iDevice, tap on Applications | Select App Store Apps and select the Locker app.
3. Press the home button, and open the Locker app. Note that on entering a wrong PIN, we do not get further access.
4. Making sure the workstation and iDevice are on the same network, open the previously noted URL in any browser. This is how the interface will look:
5. Click on the Objective-C Classes link under Analysis in the left-hand panel.
6. Now, click on SM_LoginManagerController. Class information gets loaded in the panel to the right of it.
7. Navigate down until you see -(void) unlockWasSuccessful and click on the radio button preceding it. This method has now been selected.
8. Next, click on the Setup and invoke button on the top-right of the panel. In the window that appears, click on the Invoke Method button at the bottom.
9. As soon as we click on the button, we notice that the authentication has been bypassed, and we can view our locked file successfully.

How it works...
Snoop-it loads all the classes that are in the app, and indicates the ones that are currently operational with a green color. Since we want to bypass the current login screen and load directly into the main page, we look for UIViewController. Inside UIViewController, we see SM_LoginManagerController, which could contain methods relevant to authentication.
On observing the class, we see various methods such as numberLoginSucceed, patternLoginSucceed, and many others. The app calls the unlockWasSuccessful method when a PIN code is entered successfully. So, when we invoke this method from our machine and the function is called directly, the app loads the main page successfully.

There's more...
The current recipe used just one dynamic analyzer, but other options and tools can also be employed. There are many challenges in doing dynamic analysis of iOS applications. You may like to use multiple tools, and not rely on just one, to overcome them.

See also
- https://code.google.com/p/snoop-it/
- Auditing Android apps using a dynamic analyzer

Examining iOS App Data storage and Keychain security vulnerabilities
Keychain in iOS is an encrypted SQLite database that uses a 128-bit AES algorithm to hold identities and passwords. On any iOS device, the Keychain SQLite database is used to store user credentials such as usernames, passwords, encryption keys, certificates, and so on. Developers use this service API to instruct the operating system to store sensitive data securely, rather than using a less secure alternative storage mechanism such as a property list file or a configuration file. In this recipe, we will be analyzing a Keychain dump to discover stored credentials.

Getting ready
Please follow the given steps to prepare for Keychain dump analysis:
1. Jailbreak the iPhone or iPad.
2. Ensure the SSH server is running on the device (default after jailbreak).
3. Download the keychain_dumper binary from https://github.com/ptoomey3/Keychain-Dumper
4. Connect the iPhone and the computer to the same Wi-Fi network.
5. On the computer, SSH into the iPhone by typing the iPhone's IP address, the username root, and the password alpine.

How to do it...
Follow these steps to examine security vulnerabilities in iOS:
1. Copy keychain_dumper onto the iPhone or iPad by issuing the following command:

scp keychain_dumper root@<device ip>:/private/var/tmp

Alternatively, WinSCP on Windows can be used to do the same.
2. Once the binary has been copied, ensure keychain-2.db has read access:

chmod +r /private/var/Keychains/keychain-2.db

This is shown in the following screenshot:
3. Give execute rights to the binary:

chmod 777 /private/var/tmp/keychain_dumper

4. Now, we simply run keychain_dumper:

/private/var/tmp/keychain_dumper

This command dumps all keychain information, which will contain all the generic and Internet passwords stored in the keychain.

How it works...
The keychain on an iOS device is used to securely store sensitive information such as credentials (usernames, passwords, authentication tokens for different applications, and so on), along with connectivity (Wi-Fi/VPN) credentials. It is located on iOS devices as an encrypted SQLite database file at /private/var/Keychains/keychain-2.db. Insecurity arises when application developers use this feature of the operating system to store credentials, rather than storing them themselves in NSUserDefaults, .plist files, and so on. To give users the convenience of not having to log in every time, and hence to save the credentials on the device itself, the keychain information for every app is stored outside of its sandbox.

There's more...
This analysis can also be performed for specific apps dynamically, using tools such as Snoop-it. Follow the steps to hook Snoop-it to the target app, click on Keychain Values, and analyze the attributes to see their values revealed in the Keychain. More will be discussed in further recipes.
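The dump can be long; a hedged way to narrow it down is to filter the output in the SSH session, for example:

```sh
# Filter the keychain dump for entries that look like credentials
/private/var/tmp/keychain_dumper | grep -i -A 4 "password"
```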
Finding vulnerabilities in WAP-based mobile apps
WAP-based mobile applications are mobile applications or websites that run in mobile browsers. Most organizations create a lightweight version of their complex websites to run easily and appropriately in mobile browsers. For example, a hypothetical company called ABCXYZ may have its main website at www.abcxyz.com, while its mobile website takes the form m.abcxyz.com. Note that the mobile website (or WAP app) is separate from the installable application form, such as an .apk on Android. Since mobile websites run in browsers, it is very logical to say that most of the vulnerabilities applicable to web applications are applicable to WAP apps as well. However, there are caveats: exploitability and risk ratings may not be the same, and not all attacks may be directly applied or conducted.

Getting ready
For this recipe, make sure you are ready with the following set of tools (in the case of Android):
- ADB
- WinSCP
- Putty
- A rooted Android mobile
- An SSH proxy application installed on the Android phone

Let us see the common WAP application vulnerabilities. While discussing these, we will limit ourselves to mobile browsers only:
- Browser cache: Android browsers store cache in two different parts: content cache and component cache. Content cache may contain basic frontend components such as HTML, CSS, or JavaScript. Component cache contains sensitive data such as the details to be populated once the content cache is loaded. You have to locate the browser cache folder and find sensitive data in it.
- Browser memory: Browser memory refers to the location used by browsers to store data. Memory is usually long-term storage, while cache is short-term. Browse through the browser memory space for various files such as .db, .xml, .txt, and so on, and check all these files for the presence of sensitive data.
- Browser history: Browser history contains the list of URLs browsed by the user. These URLs, in GET request format, contain parameters. Again, our goal is to locate a URL with sensitive data for our WAP application.
- Cookies: Cookies are a mechanism for websites to keep track of user sessions. Cookies are stored locally on devices. The following are the security concerns with respect to cookie usage:
  - Sometimes a cookie contains sensitive information
  - Weak cookie attributes may weaken the application's security
  - Cookie stealing may lead to session hijacking

How to do it...
Browser cache:
1. The Android browser cache can be found at /data/data/com.android.browser/cache/webviewcache/. You can use either ADB to pull the data from webviewcache, or WinSCP/Putty to connect to an SSH application on a rooted Android phone. Either way, you will land in the webviewcache folder and find arbitrarily named files. Refer to the highlighted section in the following screenshot:
2. Rename the extension of the arbitrarily named files to .jpg and you will be able to view the cache in screenshot format. Search through all the files for sensitive data pertaining to the WAP app you are auditing.

Browser memory:
Like an Android application, the browser also has a memory space under the /data/data folder, by the name com.android.browser (the default browser). Here is how a typical browser memory space looks:

Make sure you traverse through all the folders to find useful sensitive data in the context of the WAP application you are looking for.
Browser history: go to the browser, locate the options, navigate to History, and inspect the URLs present there.

Cookies: the files containing cookie values can be found at /data/data/com.android.browser/databases/webview.db. These DB files can be opened with the SQLite Browser tool, and the cookies can be obtained.

There's more...

Apart from the primary vulnerabilities described here, which are mainly concerned with browser usage, all other web application vulnerabilities which are related to, exploited from, or exploited within a browser are applicable and need to be tested:

- Cross-site scripting, the result of a browser executing unsanitized harmful scripts reflected by the server, is very much valid for WAP applications.
- The autocomplete attribute not being set to off may result in sensitive data being remembered by the browser for returning users. This, again, is a source of data leakage.
- Browser thumbnails and the image buffer are other places to look for data.

Above all, the web application vulnerabilities that do not relate to browser usage also apply. These include OWASP Top 10 vulnerabilities such as SQL injection attacks, broken authentication and session management, and so on. Business logic validation is another important check to attempt to bypass. All of these tests are possible by setting a proxy for the browser and playing around with the mobile traffic. This discussion has centered on Android, but all of it is fully applicable to the iOS platform when testing WAP applications. The approach, testing steps, and locations will vary, but all the vulnerabilities still apply. You may want to try out the iExplorer and plist editor tools when working with an iPhone or iPad.

See also

http://resources.infosecinstitute.com/browser-based-vulnerabilities-in-web-applications/

Finding client-side injection

Client-side injection is a new dimension to the mobile threat landscape. Client-side injection (also known as local injection) results from injecting malicious payloads into local storage to reveal data outside the usual workflow of the mobile application. If ' or '1'='1 is injected into a search parameter of a mobile application, where the search functionality is built to query a local SQLite DB file, and this reveals all the data stored in the corresponding table of the SQLite DB, then client-side SQL injection has succeeded. Notice that the payload did not go to the server-side database (which could be Oracle or MSSQL) but to the local database (SQLite) on the mobile device. Since both the injection point and the injectable target are local (that is, on the mobile device), the attack is called client-side injection.

Getting ready

To get ready to find client-side injection, have a few mobile applications ready to be audited, along with the tools used in many other recipes throughout this book. Note that client-side injection is not easy to find, on account of the complexities involved; many times you will have to fine-tune your approach based on the first signs of success.

How to do it...

The prerequisites for a client-side injection vulnerability in a mobile app are the presence of local storage and an application feature which queries that local storage. For convenience, let us start with client-side SQL injection, which is easy to grasp for anyone who knows SQL injection in web apps. Let us take the case of a mobile banking application which stores branch details in a local SQLite database, and which provides a search feature for users wishing to find a branch.
Now, if a person types in the city as Mumbai, the city parameter is populated with the value Mumbai, which is dynamically added to the SQLite query. The query is built and retrieves the branch list for Mumbai city. (Purely local features like this are usually provided for a faster user experience and to conserve network bandwidth.) Now, if a user is able to inject harmful payloads into the city parameter, such as a wildcard character or a SQLite DROP TABLE payload, and the payload executes, revealing all the details (in the case of a wildcard) or dropping the table from the DB (in the case of a DROP TABLE payload), then you have successfully exploited client-side SQL injection. A minimal sketch of this appears at the end of this recipe.

Another type of client-side injection, presented in the OWASP Mobile Top 10 release, is local cross-site scripting (XSS). Refer to slide number 22 of the original OWASP PowerPoint presentation here: http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks. They refer to it as "garden variety" XSS and present a code snippet wherein SMS text is accepted locally and printed to the UI. If a script is entered in the SMS text, it results in local XSS (JavaScript injection).

There's more...

In a similar fashion, HTML injection is also possible. If an HTML file contained in the application's local storage can be compromised to contain malicious code, and the application has a feature which loads or executes this HTML file, HTML injection is possible locally. A variant of the same may result in Local File Inclusion (LFI) attacks. If data is stored in the form of XML files on the mobile device, local XML injection can also be attempted. More variants of these attacks are possible. Finding client-side injection is quite difficult and time consuming; it may require both static and dynamic analysis approaches, and most scanners do not support discovery of client-side injection.

Another dimension to client-side injection is its impact, which is judged to be low in most cases. There is a strong counterargument to this vulnerability: if the entire local storage can be obtained easily on Android, why conduct client-side injection at all? I agree with this argument in most cases; if the entire SQLite or XML file can be stolen from the phone, why spend time searching for a variable that accepts a wildcard to reveal the data from that file? However, you should still look out for this vulnerability, as HTML injection and LFI-style attacks allow the insertion of malware-corrupted files and hence can have real impact. Also, there are platforms such as iOS where stealing the local storage is sometimes very difficult; in such cases, client-side injection may come in handy.
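Here is the promised minimal Python sketch of the branch-search flaw described above, reproduced against a throwaway SQLite database; the table and data are invented for the demo:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branches (city TEXT, address TEXT)")
conn.executemany("INSERT INTO branches VALUES (?, ?)",
                 [("Mumbai", "MG Road"), ("Delhi", "CP"), ("Pune", "FC Road")])

def search_vulnerable(city):
    # Vulnerable: user input is concatenated straight into the query.
    query = "SELECT * FROM branches WHERE city = '%s'" % city
    return conn.execute(query).fetchall()

def search_safe(city):
    # Safe: parameter binding treats the payload as a literal value.
    return conn.execute(
        "SELECT * FROM branches WHERE city = ?", (city,)).fetchall()

payload = "x' OR '1'='1"
print(search_vulnerable(payload))  # returns every branch: injection works
print(search_safe(payload))        # returns nothing

See also

- https://www.owasp.org/index.php/Mobile_Top_10_2014-M7
- http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks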
Insecure encryption in mobile apps

Encryption is one of the most misused terms in information security. Some people confuse it with hashing, while others may implement encoding and call it encryption. Symmetric key and asymmetric key encryption are the two types of encryption scheme. Mobile applications implement encryption to protect sensitive data in storage and in transit. While doing audits, your goal should be to uncover weak encryption implementations, or the so-called encoding and other weaker forms implemented in places where proper encryption should have been used. Try to circumvent the encryption implemented in the mobile application under audit.

Getting ready

Be ready with a few mobile applications and tools such as ADB, other file and memory readers, decompilers, decoding tools, and so on.

How to do it...

There are multiple types of faulty encryption implementation in mobile applications, and different ways to discover each of them:

- Encoding (instead of encryption): Many times, mobile app developers simply implement Base64 or URL encoding in applications (an example of security by obscurity). Such encoding can be discovered simply by doing static analysis; you can use the script discussed in the first recipe of this article to find such encoding algorithms. Dynamic analysis will help you obtain the locally stored data in encoded format. Decoders for these well-known encoding algorithms are freely available, and using any of them you will be able to uncover the original value. Such an implementation is therefore not a substitute for encryption.
- Serialization (instead of encryption): Another variation of faulty implementation is serialization, the process of converting data objects to a byte stream. The reverse process, deserialization, is also very simple, and the original data can be obtained easily. Static analysis may help reveal implementations using serialization.
- Obfuscation (instead of encryption): Obfuscation suffers from similar problems; obfuscated values can be deobfuscated.
- Hashing (instead of encryption): Hashing is a one-way process using a standard complex algorithm. These one-way hashes suffer from a major problem: they can be replayed (without needing to recover the original data). Also, rainbow tables can be used to crack the hashes. Like the other techniques described previously, hashing usage in mobile applications can be discovered via static analysis. Dynamic analysis may additionally be employed to reveal the one-way hashes stored locally.

How it works...

To understand insecure encryption in mobile applications, let us take a live case we observed.

An example of weak custom implementation

While testing a live mobile banking application, my colleagues and I came across a scenario where the userid and MPIN combination was sent using custom encoding logic. The encoding logic here was based on character-by-character replacement, according to a built-in mapping. For example:

- 2 is replaced by 4
- 0 is replaced by 3
- 3 is replaced by 2
- 7 is replaced by =
- a is replaced by R
- A is replaced by N

As you can see, there is no logic to the replacement. Until you uncover or decipher the whole built-in mapping, you won't succeed. A simple technique is to supply all possible characters one by one and watch the responses. Let's input a userid and PIN of 222222 and 2222 and notice that the converted userid and PIN are 444444 and 4444 respectively, as per the mapping above. Keep changing the inputs, and you will recreate the full mapping used by the application. Now steal the user's encoded data and apply the recovered mapping to it, thereby uncovering the original data. This whole approach is nicely described in the article mentioned in the See also section of this recipe.

This is a custom example of a faulty implementation related to encryption. Such faults are often difficult to find in static analysis, especially in hard-to-reverse apps such as iOS applications. The chances of automated dynamic analysis discovering this are also slim.
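A small scripted sketch of the mapping-recovery technique just described; the sample pairs are the ones listed above, and everything else is invented for the demo:

observed = {            # input char -> encoded char, learned pair by pair
    "2": "4", "0": "3", "3": "2", "7": "=", "a": "R", "A": "N",
}
inverse = {v: k for k, v in observed.items()}

def decode(ciphertext):
    # Any character we have not mapped yet is shown as '?'.
    return "".join(inverse.get(ch, "?") for ch in ciphertext)

stolen_pin = "4444"
print(decode(stolen_pin))   # -> 2222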
Manual testing and analysis, alongside dynamic or automated analysis, stands a better chance of uncovering such custom implementations.

There's more...

Finally, I will share another application we came across. This one used proper encryption: the encryption algorithm was a well-known secure algorithm and the key was strong. Still, the whole encryption process could be reversed. The application made two mistakes, and we combined them to break the encryption:

- The application code shipped the standard encryption algorithm in the APK bundle. Not even obfuscation was used to protect at least the names. We used the simple APK to DEX to JAR conversion process to uncover the algorithm details.
- The application stored the strong encryption key in a local XML file under the /data/data folder of the Android device. We used ADB to read this XML file and hence obtained the encryption key.

According to Kerckhoffs' principle, the security of a cryptosystem should depend solely on the secrecy of the key and the private randomizer. This is how all encryption algorithms are implemented: the key is the secret, not the algorithm. In our scenario, we could obtain the key and knew the name of the encryption algorithm, which is enough to break a strong encryption implementation.
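A minimal Python sketch of combining those two mistakes: read the AES key the app left in its local XML file, then decrypt its stored data. The file names, the CBC mode, and the prepended-IV handling are assumptions for the demo; it uses the pycryptodome package:

import base64
import xml.etree.ElementTree as ET
from Crypto.Cipher import AES

# prefs.xml pulled earlier, e.g. adb pull /data/data/<app>/shared_prefs/prefs.xml
root = ET.parse("prefs.xml").getroot()
key = base64.b64decode(root.findtext(".//string[@name='enc_key']"))

ciphertext = base64.b64decode(open("stolen_blob.b64").read())
iv, body = ciphertext[:16], ciphertext[16:]   # assuming the IV is prepended

plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(body)
print(plaintext.rstrip(b"\x00"))              # strip assumed zero padding

See also

http://www.paladion.net/index.php/mobile-phone-data-encryption-why-is-it-necessary/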
Discovering data leakage sources

Data leakage risk worries organizations across the globe, and they have been implementing solutions to prevent it. In the case of mobile applications, we first have to think about what the possible sources or channels of data leakage are. Once these are clear, we devise or adopt a technique to uncover each of them.

Getting ready

As in the other recipes, you need a bunch of applications (to be analyzed), an Android device or emulator, ADB, a DEX to JAR converter, Java decompilers, and WinRAR or WinZip.

How to do it...

To identify the data leakage sources, list all the possible sources you can think of for the mobile application under audit. In general, mobile applications have the following channels of potential data leakage:

- Files stored locally
- Client-side source code
- Mobile device logs
- Web caches
- Console messages
- Keystrokes
- Sensitive data sent over HTTP

How it works...

The next step is to uncover the data leakage vulnerabilities in each of these potential channels. Let us look at the seven channels identified previously (a scripted check for the device-log channel follows this list):

1. Files stored locally: By this time, readers are very familiar with this. Data is stored locally in files such as shared preferences, XML files, SQLite DBs, and other files. In Android, these are located inside the application folder under the /data/data directory and can be read using tools such as ADB. In iOS, tools such as iExplorer or SSH can be used to read the application folder.
2. Client-side source code: Mobile application source code is present locally on the mobile device itself. Code commonly hardcodes data, and a frequent mistake is hardcoding sensitive data (either knowingly or unknowingly). In the field, we came across an application which had hardcoded the connection key to the connected PoS terminal. Hardcoded formulas to calculate a certain figure, which should ideally have been present in the server-side code, have also been found in mobile apps. Database instance names and credentials are another possibility, where the mobile app directly connects to a server datastore. In Android, the source code is quite easy to decompile via a two-step process: APK to DEX and DEX to JAR conversion. In iOS, the source code of header files can be decompiled up to a certain level using tools such as classdump-z or otool. Once the raw source code is available, a static string search can be employed to discover sensitive data in the code.
3. Mobile device logs: All devices create local logs to store crash and other information, which can be used to debug or analyze a security violation. Poor coding may put sensitive data in local logs, and data can hence be leaked from there as well. The ADB command adb logcat can be used to read the logs on Android devices. If you run the same command against the Vulnerable Bank application, you will notice the user credentials in the logs, as shown in the following screenshot:
4. Web caches: Web caches may also contain sensitive data related to the web components used in mobile apps. We discussed how to discover this in the WAP recipe earlier in this article.
5. Console messages: Console messages are used by developers to print messages to the console while application development and debugging are in progress. Console messages, if not turned off before the application goes live, may be another source of data leakage. They can be checked by running the application in debug mode.
6. Keystrokes: Certain mobile platforms have been known to cache keystrokes. A malware or keystroke logger may take advantage of this and steal a user's keystrokes, making it another data leakage source. Malware analysis needs to be performed to uncover embedded or pre-shipped malware or keystroke loggers in the application. Dynamic analysis also helps.
7. Sensitive data sent over HTTP: Applications either send sensitive data over HTTP or use a weak implementation of SSL. In either case, sensitive data leakage is possible. The usage of HTTP can be found via static analysis, by searching for HTTP strings. Dynamic analysis, capturing the packets at runtime, also reveals whether traffic travels over HTTP or HTTPS. There are various weak SSL implementations and downgrade attacks, which make data vulnerable to sniffing and hence to leakage.
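Here is the promised minimal Python sketch for the device-log channel: it streams adb logcat and flags lines that look like they leak credentials. The keyword list is illustrative, not exhaustive:

import re
import subprocess

PATTERN = re.compile(r"(password|passwd|pwd|pin|token|secret)\s*[:=]",
                     re.IGNORECASE)

proc = subprocess.Popen(["adb", "logcat"], stdout=subprocess.PIPE,
                        universal_newlines=True)
try:
    for line in proc.stdout:
        if PATTERN.search(line):
            print("[LEAK?]", line.rstrip())
except KeyboardInterrupt:
    proc.terminate()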
There's more...

Data leakage sources can be vast, and listing all of them does not seem possible. Sometimes there are application- or platform-specific data leakage sources, which may call for a different kind of analysis:

- Intent injection can be used to fire intents to access privileged content. Such intents may steal protected data, such as the personal information of all the patients in a hospital (under HIPAA compliance).
- iOS screenshot backgrounding issues, where iOS applications store screenshots with populated user input data on the iPhone or iPad when the application enters the background. Imagine such screenshots containing a user's credit card details, CVV, expiry date, and so on, found in an application under PCI-DSS compliance.
- Malware gives a totally different angle to data leakage.

Note that data leakage is a very big risk that organizations are tackling today. The losses are not just financial; they may be intangible, such as reputation damage, or compliance or regulatory violations. Hence, it is very important to identify the maximum possible data leakage sources in the application and rectify the potential leakages.

See also

- https://www.owasp.org/index.php/Mobile_Top_10_2014-M4
- Launching intent injection in Android

Other application-based attacks in mobile devices

When we talk about application-based attacks, the OWASP Top 10 risks are the first thing that comes to mind. OWASP (www.owasp.org) has a project dedicated to mobile security, which releases the Mobile Top 10. OWASP gathers data from industry experts and ranks the top 10 risks every three years. It is a very good knowledge base for mobile application security. Here is the latest Mobile Top 10, released in 2014:

- M1: Weak Server Side Controls
- M2: Insecure Data Storage
- M3: Insufficient Transport Layer Protection
- M4: Unintended Data Leakage
- M5: Poor Authorization and Authentication
- M6: Broken Cryptography
- M7: Client Side Injection
- M8: Security Decisions via Untrusted Inputs
- M9: Improper Session Handling
- M10: Lack of Binary Protections

Getting ready

Have a few applications ready to be analyzed, and use the same set of tools we have been discussing until now.

How to do it...

In this recipe, we restrict ourselves to the application attacks not yet covered in this book:

- M1: Weak Server Side Controls
- M5: Poor Authorization and Authentication
- M8: Security Decisions via Untrusted Inputs
- M9: Improper Session Handling

How it works...

For now, let us discuss the client-side or mobile-side issues of M5, M8, and M9.

M5: Poor Authorization and Authentication. A few common scenarios which can be attacked are:

- Authentication implemented at the device level (for example, a PIN stored locally)
- Authentication bound to poor parameters (such as UDID or IMEI numbers)
- An authorization parameter responsible for access to protected application menus being stored locally

These can be attacked by reading data using ADB, by decompiling the applications and conducting static analysis on them, or by doing dynamic analysis on the outgoing traffic.

M8: Security Decisions via Untrusted Inputs. This one concerns IPC. IPC entry points through which applications communicate with one another, such as intents in Android or URL schemes in iOS, are vulnerable: if the originating source is not validated, the application can be attacked. Malicious intents can be fired to bypass authorization or steal data; we discuss this in further detail in the next recipe. URL schemes are a way for applications to specify the launch of certain components. For example, the mailto scheme in iOS is used to create a new e-mail. If an application fails to specify the acceptable sources, any malicious application will be able to send a mailto scheme to the victim application and create new e-mails.

M9: Improper Session Handling. From a purely mobile device perspective, session tokens stored in .db files, OAuth tokens, or access-granting strings stored in weakly protected files are vulnerable. These can be obtained by reading the local data folder using ADB.

See also

https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-_Top_Ten_Mobile_Risks

Launching intent injection in Android

Android uses intents to request an action from another application component. A common communication is passing an intent to start a service. We will exploit this fact via an intent injection attack. An intent injection attack works by injecting an intent into an application component to perform a task that is not normally allowed by the application workflow.
For example, if an Android application has a login activity which, post successful authentication, allows access to protected data via another activity, and an attacker can invoke that internal activity directly by passing an intent, that is an intent injection attack.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and following the installation instructions mentioned in the User Guide. Install the Drozer Console Agent and start a session as mentioned in the User Guide. If your installation is correct, you should get a Drozer command prompt (dz>).

How to do it...

You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application:

1. Start the OWASP GoatDroid FourGoats application in the emulator. Browse the application to develop an understanding of it. Note that you are required to authenticate by providing a username and password, and post-authentication you can access the profile and other pages. Here is the pre-login screen you get:
2. Let us now use Drozer to analyze the activities of the FourGoats application. The following Drozer command is helpful:

   run app.activity.info -a <package name>

   Drozer detects four activities with null permission. Of these four, ViewCheckin and ViewProfile are post-login activities.
3. Use Drozer to access these two activities directly, via the following command:

   run app.activity.start --component <package name> <activity name>

   We chose to access the ViewProfile activity, and the entire sequence of activities is shown in the following screenshot:
4. Drozer performs some actions and the protected user profile opens up in the emulator, as shown here:

How it works...

Drozer passed an intent in the background to invoke the post-login activity ViewProfile. This resulted in the ViewProfile activity performing its action, displaying the profile screen. In this way, an intent injection attack can be performed using the Drozer framework.

There's more...

Android also uses intents for starting a service or delivering a broadcast, so intent injection attacks can be performed on services and broadcast receivers as well. The Drozer framework can also be used to launch attacks on these app components. Attackers may write their own attack scripts or use different frameworks to launch this attack.

See also

- Using Drozer to find vulnerabilities in Android applications
- https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
- https://www.eecs.berkeley.edu/~daw/papers/intents-mobisys11.pdf

Resources for Article:

Further resources on this subject:
- Mobile Devices [article]
- Development of Windows Mobile Applications (Part 1) [article]
- Development of Windows Mobile Applications (Part 2) [article]

Implementing Artificial Neural Networks with TensorFlow

Packt
08 Jul 2016
12 min read
In this article by Giancarlo Zaccone, the author of Getting Started with TensorFlow, we will learn about artificial neural networks (ANNs), information processing systems whose operating mechanism is inspired by biological neural circuits. Thanks to their characteristics, neural networks are the protagonists of a real revolution in machine learning systems and, more generally, in the context of artificial intelligence.

An artificial neural network possesses many simple processing units, variously connected to each other according to different architectures. If we look at the schema of an ANN, it can be seen that the hidden units communicate with the external layer, both in input and output, while the input and output units communicate only with the hidden layer of the network.

Each unit or node simulates the role of the neuron in biological neural networks. Each node, called an artificial neuron, performs a very simple operation: it becomes active if the total quantity of signal it receives exceeds its activation threshold, defined by the so-called activation function. If a node becomes active, it emits a signal that is transmitted along the transmission channels to the other units to which it is connected. A connection point acts as a filter that converts the message into an inhibitory or excitatory signal, increasing or decreasing its intensity according to the connection's individual characteristics. The connection points simulate biological synapses and have the fundamental function of weighing the intensity of the transmitted signals, multiplying them by weights whose values depend on the connection itself.

ANN schematic diagram

Neural network architectures

The way the nodes are connected and the total number of layers, that is, the levels of nodes between input and output, define the architecture of a neural network. For example, in a multilayer network, the artificial neurons are arranged in layers such that:

- Each neuron is connected with all those of the next layer
- There are no connections between neurons belonging to the same layer
- The number of layers and of neurons per layer depends on the problem to be solved

Now we start our exploration of neural network models, introducing the simplest one: the Single Layer Perceptron, also called Rosenblatt's Perceptron.

Single Layer Perceptron

The Single Layer Perceptron was the first neural network model, proposed in 1958 by Frank Rosenblatt. In this model, the content of the local memory of the neuron consists of a vector of weights, W = (w1, w2, ..., wn). The computation is performed as a sum over the input vector X = (x1, x2, ..., xn), each element of which is multiplied by the corresponding element of the vector of weights; the value provided in output (that is, a weighted sum) is then the input of an activation function. This function returns 1 if the result is greater than a certain threshold, otherwise it returns -1. In the following figure, the activation function is the so-called sign function:

sign(x) = +1 if x > 0
          -1 otherwise

It is possible to use other activation functions, preferably non-linear ones (such as the sigmoid function that we will see in the next section). The learning procedure of the net is iterative: each learning cycle (called an epoch) slightly modifies the synaptic weights using a selected dataset called the training set. At each cycle, the weights must be modified so as to minimize a cost function, which is specific to the problem under consideration.
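Before moving on, here is a minimal NumPy sketch (not from the book) of the training rule just described; the tiny AND-style dataset is invented for illustration:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])            # target labels in {-1, +1}

w = np.zeros(2)
b = 0.0
learning_rate = 0.1

for epoch in range(10):                  # each pass over the data is an epoch
    for xi, target in zip(X, y):
        output = 1 if np.dot(w, xi) + b > 0 else -1   # sign activation
        error = target - output
        w += learning_rate * error * xi  # adjust the weights by the error
        b += learning_rate * error

print([1 if np.dot(w, xi) + b > 0 else -1 for xi in X])  # -> [-1, -1, -1, 1]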
Once the perceptron has been trained on the training set, it can be tested on other inputs (the test set) in order to verify its capacity for generalization.

Schema of Rosenblatt's Perceptron

Let's now see how to implement a single layer neural network for an image classification problem using TensorFlow.

The logistic regression

This algorithm has nothing to do with canonical linear regression; it is an algorithm that allows us to solve supervised classification problems. To estimate the dependent variable, we now make use of the so-called logistic function, or sigmoid. It is precisely because of this feature that we call this algorithm logistic regression. The sigmoid function has this pattern:

As we can see, the dependent variable takes values strictly between 0 and 1, which is precisely what serves us. In the case of logistic regression, we want our function to tell us the probability of an element belonging to a particular class. We recall again that supervised learning by a neural network is configured as an iterative process of optimization of the weights; these are modified on the basis of the network's performance on the training set. Indeed, the aim is to minimize the loss function, which indicates the degree to which the behavior of the network deviates from the desired one. The performance of the network is then verified on a test set, consisting of images other than those it was trained on.

The basic steps of training that we're going to implement are as follows:

- The weights are initialized with random values at the beginning of the training.
- For each element of the training set, the error is calculated, that is, the difference between the desired output and the actual output. This error is used to adjust the weights.
- The process is repeated, resubmitting all the examples of the training set to the network in random order, until the error on the entire training set falls below a certain threshold, or until the maximum number of iterations is reached.

Let's now see in detail how to implement logistic regression with TensorFlow. The problem we want to solve is again to classify images from the MNIST dataset.

The TensorFlow implementation

First of all, we have to import all the necessary libraries:

import input_data
import tensorflow as tf
import matplotlib.pyplot as plt

We use the input_data.read_data_sets function to load the images for our problem:

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

Then we set the total number of epochs for the training phase:

training_epochs = 25

We must also define the other parameters necessary for building the model:

learning_rate = 0.01
batch_size = 100
display_step = 1

Now we move on to the construction of the model.

Building the model

Define x as the input tensor; it represents an MNIST data image of shape 28 x 28 = 784 pixels:

x = tf.placeholder("float", [None, 784])

We recall that our problem consists of assigning a probability value to each of the possible classes of membership (the digits 0 to 9). At the end of this calculation, we will use a probability distribution, which gives us the value of how confident we are in our prediction. So the output will be a tensor of 10 probabilities, each one corresponding to a digit (and, of course, the probabilities must sum to one):

y = tf.placeholder("float", [None, 10])

To assign probabilities to each image, we will use the so-called softmax activation function.
The softmax function is specified in two main steps:

1. Calculate the evidence that a certain image belongs to a particular class.
2. Convert the evidence into probabilities of belonging to each of the 10 possible classes.

To evaluate the evidence, we first define the weights input tensor as W:

W = tf.Variable(tf.zeros([784, 10]))

For a given image, we can evaluate the evidence for each class i by simply multiplying the tensor W with the input tensor x. Using TensorFlow, we should have something like this:

evidence = tf.matmul(x, W)

In general, models include an extra parameter representing the bias, which indicates a certain degree of uncertainty; in our case, the final formula for the evidence is:

evidence = tf.matmul(x, W) + b

It means that for every i (from 0 to 9) we have a column W_i of 784 weight elements (28 x 28), where each element j of the column is multiplied by the corresponding component j of the input image (784 components); the products are summed, and the corresponding bias element b_i is added. So, to define the evidence, we must define the following tensor of biases:

b = tf.Variable(tf.zeros([10]))

The second step is finally to use the softmax function to obtain the output vector of probabilities, namely activation:

activation = tf.nn.softmax(tf.matmul(x, W) + b)

TensorFlow's tf.nn.softmax function provides a probability-based output from the input evidence tensor. Once we have implemented the model, we can proceed to specify the code needed to find the weights W and biases b of the network through the iterative training algorithm. In each iteration, the training algorithm takes the training data, applies the neural network, and compares the result with the expected output.

In order to train our model and to know when we have a good one, we must know how to measure its accuracy. Our goal is to find values of the parameters W and b that minimize a metric indicating how bad the model is. Different metrics calculate the degree of error between the desired output and the output on the training data. A common measure of error is the mean squared error (the squared Euclidean distance). However, there are research findings that suggest using other metrics for a neural network like this one. In this example, we use the so-called cross-entropy error function, defined as follows:

cross_entropy = y * tf.log(activation)

In order to minimize cross_entropy, we can use the following combination of tf.reduce_mean and tf.reduce_sum to build the cost function:

cost = tf.reduce_mean(
    -tf.reduce_sum(cross_entropy, reduction_indices=1))

Then we minimize it using the gradient descent optimization algorithm:

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

Just a few lines of code to build a neural network model!

Launching the session

Now it's time to build the session and launch our neural net model. We define these lists to visualize the training session:

avg_set = []
epoch_set = []

Then we initialize the TensorFlow variables:

init = tf.initialize_all_variables()

Start the session:

with tf.Session() as sess:
    sess.run(init)

As explained, each epoch is a training cycle:

    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)

Then we loop over all the batches:

        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

Fit the training using the batch data:

            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})

Compute the average loss, running the cost function with the given image values (x) and the real output (y):

            avg_cost += sess.run(cost,
                                 feed_dict={x: batch_xs,
                                            y: batch_ys})/total_batch

During the computation, we display a log per epoch step and record the values used to plot the training phase later:

        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost)
        avg_set.append(avg_cost)      # record the cost for the plot below
        epoch_set.append(epoch+1)
    print " Training phase finished"

Let's get the accuracy of our model. A prediction is correct if the index with the highest y value is the same as the index in the real digit vector; the mean of correct_prediction gives us the accuracy. We need to run the accuracy function on our test set (mnist.test), using the keys images and labels for x and y:

    correct_prediction = tf.equal(tf.argmax(activation, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "MODEL accuracy:", accuracy.eval({x: mnist.test.images,
                                            y: mnist.test.labels})

Test evaluation

We saw the training phase in the preceding sections; for each epoch we printed the relative cost function:

Python 2.7.10 (default, Oct 14 2015, 16:09:02)
[GCC 5.2.1 20151010] on linux2
Type "copyright", "credits" or "license()" for more information.
>>> ======================= RESTART ============================
>>>
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
Epoch: 0001 cost= 1.174406662
Epoch: 0002 cost= 0.661956009
Epoch: 0003 cost= 0.550468774
Epoch: 0004 cost= 0.496588717
Epoch: 0005 cost= 0.463674555
Epoch: 0006 cost= 0.440907706
Epoch: 0007 cost= 0.423837747
Epoch: 0008 cost= 0.410590841
Epoch: 0009 cost= 0.399881751
Epoch: 0010 cost= 0.390916621
Epoch: 0011 cost= 0.383320325
Epoch: 0012 cost= 0.376767031
Epoch: 0013 cost= 0.371007620
Epoch: 0014 cost= 0.365922904
Epoch: 0015 cost= 0.361327561
Epoch: 0016 cost= 0.357258660
Epoch: 0017 cost= 0.353508228
Epoch: 0018 cost= 0.350164634
Epoch: 0019 cost= 0.347015593
Epoch: 0020 cost= 0.344140861
Epoch: 0021 cost= 0.341420144
Epoch: 0022 cost= 0.338980592
Epoch: 0023 cost= 0.336655581
Epoch: 0024 cost= 0.334488012
Epoch: 0025 cost= 0.332488823
Training phase finished

As we saw, during the training phase the cost function is minimized. At the end of the test, we display the model's accuracy:

Model Accuracy: 0.9475

Finally, using these lines of code, we can visualize the training phase of the net:

plt.plot(epoch_set, avg_set, 'o',
         label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()

Training phase in logistic regression

Summary

In this article, we learned about the implementation of artificial neural networks, the Single Layer Perceptron, and TensorFlow. We also learned how to build the model and launch the session.

Spatial Analysis

Packt
07 Jul 2016
21 min read
In this article by Ron Vincent, author of the book Learning ArcGIS Runtime SDK for .NET, we're going to learn about spatial analysis with ArcGIS Runtime. As with other parts of ArcGIS Runtime, we really need to understand how spatial analysis is set up and executed with ArcGIS Desktop/Pro and ArcGIS Server. As a result, we will first learn about spatial analysis within the context of geoprocessing. Geoprocessing is the workhorse of spatial analysis with Esri's technology, and it is very similar to writing code: you specify some input data, do some work on that input data, and then produce the desired output. The big difference is that you use tools that come with ArcGIS Desktop or Pro. In this article, we're going to learn how to use these tools, and how to specify their input, output, and other parameters from an ArcGIS Runtime app, going well beyond what's available in the GeometryEngine tool. In summary, we're going to cover the following topics:

- Introduction to spatial analysis
- Introduction to geoprocessing
- Preparing for geoprocessing
- Using geoprocessing in Runtime
- Online geoprocessing

(For more resources related to this topic, see here.)

Introducing spatial analysis

Spatial analysis is a broad term that can mean many different things, depending on the kind of study to be undertaken, the tools to be used, and the methods of performing the analysis; it is even subject to the dynamics of the individuals involved in the analysis. In this section, we will look broadly at the kinds of analysis that are possible, so that you have some context as to what is possible with the ArcGIS platform. Spatial analysis can be divided into these five broad categories:

- Point patterns
- Surface analysis
- Areal data
- Interactivity
- Networks

Point pattern analysis is the evaluation of the pattern, or distribution, of points in space. With ArcGIS, you can analyze point data using average nearest neighbor, central feature, mean center, and so on. For surface analysis, you can create surface models, and then analyze them using tools such as line of sight (LOS), slope surfaces, viewsheds, and contours. With areal data (polygons), you can perform hotspot analysis, spatial autocorrelation, grouping analysis, and so on. When it comes to modeling interactivity, you can use tools in ArcGIS that allow you to do gravity modeling, location-allocation, and so on. Lastly, with Esri's technology you can analyze networks, such as finding the shortest path, generating drive-time polygons, origin-destination matrices, and many other examples. For example, in the following screenshot, the areas in green are visible from the tallest building.
Areas in red are not visible:

What is important to understand is that the ArcGIS platform has the capability to help solve problems such as these:

- An epidemiologist collects data on a disease, such as Chronic Obstructive Pulmonary Disease (COPD), and wants to know where it occurs and whether there are any statistically significant clusters, so that a mitigation plan can be developed
- A mining geologist wants to obtain samples of a precious mineral so that he or she can estimate its overall concentration
- A military analyst or soldier wants to know where they can be located on the battlefield without being seen
- A crime analyst wants to know where crimes are concentrated, so that police presence can be increased as a deterrent
- A research scientist wants to develop a model to predict the path of a fire

There are many more examples. With ArcGIS Desktop or Pro, along with the correct extension, questions like these can be posed and answered using a variety of techniques. However, it's important to understand that ArcGIS Runtime may or may not be a good fit, and may or may not support certain tools. In many cases, spatial analysis is best conducted with ArcGIS Desktop or Pro. For example, if you plan to conduct hotspot analysis on patient or crime data, Desktop or Pro is the best fit, because it's typically something you do once. On the other hand, if you plan to allow users to repeat this process again and again with different data, and you need high performance, building a tool with ArcGIS Runtime will be the perfect solution, especially if the tool needs to run in the field. It should also be noted that, in some cases, the ArcGIS JavaScript API will be a better fit.

Introducing geoprocessing

If you open up the Geoprocessing toolbox in ArcGIS Desktop or Pro, you will find dozens of tools categorized in the following manner:

With these tools, you can build sophisticated models by using ModelBuilder or Python, and then publish them to ArcGIS Server. For example, to perform a buffer like the one provided by GeometryEngine, you would drag the Buffer tool onto the ModelBuilder canvas, as shown here, and specify its inputs and outputs:

This model specifies an input (US cities), performs an operation (buffer the cities), and then produces an output (buffered cities). Conceptually, this is programming, except that the algorithm is built graphically instead of with code. You may be asking: why would you use this tool in ArcGIS Desktop or Pro? Good question. ArcGIS Runtime only comes with a few selected tools in GeometryEngine. These tools, such as the buffer method in GeometryEngine, are so common that Esri decided to include them with ArcGIS Runtime, so that these kinds of operation can be performed on the client without having to call the server. On the other hand, in order to keep the core of ArcGIS Runtime lightweight, Esri chose to provide the many other tools as services that you call on when required for special or advanced analysis. As a result, if your app only needs basic operations, GeometryEngine may provide what you need. If you need to perform more sophisticated operations, you will need to build the model with Desktop or Pro, publish it to Server, and then consume the resulting service with ArcGIS Runtime. The rest of this article will show you how to consume a geoprocessing model using this pattern.
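As a side note, the same buffer model can be authored in Python rather than ModelBuilder. Here is a minimal sketch using the arcpy site package; the workspace and feature class names are invented for illustration:

import arcpy

arcpy.env.workspace = r"C:\data\usa.gdb"   # hypothetical file geodatabase

# Buffer the cities by 50 miles, dissolving overlapping buffers.
arcpy.Buffer_analysis(in_features="Cities",
                      out_feature_class="Cities_Buffered",
                      buffer_distance_or_field="50 Miles",
                      dissolve_option="ALL")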
Preparing for geoprocessing

To perform geoprocessing, you will need to create a model with ModelBuilder and/or Python. For more details on how to create models using ModelBuilder, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/modelbuilder/what-is-modelbuilder-.htm. To build a model with Python, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/basics/python-and-geoprocessing.htm.

Once you've created a model with ModelBuilder or Python, you will need to run the tool, both to ensure that it works and so that it can be published as a geoprocessing service for online use, or as a geoprocessing package for offline use. See here for publishing a service: http://server.arcgis.com/en/server/latest/publish-services/windows/a-quick-tour-of-publishing-a-geoprocessing-service.htm

If you plan to use geoprocessing offline, you'll need to publish a geoprocessing package (*.gpk) file. You can learn more about these at https://desktop.arcgis.com/en/desktop/latest/analyze/sharing-workflows/a-quick-tour-of-geoprocessing-packages.htm.

Once you have a geoprocessing service or package, you can consume it with ArcGIS Runtime. In the sections that follow, we will use classes from Esri.ArcGISRuntime.Tasks.Geoprocessing that allow us to consume these geoprocessing services or packages.

Online geoprocessing with ArcGIS Runtime

Once you have created a geoprocessing model, you will want to access it from ArcGIS Runtime. In this section, we're going to do surface analysis using an online service that Esri has published. To accomplish this, you will need to access the REST endpoint by typing in the following URL: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer

When you open this page, you'll notice the description and that it has a list of Tasks:
A task is a REST child resource of a geoprocessing service. A geoprocessing service can have one or more tasks associated with it. A task requires a set of inputs in the form of parameters. Once the task completes, it produces some output that you will then use in your app. The output could be a map service, a single value, or even a report. This particular service has only one task associated with it, called Viewshed. If you click on the Viewshed task, you'll be taken to this page: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer/Viewshed. This service produces a viewshed from the location where the user clicks, which looks something like this:

The user clicks on the map (X) and the geoprocessing task produces a viewshed, which shows all the areas on the surface that are visible to an observer, as if they were standing on the surface.

Once you click on the task, you'll note the concepts marked in the following screenshot:

As you can see, beside the red arrows, the geoprocessing service lets you know what is required for it to operate, so let's go over each of these:

- First, the service lets you know that it is a synchronous geoprocessing service. A synchronous geoprocessing task runs synchronously until it has completed, and blocks the calling thread. An asynchronous geoprocessing task runs asynchronously and won't block the calling thread.
- The next pieces of information you'll need to provide to the task are the parameters. In the preceding example, the task requires Input_Observation_Point. You will need to provide this exact name later on, when we write the code to pass in this parameter. Also, note that the Direction value is esriGPParameterDirectionInput; this tells you that the task expects Input_Observation_Point as an input to the model. Lastly, note that the Parameter Type value is Required: you must provide the task with this parameter in order for it to run. It's also worth noting that the Default Value is of the esriGeometryPoint type, which in ArcGIS Runtime is MapPoint, and that the Spatial Reference value of the point is 540003.
- If you investigate the remaining required parameters, you'll note that the task requires a Viewshed_Distance parameter. Now, refer to the following screenshot. If you don't specify a value, it will use the Default Value of 15,000 meters.
- Lastly, the task outputs a Viewshed_Result parameter, which is an esriGeometryPolygon. Using this polygon, we can render the result to the map or scene.

Geoprocessing synchronously

Now that you've seen an online service, let's look at how we call it using ArcGIS Runtime. To execute the preceding viewshed task, we first need to create an instance of the Geoprocessor object. The Geoprocessor object requires a URL down to the task level of the REST endpoint, like this:

private const string viewshedServiceUrl =
    "http://sampleserver6.arcgisonline.com/arcgis/rest/services/" +
    "Elevation/ESRI_Elevation_World/GPServer/Viewshed";
private Geoprocessor gpTask;

Note that we've attached /Viewshed to the end of the original URL so that we can pass in the complete path to the task. Next, instantiate the geoprocessor in your app, using the URL of the task:

gpTask = new Geoprocessor(new Uri(viewshedServiceUrl));

Once we have created the geoprocessor, we can prompt the user to click somewhere on the map. Let's look at some code:

public async void CreateViewshed()
{
    // get a point from the user
    var mapPoint = await this.mapView.Editor.RequestPointAsync();

    // clear the graphics layers
    this.viewshedGraphicsLayer.Graphics.Clear();
    this.inputGraphicsLayer.Graphics.Clear();

    // add a new graphic to the input layer
    this.inputGraphicsLayer.Graphics.Add(new Graphic
    {
        Geometry = mapPoint,
        Symbol = this.sms
    });

    // specify the input parameters
    var parameter = new GPInputParameter()
    {
        OutSpatialReference = SpatialReferences.WebMercator
    };
    parameter.GPParameters.Add(
        new GPFeatureRecordSetLayer("Input_Observation_Point", mapPoint));
    parameter.GPParameters.Add(
        new GPLinearUnit("Viewshed_Distance", LinearUnits.Miles, this.distance));

    // send to the server
    this.Status = "Processing on server...";
    var result = await gpTask.ExecuteAsync(parameter);
    if (result == null || result.OutParameters == null ||
        !(result.OutParameters[0] is GPFeatureRecordSetLayer))
        throw new ApplicationException(
            "No viewshed graphics returned for this start point.");

    // process the output
    this.Status = "Finished processing. Retrieving results...";
    var viewshedLayer = result.OutParameters[0] as GPFeatureRecordSetLayer;
    var features = viewshedLayer.FeatureSet.Features;
    foreach (Feature feature in features)
    {
        this.viewshedGraphicsLayer.Graphics.Add(feature as Graphic);
    }
    this.Status = "Finished!!";
}

The first thing we do is have the user click on the map and return a MapPoint. We then clear a couple of GraphicsLayers that hold the input graphic and the viewshed graphics, so that the map is cleared every time this code runs. Next, we create a graphic at the location where the user clicked. Now comes the interesting part.
We need to provide the input parameters for the task, and we do that with GPInputParameter. When we instantiate GPInputParameter, we also need to specify the output spatial reference, so that the data is rendered in the spatial reference of the map; in this example, we use the map's spatial reference. Then we add the input parameters. Note that we've spelled them exactly as the task requires; if we don't, the task won't work. We also learned earlier that this task requires a distance, so we use a GPLinearUnit in miles. The GPLinearUnit class lets the geoprocessor know what kind of unit to accept.

After the input parameters are set up, we call ExecuteAsync. We call this method because this is a synchronous geoprocessing task. Even though the method name ends with Async, that refers to .NET, not ArcGIS Server. The alternative to ExecuteAsync is SubmitJob, which we will discuss shortly. After some time, the result comes back and we grab it with result.OutParameters[0]. This contains the output of the geoprocessing task, which we use to render the output on the map. Thankfully, it returns a read-only set of polygons, which we can add to a GraphicsLayer.

If you don't know which parameter type to use, you'll need to look it up on the task's page. In the preceding example, the parameter was called Viewshed_Distance and its Data Type value was GPLinearUnit. ArcGIS Runtime comes with a variety of data types to match the corresponding data types on the server. The other supported types are GPBoolean, GPDataFile, GPDate, GPDouble, GPItemID, GPLinearUnit, GPLong, GPMultiValue<T>, GPRasterData, GPRecordSet, and GPString.

Instead of manually inspecting a task as we did earlier, you can also use Geoprocessor.GetTaskInfoAsync to discover all of the parameters. This is useful if you want to let users specify any geoprocessing task dynamically while the app is running: if your app must accept an arbitrary geoprocessing task, you'll need to inspect that task, obtain its parameters, and then respond dynamically to them.
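Under the hood, both of these calls are plain REST requests. The following is a minimal Python sketch of the same discovery and execution done directly against the service; the parameter JSON shapes follow the ArcGIS REST conventions, and the sample coordinates are invented:

import json
import requests

task_url = ("http://sampleserver6.arcgisonline.com/arcgis/rest/services/"
            "Elevation/ESRI_Elevation_World/GPServer/Viewshed")

# Discover the task's parameters, as GetTaskInfoAsync would.
info = requests.get(task_url, params={"f": "json"}).json()
for p in info.get("parameters", []):
    print(p["name"], p["dataType"], p["parameterType"])

# Execute the synchronous task directly.
point = {"geometryType": "esriGeometryPoint",
         "features": [{"geometry": {"x": -13618000, "y": 4548000,
                                    "spatialReference": {"wkid": 102100}}}]}
distance = {"distance": 10, "units": "esriMiles"}
result = requests.get(task_url + "/execute", params={
    "f": "json",
    "Input_Observation_Point": json.dumps(point),
    "Viewshed_Distance": json.dumps(distance),
}).json()
print(len(result["results"][0]["value"]["features"]), "viewshed polygons")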
Geoprocessing asynchronously

So far, we've called a geoprocessing task synchronously. In this section, we'll cover how to call one asynchronously. There are two differences when calling a geoprocessing task asynchronously:

- You run the task by executing a method called SubmitJobAsync instead of ExecuteAsync. The SubmitJobAsync method is ideal for long-running tasks, such as performing data processing on the server. Its major advantage is that users can continue working while the task runs in the background; when the task completes, the results are presented.
- You need to check the status of the task with GPJobStatus, so that users can see whether the task is working as expected. To do this, check GPJobStatus periodically. The GPJobStatus enumeration has the following values: New, Submitted, Waiting, Executing, Succeeded, Failed, TimedOut, Cancelling, Cancelled, Deleting, and Deleted. With these enumerations, you can poll the server using CheckJobStatusAsync on the task and present the status to the user while they wait for the geoprocessor.

Let's take a look at this process in the following diagram:

As you can see in the preceding diagram, the input parameters are specified as we did earlier with the synchronous task, the Geoprocessor object is set up, and then SubmitJobAsync is called with the parameters (GPInputParameter). Once the task begins, we check its status using the results from SubmitJobAsync: we call CheckJobStatusAsync on the task to return the status enumeration. If it indicates Succeeded, we do something with the results; if not, we continue to check the status at whatever interval we specify.

Let's try this out using an example service from Esri that allows for areal analysis. Go to the following REST endpoint: http://serverapps10.esri.com/ArcGIS/rest/services/SamplesNET/USA_Data_ClipTools/GPServer/ClipCounties.

You will note that the service's task is called ClipCounties. This is a rather contrived example, but it shows how to do server-side data processing. It requires two parameters, called Input_Features and Linear_unit, and it outputs output_zip and Clipped_Counties. Basically, this task lets you drag a line on the map; it then buffers the line, clips out the U.S. counties that intersect the buffer, and shows them on the map, like so:

We are interested in two methods in this sample app. Let's take a look at them:

public async void Clip()
{
    // get the user's input line
    var inputLine = await this.mapView.Editor.RequestShapeAsync(
        DrawShape.Polyline) as Polyline;

    // clear the graphics layers
    this.resultGraphicsLayer.Graphics.Clear();
    this.inputGraphicsLayer.Graphics.Clear();

    // add a new graphic to the input layer
    this.inputGraphicsLayer.Graphics.Add(new Graphic
    {
        Geometry = inputLine,
        Symbol = this.simpleInputLineSymbol
    });

    // add the parameters
    var parameter = new GPInputParameter();
    parameter.GPParameters.Add(
        new GPFeatureRecordSetLayer("Input_Features", inputLine));
    parameter.GPParameters.Add(
        new GPLinearUnit("Linear_unit", LinearUnits.Miles, this.Distance));

    // submit the job and poll the task
    var result = await SubmitAndPollStatusAsync(parameter);

    // add successful results to the map
    if (result.JobStatus == GPJobStatus.Succeeded)
    {
        this.Status = "Finished processing. Retrieving results...";
        var resultData = await gpTask.GetResultDataAsync(
            result.JobID, "Clipped_Counties");
        if (resultData is GPFeatureRecordSetLayer)
        {
            GPFeatureRecordSetLayer gpLayer =
                resultData as GPFeatureRecordSetLayer;
            if (gpLayer.FeatureSet.Features.Count == 0)
            {
                // get the map service results instead
                var resultImageLayer = await gpTask.GetResultImageLayerAsync(
                    result.JobID, "Clipped_Counties");

                // make the result image layer semi-transparent
                GPResultImageLayer gpImageLayer = resultImageLayer;
                gpImageLayer.Opacity = 0.5;
                this.mapView.Map.Layers.Add(gpImageLayer);
                this.Status = "Greater than 500 features returned. " +
                    "Results drawn using map service.";
                return;
            }

            // get the result features and add them to the GraphicsLayer
            var features = gpLayer.FeatureSet.Features;
            foreach (Feature feature in features)
            {
                this.resultGraphicsLayer.Graphics.Add(feature as Graphic);
            }
        }
        this.Status = "Success!!!";
    }
}

This Clip method first asks the user to draw a polyline on the map. It then clears the GraphicsLayers, adds the input line to the map in red, sets up GPInputParameter with the required parameters (Input_Features and Linear_unit), and calls a method named SubmitAndPollStatusAsync with the input parameters. Let's take a look at that method too:

// Submit a GP job and poll the server for results every 2 seconds.
private async Task<GPJobInfo> SubmitAndPollStatusAsync(GPInputParameter parameter)
{
    // Submit gp service job
    var result = await gpTask.SubmitJobAsync(parameter);

    // Poll for the results async. Note that we also check for Failed
    // so that a failed job cannot leave us polling forever.
    while (result.JobStatus != GPJobStatus.Cancelled
        && result.JobStatus != GPJobStatus.Deleted
        && result.JobStatus != GPJobStatus.Succeeded
        && result.JobStatus != GPJobStatus.Failed
        && result.JobStatus != GPJobStatus.TimedOut)
    {
        result = await gpTask.CheckJobStatusAsync(result.JobID);
        foreach (GPMessage msg in result.Messages)
        {
            this.Status = string.Join(Environment.NewLine, msg.Description);
        }
        await Task.Delay(2000);
    }
    return result;
}

The SubmitAndPollStatusAsync method submits the geoprocessing task and then polls it every two seconds until its status becomes Cancelled, Deleted, Succeeded, Failed, or TimedOut. It calls CheckJobStatusAsync, gets the messages of type GPMessage, and adds them to the property called Status, which is a ViewModel property holding the current status of the task. We effectively check the status of the task every 2 seconds with Task.Delay(2000) and continue doing so until the job reaches one of the terminal GPJobStatus values we're checking for.

Once SubmitAndPollStatusAsync has succeeded, we return to the main method (Clip) and perform the following steps with the results:

- We obtain the results with GetResultDataAsync by passing in the job's JobID and the output parameter name Clipped_Counties. The Clipped_Counties instance is an output of the task, so we just need to specify its name.
- Using the resulting data, we first check whether it is of the GPFeatureRecordSetLayer type. If it is, we do some more processing on the results. We then perform a cast just to make sure we have the right object (GPFeatureRecordSetLayer).
- We then check whether any features were returned from the task. If none were returned, we perform the following steps:
  - We obtain the resulting image layer using GetResultImageLayerAsync. This returns a map service image of the results.
  - We then cast this to GPResultImageLayer and set its opacity to 0.5 so that we can see through it. If the user enters a large distance, a lot of counties are returned, so we convert the layer to a map image and show the entire country so that they can see what they've done wrong. Having the result as an image is faster than displaying all of the polygons as JSON objects.
  - We add GPResultImageLayer to the map.
- If everything worked according to plan, we get only the features needed and add them to GraphicsLayer.

That was a lot of work, but it's pretty awesome that we sent this off to ArcGIS Server and it did some heavy processing for us so that we could continue working with our map. The geoprocessing task took in a user-specified line, buffered it, and then clipped out the counties in the U.S. that intersected with that buffer. When you run the project, make sure you pan or zoom around while the task is running so that you can see that you can still work. You could also further enhance this code to zoom to the results when it finishes. There are some other pretty interesting capabilities that we need to discuss with this code, so let's delve a little deeper.

Working with the output results

Let's discuss the output of the geoprocessing results in a little more detail in this section.

GPMessage

The GPMessage object is very helpful because it can be used to check the types of message coming back from the server. It exposes the different kinds of message via an enumeration called GPMessageType, which you can use to further process the message.
GPMessageType returns an enumeration of Informative, Warning, Error, Abort, and Empty. For example, if the task failed, GPMessageType.Error will be returned, and you can present a message to the user letting them know what happened and what they can do to resolve the issue. The GPMessage object also returns Description, which we used in the preceding code to display to the user as the task executed. The message level configured on the server dictates what messages are returned by the task. If the Message Level field is set to None, no messages will be returned. When testing a geoprocessing service, it can be helpful to set the service to Info because it produces detailed messages.

GPFeatureRecordSetLayer

The preceding task expected an output of features, so we cast the result to GPFeatureRecordSetLayer. The GPFeatureRecordSetLayer object is a layer type that wraps the JSON features returned by the server, which we can then use to render on the map.

GPResultMapServiceLayer

When a geoprocessing service is created, you have the option of making it produce an output map service result with its own symbology. Refer to http://server.arcgis.com/en/server/latest/publish-services/windows/defining-output-symbology-for-geoprocessing-tasks.htm. You can take the results of a GPFeatureRecordSetLayer object and access this map service using the following URL format:

http://catalog-url/resultMapServiceName/MapServer/jobs/jobid

Using the JobID that was produced by SubmitJobAsync, you can add the result to the map like so:

ArcGISDynamicMapServiceLayer dynLayer =
    this.gpTask.GetResultMapServiceLayer(result.JobID);
this.mapView.Map.Layers.Add(dynLayer);

Summary

In this article, we went over spatial analysis at a high level, and then went into the details of how to do spatial analysis with ArcGIS Runtime. We discussed how to create models with ModelBuilder and/or Python, and then went on to show how to use geoprocessing, both synchronously and asynchronously, with online and offline tasks. With this information, you now have a multitude of options for adding a wide variety of analytical tools to your apps.

Resources for Article:

Further resources on this subject:
Building Custom Widgets [article]
Learning to Create and Edit Data in ArcGIS [article]
ArcGIS – Advanced ArcObjects [article]

Recommendation Systems

Packt
07 Jul 2016
12 min read
In this article, Pradeepta Mishra, the author of R Data Mining Blueprints, says that in this age of the Internet, not everything available online is useful for everyone. Different companies and entities use different approaches to find relevant content for their audiences. People started building algorithms that construct a relevance score, based on which recommendations can be built and suggested to users. In day-to-day life, every time I see an image on Google, 3-4 other images are recommended to me; every time I look for a video on YouTube, 10 more videos are recommended; every time I visit Amazon to buy a product, 5-6 products are recommended; and every time I read a blog or article, a few more articles and blogs are recommended to me. This is evidence of algorithmic forces at play recommending certain things based on users' preferences or choices, since users' time is precious and the content available over the Internet is unlimited. Hence, a recommendation engine helps organizations customize their offerings based on user preferences so that users need not spend time exploring for what they require. In this article, the reader will learn the implementation of product recommendation using R.

(For more resources related to this topic, see here.)

Practical project

The dataset contains a sample of 5000 users from the anonymous ratings data from the Jester Online Joke Recommender System collected between April 1999 and May 2003 (Golberg, Roeder, Gupta, and Perkins 2001). The dataset contains ratings for 100 jokes on a scale from -10 to 10. All users in the dataset have rated 36 or more jokes. Let's load the recommenderlab library and the Jester5k dataset:

> library("recommenderlab")
> data(Jester5k)
> Jester5k@data@Dimnames[2]
[[1]]
[1] "j1" "j2" "j3" "j4" "j5" "j6" "j7" "j8" "j9"
[10] "j10" "j11" "j12" "j13" "j14" "j15" "j16" "j17" "j18"
[19] "j19" "j20" "j21" "j22" "j23" "j24" "j25" "j26" "j27"
[28] "j28" "j29" "j30" "j31" "j32" "j33" "j34" "j35" "j36"
[37] "j37" "j38" "j39" "j40" "j41" "j42" "j43" "j44" "j45"
[46] "j46" "j47" "j48" "j49" "j50" "j51" "j52" "j53" "j54"
[55] "j55" "j56" "j57" "j58" "j59" "j60" "j61" "j62" "j63"
[64] "j64" "j65" "j66" "j67" "j68" "j69" "j70" "j71" "j72"
[73] "j73" "j74" "j75" "j76" "j77" "j78" "j79" "j80" "j81"
[82] "j82" "j83" "j84" "j85" "j86" "j87" "j88" "j89" "j90"
[91] "j91" "j92" "j93" "j94" "j95" "j96" "j97" "j98" "j99"
[100] "j100"

The following image shows the distribution of real ratings given by 2000 users:

> data<-sample(Jester5k,2000)
> hist(getRatings(data),breaks=100,col="blue")

The input dataset contains the individual ratings; the normalization function reduces individual rating bias by normalizing each row with a standard z-score transformation: the row mean is subtracted from each element, which is then divided by the row's standard deviation. The following graph shows normalized ratings for the preceding dataset:

> hist(getRatings(normalize(data)),breaks=100,col="blue4")

To create a recommender system, a recommendation engine is created using the Recommender() function, and the recommendation methods registered with the package can be listed using the recommenderRegistry$get_entries() function:

> recommenderRegistry$get_entries(dataType = "realRatingMatrix")
$IBCF_realRatingMatrix
Recommender method: IBCF
Description: Recommender based on item-based collaborative filtering (real data).
Parameters:
   k method normalize normalize_sim_matrix alpha na_as_zero minRating
1 30 Cosine    center                FALSE   0.5      FALSE        NA

$POPULAR_realRatingMatrix
Recommender method: POPULAR
Description: Recommender based on item popularity (real data).
Parameters: None

$RANDOM_realRatingMatrix
Recommender method: RANDOM
Description: Produce random recommendations (real ratings).
Parameters: None

$SVD_realRatingMatrix
Recommender method: SVD
Description: Recommender based on SVD approximation with column-mean imputation (real data).
Parameters:
   k maxiter normalize minRating
1 10     100    center        NA

$SVDF_realRatingMatrix
Recommender method: SVDF
Description: Recommender based on Funk SVD with gradient descend (real data).
Parameters:
   k gamma lambda min_epochs max_epochs min_improvement normalize
1 10 0.015  0.001         50        200           1e-06    center
  minRating verbose
1        NA   FALSE

$UBCF_realRatingMatrix
Recommender method: UBCF
Description: Recommender based on user-based collaborative filtering (real data).
Parameters:
  method nn sample normalize minRating
1 cosine 25  FALSE    center        NA

The preceding registry command helps in identifying the methods available in recommenderlab and the parameters for each model. There are six different methods for implementing recommender systems: popular, item-based, user-based, PCA, random, and SVD. Let's start the recommendation engine using the popular method:

> rc <- Recommender(Jester5k, method = "POPULAR")
> rc
Recommender of type 'POPULAR' for 'realRatingMatrix' learned using 5000 users.
> names(getModel(rc))
[1] "topN" "ratings"
[3] "minRating" "normalize"
[5] "aggregationRatings" "aggregationPopularity"
[7] "minRating" "verbose"
> getModel(rc)$topN
Recommendations as 'topNList' with n = 100 for 1 users.

The objects such as topN, verbose, aggregation popularity, and so on can be printed by calling names() on the result of the getModel() command:

recom <- predict(rc, Jester5k, n=5)
recom

To generate a recommendation, we use the predict function against the same dataset and validate the accuracy of the predictive model. Here we are generating the top 5 recommended jokes for each user. The result of the prediction is as follows:

> head(as(recom,"list"))
$u2841
[1] "j89" "j72" "j76" "j88" "j83"
$u15547
[1] "j89" "j93" "j76" "j88" "j91"
$u15221
character(0)
$u15573
character(0)
$u21505
[1] "j89" "j72" "j93" "j76" "j88"
$u15994
character(0)

For the same Jester5k dataset, let's try to implement item-based collaborative filtering (IBCF):

> rc <- Recommender(Jester5k, method = "IBCF")
> rc
Recommender of type 'IBCF' for 'realRatingMatrix' learned using 5000 users.
> recom <- predict(rc, Jester5k, n=5)
> recom
Recommendations as 'topNList' with n = 5 for 5000 users.
> head(as(recom,"list"))
$u2841
[1] "j85" "j86" "j74" "j84" "j80"
$u15547
[1] "j91" "j87" "j88" "j89" "j93"
$u15221
character(0)
$u15573
character(0)
$u21505
[1] "j78" "j80" "j73" "j77" "j92"
$u15994
character(0)

The principal component analysis (PCA) method is not applicable to real-rating-based datasets; this is because computing a correlation matrix and the subsequent eigenvector and eigenvalue calculations would not be accurate. Hence, we will not show its application. Next, let's see how the random method works:

> rc <- Recommender(Jester5k, method = "RANDOM")
> rc
Recommender of type 'RANDOM' for 'ratingMatrix' learned using 5000 users.
> recom <- predict(rc, Jester5k, n=5)
> recom
Recommendations as 'topNList' with n = 5 for 5000 users.
> head(as(recom,"list")) [[1]] [1] "j90" "j74" "j86" "j78" "j85" [[2]] [1] "j87" "j88" "j74" "j92" "j79" [[3]] character(0) [[4]] character(0) [[5]] [1] "j95" "j86" "j93" "j78" "j83" [[6]] character(0) In the recommendation engine, the SVD approach is used to predict the missing ratings so that a recommendation can be generated. Using the singular value decomposition (SVD) method, the following recommendation can be generated: > rc <- Recommender(Jester5k, method = "SVD") > rc Recommender of type 'SVD' for 'realRatingMatrix' learned using 5000 users. > recom <- predict(rc, Jester5k, n=5) > recom Recommendations as 'topNList' with n = 5 for 5000 users. > head(as(recom,"list")) $u2841 [1] "j74" "j71" "j84" "j79" "j80" $u15547 [1] "j89" "j93" "j76" "j81" "j88" $u15221 character(0) $u15573 character(0) $u21505 [1] "j80" "j73" "j100" "j72" "j78" $u15994 character(0) The result from user-based collaborative filtering is shown as follows: > rc <- Recommender(Jester5k, method = "UBCF") > rc Recommender of type 'UBCF' for 'realRatingMatrix' learned using 5000 users. > recom <- predict(rc, Jester5k, n=5) > recom Recommendations as 'topNList' with n = 5 for 5000 users. > head(as(recom,"list")) $u2841 [1] "j81" "j78" "j83" "j80" "j73" $u15547 [1] "j96" "j87" "j89" "j76" "j93" $u15221 character(0) $u15573 character(0) $u21505 [1] "j100" "j81" "j83" "j92" "j96" $u15994 character(0) Now let's compare the results obtained from all the five different algorithms except PCA (because PCA requires a binary dataset; it does not accept a real ratings matrix). Table 4: Comparison of results between different recommendation algorithms Popular IBCF Random method SVD UBCF > head(as(recom,"list")) > head(as(recom,"list")) > head(as(recom,"list")) > head(as(recom,"list")) > head(as(recom,"list")) $u2841 $u2841 [[1]] $u2841 $u2841 [1] "j89" "j72" "j76" "j88" "j83" [1] "j85" "j86" "j74" "j84" "j80" [1] "j90" "j74" "j86" "j78" "j85" [1] "j74" "j71" "j84" "j79" "j80" [1] "j81" "j78" "j83" "j80" "j73"           $u15547 $u15547 [[2]] $u15547 $u15547 [1] "j89" "j93" "j76" "j88" "j91" [1] "j91" "j87" "j88" "j89" "j93" [1] "j87" "j88" "j74" "j92" "j79" [1] "j89" "j93" "j76" "j81" "j88" [1] "j96" "j87" "j89" "j76" "j93"           $u15221 $u15221 [[3]] $u15221 $u15221 character(0) character(0) character(0) character(0) character(0)           $u15573 $u15573 [[4]] $u15573 $u15573 character(0) character(0) character(0) character(0) character(0)           $u21505 $u21505 [[5]] $u21505 $u21505 [1] "j89" "j72" "j93" "j76" "j88" [1] "j78" "j80" "j73" "j77" "j92" [1] "j95" "j86" "j93" "j78" "j83" [1] "j80"   "j73" "j100" "j72" "j78" [1] "j100" "j81" "j83" "j92" "j96"           $u15994 $u15994 [[6]] $u15994 $u15994 character(0) character(0) character(0) character(0) character(0)             One thing is clear from the above table. For users 15573 and 15221, none of the five methods generate recommendation. Hence it is important to look at methods to evaluate the recommendation results. To validate the accuracy of the model, let's implement accuracy measures and compare the accuracies of all the models. For the evaluation of the model results, the dataset is divided into 90% for training and 10% for testing the algorithm. The definition of a good rating is updated as 5: > e <- evaluationScheme(Jester5k, method="split", + train=0.9,given=15, goodRating=5) > e Evaluation scheme with 15 items given Method: 'split' with 1 run(s). 
Training set proportion: 0.900
Good ratings: >=5.000000
Data set: 5000 x 100 rating matrix of class 'realRatingMatrix' with 362106 ratings.

The following script is used to build the collaborative filtering models and apply them to a new dataset for predicting the ratings. Then the prediction accuracy is computed. The error matrix is shown as follows:

> #User based collaborative filtering
> r1 <- Recommender(getData(e, "train"), "UBCF")
> #Item based collaborative filtering
> r2 <- Recommender(getData(e, "train"), "IBCF")
> #PCA based collaborative filtering
> #r3 <- Recommender(getData(e, "train"), "PCA")
> #POPULAR based collaborative filtering
> r4 <- Recommender(getData(e, "train"), "POPULAR")
> #RANDOM based collaborative filtering
> r5 <- Recommender(getData(e, "train"), "RANDOM")
> #SVD based collaborative filtering
> r6 <- Recommender(getData(e, "train"), "SVD")
> #Predicted Ratings
> p1 <- predict(r1, getData(e, "known"), type="ratings")
> p2 <- predict(r2, getData(e, "known"), type="ratings")
> #p3 <- predict(r3, getData(e, "known"), type="ratings")
> p4 <- predict(r4, getData(e, "known"), type="ratings")
> p5 <- predict(r5, getData(e, "known"), type="ratings")
> p6 <- predict(r6, getData(e, "known"), type="ratings")
> #calculate the error between the prediction and
> #the unknown part of the test data
> error <- rbind(
+ calcPredictionAccuracy(p1, getData(e, "unknown")),
+ calcPredictionAccuracy(p2, getData(e, "unknown")),
+ #calcPredictionAccuracy(p3, getData(e, "unknown")),
+ calcPredictionAccuracy(p4, getData(e, "unknown")),
+ calcPredictionAccuracy(p5, getData(e, "unknown")),
+ calcPredictionAccuracy(p6, getData(e, "unknown"))
+ )
> rownames(error) <- c("UBCF","IBCF","POPULAR","RANDOM","SVD")
> error
            RMSE      MSE      MAE
UBCF    4.485571 20.12034 3.511709
IBCF    4.606355 21.21851 3.466738
POPULAR 4.509973 20.33985 3.548478
RANDOM  7.917373 62.68480 6.464369
SVD     4.653111 21.65144 3.679550

From the preceding result, UBCF has the lowest error in comparison to the other recommendation methods. Here, to evaluate the results of the predictive models, we use the k-fold cross-validation method, with k taken as 4:

> #Evaluation of a top-N recommender algorithm
> scheme <- evaluationScheme(Jester5k, method="cross", k=4,
+ given=3, goodRating=5)
> scheme
Evaluation scheme with 3 items given
Method: 'cross-validation' with 4 run(s).
Good ratings: >=5.000000
Data set: 5000 x 100 rating matrix of class 'realRatingMatrix' with 362106 ratings.

The output of the evaluation scheme shows the model build time versus the prediction time for each cross-validation run of the different models.
The result is shown as follows:

> results <- evaluate(scheme, method="POPULAR", n=c(1,3,5,10,15,20))
POPULAR run fold/sample [model time/prediction time]
1 [0.14sec/2.27sec]
2 [0.16sec/2.2sec]
3 [0.14sec/2.24sec]
4 [0.14sec/2.23sec]
> results <- evaluate(scheme, method="IBCF", n=c(1,3,5,10,15,20))
IBCF run fold/sample [model time/prediction time]
1 [0.4sec/0.38sec]
2 [0.41sec/0.37sec]
3 [0.42sec/0.38sec]
4 [0.43sec/0.37sec]
> results <- evaluate(scheme, method="UBCF", n=c(1,3,5,10,15,20))
UBCF run fold/sample [model time/prediction time]
1 [0.13sec/6.31sec]
2 [0.14sec/6.47sec]
3 [0.15sec/6.21sec]
4 [0.13sec/6.18sec]
> results <- evaluate(scheme, method="RANDOM", n=c(1,3,5,10,15,20))
RANDOM run fold/sample [model time/prediction time]
1 [0sec/0.27sec]
2 [0sec/0.26sec]
3 [0sec/0.27sec]
4 [0sec/0.26sec]
> results <- evaluate(scheme, method="SVD", n=c(1,3,5,10,15,20))
SVD run fold/sample [model time/prediction time]
1 [0.36sec/0.36sec]
2 [0.35sec/0.36sec]
3 [0.33sec/0.36sec]
4 [0.36sec/0.36sec]

The confusion matrix displays the level of accuracy provided by each of the models. We can estimate accuracy measures such as precision, recall, TPR, FPR, and so on; the result is shown here:

> getConfusionMatrix(results)[[1]]
       TP      FP      FN      TN precision     recall        TPR         FPR
1  0.2736  0.7264 17.2968 78.7032 0.2736000 0.01656597 0.01656597 0.008934588
3  0.8144  2.1856 16.7560 77.2440 0.2714667 0.05212659 0.05212659 0.027200530
5  1.3120  3.6880 16.2584 75.7416 0.2624000 0.08516269 0.08516269 0.046201487
10 2.6056  7.3944 14.9648 72.0352 0.2605600 0.16691259 0.16691259 0.092274243
15 3.7768 11.2232 13.7936 68.2064 0.2517867 0.24036802 0.24036802 0.139945095
20 4.8136 15.1864 12.7568 64.2432 0.2406800 0.30082509 0.30082509 0.189489883

Association rules can also be used as a method for a recommendation engine, for building product recommendations in a retail/e-commerce scenario.

Summary

In this article, we discussed ways of recommending products to users based on similarities in their purchase patterns, content, item-to-item comparisons, and so on. As far as accuracy is concerned, user-based collaborative filtering consistently gave better results with a real-rating-based matrix as input. Since the choice of method for a specific use case is genuinely difficult, it is recommended to apply all six different methods; the best one should be selected automatically, and the recommendations should also be updated automatically.

Resources for Article:

Further resources on this subject:
Data mining [article]
Machine Learning with R [article]
Machine learning and Python – the Dream Team [article]

Packaging the Game

Packt
07 Jul 2016
13 min read
A game is not just art, code, and game design packaged within an executable. You have to deal with stores, publishers, ratings, and console providers, and with making assets and videos for stores and marketing, among other minor things required to fully ship a game. This article by Muhammad A. Moniem, author of the book Mastering Unreal Engine 4.X, will take care of the last steps you need to perform within the Unreal environment in order to get this packaged executable fine and running. Anything post-Unreal you will need to find your own way to do, but from my seat, I'm telling you that you have done the complex, hard, huge, and long part; what comes next is a lot simpler! This article will help us understand and use Unreal's Project Launcher, patch the project, and create DLCs (downloadable content).

(For more resources related to this topic, see here.)

Project Launcher and DLCs

The first and most important thing you have to keep in mind is that the Project Launcher is still in development, and the process of creating DLCs is not final yet and might change with future engine releases. While writing this book I've been using Unreal 4.10 and testing everything I do and write within the Unreal 4.11 preview version, and yet the DLC process remains experimental. So be advised that you might find it a little different in the future as the engine evolves.

While we have packaged the game previously through the File menu using Packaging Project, there is another, more detailed, more professional way to do the same job: using the Project Launcher. It comes in the form of a separate app shipped with Unreal (Unreal Frontend), but you also have the choice to run it directly from the editor. You can access the Project Launcher from the Windows menu by choosing Project Launcher, and that will launch it right away.

However, I have a question here. Why would you go through these extra steps rather than just doing the packaging process in one click? Well, extensibility is the answer. Using the Unreal Project Launcher allows you to create several profiles, each profile having different build settings, and later you can fire each build whenever you need it. Not only that, but the profiles can be made for different projects, which means you can have a ready-made setup for all your projects with all the different build configurations. And even that's not everything; it comes in handier when you have to cook the content of a game several times: rather than keep doing it through the File menu, you can cook the content for the game for all the different platforms at once. For example, if you have to change one texture within your game which is supported on five platforms, you can make a profile that cooks the content for all the platforms and arranges them for you at once, and you can spend that time doing something else. The Project Launcher does the whole thing for you.

What if you have to cook the game content for different languages? Let's say the game supports 10 languages. Do you have to do it one by one for each language? The answer is simple: the Project Launcher will do it for you. So you can simply think of the Project Launcher as a batch process, a custom command-line tool, or even a form of build server. You set the configurations and requests and leave it alone to do the whole thing for you, while you save your time for something else. It is all about productivity! And the most important part about the Project Launcher is that you can create DLCs very easily.
By just setting up a profile with a different set of options and settings, you can get the DLC or game mode done without any complications. In a word, it is all about profiles, and because of that, let's discuss how to create profiles that can serve different purposes.

Sometimes the Project Launcher proposes a standard profile matching the platform you are using. That is good, but usually those profiles might not have everything we need, and that's why it is recommended to always create new profiles to serve our goals. The Project Launcher by default is divided into two sections vertically; the upper part contains the default profiles, while the lower part contains the custom profiles. In order to create a new profile, all you have to do is hit the plus sign at the bottom part, where it is titled Custom Launch Profiles:

Pressing it will take you to a wizard, or it is better to describe this as a window, where you can set up the new profile options. Those options are drastic, and changing between them leads to completely different results, so you have to be careful. In general, you will mostly be building either a project for release, or a DLC or patch for an already released project. You can even do other types of build that serve different goals, such as a language package for an already released game, which is treated as a patch or DLC but at the same time has a different setup and options than a patch or DLC. Anyway, we will take care of the two main types of process that developers usually have to deal with in the Project Launcher: release and patch.

Packaging a release

After the new Custom Launch Profile wizard window opens, you can change the settings necessary to make our Release build of the project. This includes:

General: This has the following fields:
- Give a name to the profile; this name will be displayed in the Project Launcher main window.
- Give a description to the profile in order to make its goal clear for you in the future, or for anyone else who is going to use it.

Project: This has the following sections:
- Select a project, the one that needs to be built. Or you can leave this at Any Project in order to build the current active project:

Build: This has the following sections:
- Indeed, you have to check the box of the build, so you make a build and activate this section of options.
- From the Build Configuration dropdown, you have to choose a build type, which is Shipping in this case.
- Finally, you can check the Build UAT (Unreal Automation Tool) option from Advanced Settings in this section. The UAT can be considered a bunch of scripts creating a set of automated processes, but in order to decide whether to run it or not, you have to really understand what the UAT is:
  - Written in C# (may convert to C++ in the future)
  - Automates repetitive tasks through automation scripts
  - Builds, cooks, packages, deploys, and launches projects
  - Invokes UBT for compilation
  - Analyzes and fixes game content files
  - Code surgery when updating to new engine versions
  - Distributes compilation (XGE) and build system integration
  - Generates code documentation
  - Automates testing of code and content
  - And many others—you can add your own scripts!

Now you will know whether you want to enable it or not:

Cook: This has the following settings:
In the Cook section, you need to set it to By the book.
This means you need to define exactly what needs to be cooked and for which platforms. It is enough for now to set it to WindowsNoEditor and check the cultures you want from the list. I chose all of them (this is faster than picking one at a time) and then excluded the ones that I didn't want:

Then you need to check which maps should be cooked; if you can't see maps, it is probably the first build, and later you'll find the maps listed. In any case, you must keep all the maps listed in the Maps folder under the Content directory:

Now, from the Release/DLC/Patching Settings section, you have to check the option Create a release version of the game for distribution, as this version is going to be the distribution one. From the same section, give the build a version number. This is going to create some extra files that will be used in the future if we are going to create patches or DLCs:

You can expand the Advanced Settings section to set your own options. By default, Compress Content and Save Packages without versions are both checked, and both are good for the type of build we are making. You can also set Store all content in a single file (UnrealPak) to keep things tidy; one .pak file is better than lots of separate files. Finally, you can set Cooker Build Configuration to Shipping, as we set Build Configuration itself to Shipping:

Package: This has the following options:
From this section's drop-down menu, choose Package & store locally, which will save the packages on the drive. You can't set anything else here, unless you want to store the packaged game project in a repository:

Deploy: The Deploy section is meant to deploy the game to the device of your choice, which is not the case here. If you want to put the game onto a device, you could use Launch directly from within the editor itself. So, let's set this section to Do Not Deploy:

Launch: In case you have chosen to deploy the game to a device, you'll be able to configure this section; otherwise, the options here will be disabled. The set of options here is meant to choose the configuration of the deployed build, as once it is deployed to the device it will run. Here you can set things such as the language culture, the default startup map, command-line arguments, and so on. As we are not deploying now, this section will be disabled:

Now that we have finished editing our profile, you can find a back arrow at the top of this wizard. Pressing it will take you back to the Project Launcher main window:

Now you can find our profile in the bottom section. Any other profiles you make in the future will be listed there. There is now one step left to finish the build. In the right corner of the profile there is a button that says Launch This Profile. Hitting it will start the process of building, cooking, and packaging this profile for the selected project. Hit it right away if you want the process to start. And keep in mind, any time you need to change any of the previously set settings, there is always an Edit button for each profile:

The Project Launcher will start processing this profile; it will take some time, but the amount of time depends on your choices. You'll be able to see all the steps as they happen. Not only this, but you can also watch a detailed log; you can save this log, or you can even cancel the process at any time:

Once everything is done, a new button will appear at the bottom: Done.
Hitting it will take you back again to the Project Launcher main window. You can easily find the build in the Saved\StagedBuilds\WindowsNoEditor directory of your project, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor.

The most important thing now is that, if you are planning to create patches or DLCs for this project, remember the version number you set in the Cook section. This produced some files that you can find in: ProjectName\Releases\ReleaseVersion\Platform, which in my case is: C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor. There are two files; make sure that you have a backup of them on your drive for future use. Now you can ship the game and upload it to the distribution channel!

Packaging a patch or DLC

The good news is, there isn't much to do here. Or in other words, you have to do lots of things, but it is a repetitive process. You'll be creating a new profile in the Project Launcher, and you'll be setting 90% of the options the same as in the previous release profile; the only difference will be in the Cook options. This means the settings that will remain the same are:

- Project
- Build
- Package
- Deploy
- Launch

The only difference is that in the Release/DLC/Patching Settings section of the Cook section you have to:

- Disable Create a release version of the game for distribution.
- Set the number of the base build (the release) as the release version this is based on, as this choice will make sure the previous content is compared with the current one.
- Check Generate patch, if the current build is a patch, not a DLC.
- Check Build DLC, if the current build is a DLC, not a patch:

Now you can launch this profile and wait until it is done. The patching process creates a *.pak file in the directory: ProjectName\Saved\StagedBuilds\PlatformName\ProjectName\Content\Paks. This .pak file is the patch that you'll be uploading to the distribution channel! The most common way to handle this type of patch is by creating installers; in this case, you'll create an installer that copies the *.pak file into the player's directory: ProjectName\Releases\VersionNumber\PlatformName, which is where the original content *.pak file of the release version is. In my case, I copy the *.pak file from: C:\Users\Muhammad\Desktop\Bellz\Releases\1.0\WindowsNoEditor to: C:\Users\Muhammad\Desktop\Bellz\Saved\StagedBuilds\WindowsNoEditor\Bellz\Content\Paks.

Now you've seen the way to create patches and downloadable content, and you should know that, regardless of the time you have spent creating them, it will be faster in the future, because you'll get more used to it, and Epic is working on making the process better and better.

Summary

The Project Launcher is a very powerful tool shipped with the Unreal ecosystem. Using it is not mandatory, but sometimes it is needed to save time, and you learned how and when to use this powerful tool. Many games nowadays have downloadable content; it helps to keep the game community growing and the game earning more revenue. Having DLCs is not essential, but it is good; having them must be planned early, as we discussed, and you've learned how to manage them within Unreal Engine. You also learned how to make patches and DLCs using the Unreal Project Launcher.

Resources for Article:

Further resources on this subject:
Development Tricks with Unreal Engine 4 [article]
Bang Bang – Let's Make It Explode [article]
Lighting basics [article]

Delphi Cookbook

Packt
07 Jul 2016
6 min read
In this article by Daniele Teti, author of the book Delphi Cookbook - Second Edition, we will study multithreading. Multithreading can be your biggest problem if you do not handle it with care. One of the fathers of the Delphi compiler used to say:

"New programmers are drawn to multithreading like moths to flame, with similar results." – Danny Thorpe

(For more resources related to this topic, see here.)

In this chapter, we will discuss some of the main techniques for handling single or multiple background threads. We'll talk about shared resource synchronization and thread-safe queues and events. The last three recipes will talk about the Parallel Programming Library introduced in Delphi XE7, and I hope that you will love it as much as I do. Multithreaded programming is a huge topic. So, after reading this chapter, although you will not become a master of it, you will surely be able to approach the concept of multithreaded programming with confidence and will have the basics to jump on to more specific material when (and if) you require it.

Talking with the main thread using a thread-safe queue

Using a background thread and working with its private data is not difficult, but safely bringing information retrieved or elaborated by the thread back to the main thread to show it to the user (as you know, only the main thread can handle the GUI in VCL as well as in FireMonkey) can be a daunting task. An even more complex task would be establishing generic communication between two or more background threads. In this recipe, you'll see how a background thread can talk to the main thread in a safe manner using the TThreadedQueue<T> class. The same concepts are valid for communication between two or more background threads.

Getting ready

Let's talk about a scenario. You have to show data generated from some sort of device or subsystem, let's say a serial port, a USB device, a query polling on database data, or a TCP socket. You cannot simply wait for data using TTimer because this would freeze your GUI during the wait, and the wait can be long. You have tried it, but your interface became sluggish… you need another solution! In the Delphi RTL, there is a very useful class called TThreadedQueue<T> that is, as the name suggests, a particular parametric queue (a FIFO data structure) that can be safely used from different threads. How to use it? In the programming field, there is mostly no single solution valid for all situations, but the following approach is very popular. Feel free to change it if necessary. However, this is the approach used in the recipe code:

- Create the queue within the main form.
- Create a thread and inject the form's queue into it.
- In the thread's Execute method, append all generated data to the queue.
- In the main form, use a timer or some other mechanism to periodically read from the queue and display the data on the form.

How to do it…

Open the recipe project called ThreadingQueueSample.dproj. This project contains the main form with all the GUI-related code and another unit with the thread code. The FormCreate event creates the shared queue with the following parameters that will influence the behavior of the queue:

- QueueDepth = 100: This is the maximum queue size. If the queue reaches this limit, all push operations will be blocked for a maximum of PushTimeout; then the Push call will fail with a timeout.
- PushTimeout = 1000: This is the timeout in milliseconds that will affect the thread, which in this recipe is the producer of a producer/consumer pattern.
- PopTimeout = 1: This is the timeout in milliseconds that will affect the timer when the queue is empty. This timeout must be very short because the pop call is blocking in nature, and you are in the main thread, which should never be blocked for long.

The button labeled Start Thread creates a TReaderThread instance, passing the already created queue to its constructor (this is a particular type of dependency injection called constructor injection). The thread declaration is really simple and is as follows:

type
  TReaderThread = class(TThread)
  private
    FQueue: TThreadedQueue<Byte>;
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<Byte>);
  end;

While the Execute method simply appends randomly generated data to the queue, note that the Terminated property must be checked often so the application can terminate the thread and wait a reasonable time for its actual termination. In the following example, if the queue is not empty, the termination is checked at least approximately every 700 ms:

procedure TReaderThread.Execute;
begin
  while not Terminated do
  begin
    TThread.Sleep(200 + Trunc(Random(500))); // e.g. reading from an actual device
    FQueue.PushItem(Random(256));
  end;
end;

So far, you've filled the queue. Now, you have to read from the queue and do something useful with the read data. This is the job of a timer. The following is the code of the timer event on the main form:

procedure TMainForm.Timer1Timer(Sender: TObject);
var
  Value: Byte;
begin
  while FQueue.PopItem(Value) = TWaitResult.wrSignaled do
  begin
    ListBox1.Items.Add(Format('[%3.3d]', [Value]));
  end;
  ListBox1.ItemIndex := ListBox1.Count - 1;
end;

That's it! Run the application and see how we read the data coming from the thread and show it on the main form. The following is a screenshot:

The main form showing data generated by the background thread

There's more…

The TThreadedQueue<T> class is very powerful and can be used to communicate between two or more background threads in a consumer/producer scheme as well. You can use multiple producers, multiple consumers, or both. The following screenshot shows a popular scheme used when the speed at which the data is generated is faster than the speed at which it is handled. In this case, you can usually gain speed on the processing side by using multiple consumers.

Single producer, multiple consumers

Summary

In this article we had a look at how to talk to the main thread using a thread-safe queue.

Resources for Article:

Further resources on this subject:
Exploring the Usages of Delphi [article]
Adding Graphics to the Map [article]
Application Performance [article]

AIO setup of OpenStack – preparing the infrastructure code environment

Packt
07 Jul 2016
5 min read
Viewing your OpenStack infrastructure deployment as code will not only simplify node configuration, but also improve the automation process. Despite the existence of numerous system-management tools to bring OpenStack up and running in an automated way, we have chosen Ansible for the automation of our infrastructure.

(For more resources related to this topic, see here.)

At the end of the day, you can choose any automation tool that fits your production needs. The key point to keep in mind is that to manage a big production environment you must simplify operations by:

- Automating deployment and operation as much as possible
- Tracking your changes in a version control system
- Continuously integrating code to keep your infrastructure updated and bug free
- Monitoring and testing your infrastructure code to make it robust

We have chosen Git to be our version control system. Let's go ahead and install the Git package on our development system, then check the correctness of the Git installation (for example, by running git --version):

If you decide to use an IDE like Eclipse for your development, it might be easier to install a Git plugin to integrate Git into your IDE. For example, the EGit plugin can be used to develop with Git in Eclipse. We do this by navigating to the Help | Install new software menu entry. You will need to add the following URL to install EGit: http://download.eclipse.org/egit/updates.

Preparing the development setup

The install process is divided into the following steps:

- Check out the OSA repository.
- Install and bootstrap Ansible.
- Perform the initial host bootstrap.
- Run the playbooks.

Configuring your setup

The AIO development environment uses the configuration file in test/roles/bootstrap-host/defaults/main.yml. This file describes the default values for the host configuration. In addition to the configuration file, the configuration options can be passed through shell environment variables. The BOOTSTRAP_OPTS variable is read by the bootstrap script as space-separated key-value pairs. It can be used to pass values that override the defaults in the configuration file:

export BOOTSTRAP_OPTS="${BOOTSTRAP_OPTS} bootstrap_host_loopback_cinder_size=512"

OSA also allows overriding the default values for service configuration. These override values are provided in the etc/openstack_deploy/user_variables.yml file. The following is an example of overriding the values in nova.conf using the override file:

nova_nova_conf_overrides:
  DEFAULT:
    remove_unused_original_minimum_age_seconds: 43200
  libvirt:
    cpu_mode: host-model
    disk_cachemodes: file=directsync,block=none
  database:
    idle_timeout: 300
    max_pool_size: 10

This override file will populate the nova.conf file with the following options:

[DEFAULT]
remove_unused_original_minimum_age_seconds = 43200

[libvirt]
cpu_mode = host-model
disk_cachemodes = file=directsync,block=none

[database]
idle_timeout = 300
max_pool_size = 10

The override variables can also be passed using a per-host configuration stanza in /etc/openstack_deploy/openstack_user_config.yml. The complete set of configuration options is described in the OpenStack Ansible documentation at http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-openstack.html.

Building the development setup

To start the installation process, execute the Ansible bootstrap script. This script will download and install the correct Ansible version. It also creates a wrapper script around ansible-playbook, called openstack-ansible, that always loads the OpenStack user variable files.
The next step is to configure the system for the all-in-one setup. This script performs the following tasks:

- Applies Ansible roles to install basic software requirements like OpenSSH and pip
- Applies the bootstrap_host role to check the hard disk and swap space
- Creates various loopback volumes for use with Cinder, Swift, and Nova
- Prepares networking

Finally, we run the playbooks to bring up the AIO development environment. This script executes the following tasks:

- Creates the LXC containers
- Applies security hardening to the host
- Reinitiates the network bridges
- Installs infrastructure services like MySQL, RabbitMQ, memcached, and more
- Finally, installs the various OpenStack services

Running the playbooks takes a long time, as this builds the containers and starts the OpenStack services. Once finished, you will have all the OpenStack services running in their own private containers. You can use the lxc-ls command to list the service containers on the development machine, and the lxc-attach command to connect to any container, as shown here:

lxc-attach --name <name_of_container>

Use the name of the container from the output of lxc-ls to attach to the container. LXC commands can be used to start and stop the service containers. The AIO environment brings up a MySQL cluster, which needs special care when being restarted if the development machine is rebooted. Details of operating the AIO environment are available in the OpenStack Ansible QuickStart guide at http://docs.openstack.org/developer/openstack-ansible/developer-docs/quickstart-aio.html.

Tracking your changes

The OSA project itself maintains its code under version control at the OpenStack git server (http://git.openstack.org/cgit/openstack/openstack-ansible/tree/). The configuration files of OSA are stored at /etc/openstack_deploy/ on the deployment host. These files define the deployment environment and the user override variables. To make sure that you control the deployment environment, it is important that changes to these configuration files are tracked in a version control system. To keep the development environment under the same control, make sure that the Vagrant configuration files are also tracked in version control.

Summary

So far, we've deployed a basic AIO setup of OpenStack. Mastering OpenStack Second Edition will take you through the process of extending our design by clustering, and by defining the various infrastructure nodes, controller, and compute hosts.

Resources for Article:

Further resources on this subject:
Concepts for OpenStack [article]
Introducing OpenStack Trove [article]
OpenStack Performance, Availability [article]

Functional Programming in C#

Packt
07 Jul 2016
4 min read
In this article, we are going to explore the following topics:

- An introduction to the functional programming concept
- A comparison between the functional and imperative approaches

(For more resources related to this topic, see here.)

Introduction to functional programming

In functional programming, we use a mathematical approach to construct our code. The functions we write in code are similar to the mathematical functions we use on a daily basis. A variable in a code function represents the value of a function parameter, just as in a mathematical function. The idea is that a programmer defines functions, containing expressions, definitions, and parameters that can be expressed by variables, in order to solve problems. After a programmer builds a function and sends it to the computer, it's the computer's turn to do its job. In general, the role of the computer is to evaluate the expression in the function and return the result. We can imagine that the computer acts like a calculator, since it will analyze the expression in the function and yield the result to the user in printed format. Suppose we have the expression 3 + 5 inside a function. The computer will return 8 as the result as soon as it has completely evaluated it. However, this is just a trivial example of how a computer evaluates an expression. In fact, a programmer can increase the power of the computer by writing complex definitions and expressions inside the function. Not only can the computer evaluate trivial expressions, it can also evaluate complex calculations and expressions.

Comparison to imperative programming

The main difference between functional and imperative programming is the existence of side effects. In functional programming, since it applies the pure function concept, side effects are avoided. This differs from imperative programming, which has to access I/O and modify state outside the function, producing side effects. In addition, with an imperative approach, the programmer focuses on the way of performing the task and on tracking changes in state, while with a functional approach the programmer focuses on the kind of desired information and the kind of required transformation. Changes of state are central to imperative programming, while no changes of state exist in functional programming. The order of execution is also important in imperative programming but less so in functional programming, since we are concerned more with constructing the problem as a set of functions to be executed rather than with the detailed steps of the flow. We will continue our discussion of the functional and imperative approaches by creating some code in the next topics; a minimal illustrative sketch also follows the summary below.

Summary

We have become acquainted with the functional approach so far by discussing the introduction to functional programming. We have also compared the functional approach with the mathematical concept of a function. It's now clear that the functional approach uses a mathematical approach to compose a functional program. The comparison between functional and imperative programming has also given us the important points distinguishing the two. It's now clear that in functional programming the programmer focuses on the kind of desired information and the kind of required transformation, while in the imperative approach the programmer focuses on the way of performing the task and tracking changes in state.
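To make the comparison above concrete, here is a minimal C# sketch of the same computation written both ways. It is only an illustration: the program and its names are hypothetical, not taken from the article, and it simply sums the squares of the even numbers in a list.

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // Imperative approach: we describe *how* to perform the task,
        // mutating state (sum) step by step in a fixed order.
        int sum = 0;
        foreach (int n in numbers)
        {
            if (n % 2 == 0)
            {
                sum += n * n;
            }
        }

        // Functional approach: we describe *what* transformation we
        // want; no state outside the expression is modified, so the
        // expression itself has no side effects.
        int functionalSum = numbers
            .Where(n => n % 2 == 0)
            .Select(n => n * n)
            .Sum();

        Console.WriteLine(sum);           // 56
        Console.WriteLine(functionalSum); // 56
    }
}

Note how the functional version reads as a single expression describing the desired transformation, while the imperative version tracks the changing sum variable from step to step.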
For more information on C#, refer to the following books:

C# 5 First Look (https://www.packtpub.com/application-development/c-5-first-look)
C# Multithreaded and Parallel Programming (https://www.packtpub.com/application-development/c-multithreaded-and-parallel-programming)
C# 6 and .NET Core 1.0: Modern Cross-Platform Development (https://www.packtpub.com/application-development/c-6-and-net-core-10)

Resources for Article:

Further resources on this subject:
Introduction to Object-Oriented Programming using Python, JavaScript, and C# [article]
C# Language Support for Asynchrony [article]
C# with NGUI [article]

Angular's component architecture

Packt
07 Jul 2016
11 min read
In this article, Gion Kunz, author of the book Mastering Angular 2 Components, explains how the concept of directives from the first version of Angular changed the game in frontend UI frameworks. This was the first time that I felt there was a simple yet powerful concept that allowed the creation of reusable UI components. Directives could communicate with DOM events or messaging services. They allowed you to follow the principle of composition, and you could nest directives and create larger directives that solely consisted of smaller directives arranged together. Actually, directives were a very nice implementation of components for the browser.

(For more resources related to this topic, see here.)

In this section, we'll look into the component-based architecture of Angular 2 and how the previous topic about general UI components fits into Angular.

Everything is a component

As an early adopter of Angular 2, while talking to other people about it, I was frequently asked what the biggest difference to the first version is. My answer to this question was always the same: everything is a component. For me, this paradigm shift was the most relevant change, one that both simplified and enriched the framework. Of course, there are a lot of other changes with Angular 2. However, as an advocate of component-based user interfaces, I've found this change to be the most interesting one. Of course, this change also came with a lot of architectural changes.

Angular 2 supports the idea of looking at the user interface holistically and supporting composition with components. However, the biggest difference from its first version is that now your pages are no longer global views; they are simply components that are assembled from other components. If you've been following this chapter, you'll notice that this is exactly what a holistic approach to user interfaces demands: no more pages but systems of components.

Angular 2 still uses the concept of directives, although directives are now really what the name suggests: they are orders for the browser to attach a given behavior to an element. Components are a special kind of directive that comes with a view.

Creating a tabbed interface component

Let's introduce a new UI component in our ui folder in the project that will provide us with a tabbed interface that we can use for composition. We will use what we learned about content projection in order to make this component reusable. We'll actually create two components: one for Tabs, which itself holds individual Tab components. First, let's create the component class within a new tabs/tab folder in a file called tab.js:

import {Component, Input, ViewEncapsulation, HostBinding} from '@angular/core';
import template from './tab.html!text';

@Component({
  selector: 'ngc-tab',
  host: {
    class: 'tabs__tab'
  },
  template,
  encapsulation: ViewEncapsulation.None
})
export class Tab {
  @Input() name;

  @HostBinding('class.tabs__tab--active') active = false;
}

The only state that we store in our Tab component is whether the tab is active or not. The name that is displayed on the tab will be available through an input property. We use a class property binding to make a tab visible: based on the active flag we set a class, and without it, our tabs are hidden. Let's take a look at the tab.html template file of this component:

<ng-content></ng-content>

This is it already? Actually, yes it is!
The Tab component is only responsible for storing its name and active state, as well as inserting the host element's content at the content projection point. No additional templating is needed.

Now, we'll move one level up and create the Tabs component that will be responsible for grouping all the Tab components. As we won't include Tab components directly when we want to create a tabbed interface, but will use the Tabs component instead, the Tabs component needs to forward the content that we put into its host element. Let's look at how we can achieve this. In the tabs folder, we will create a tabs.js file that contains our Tabs component code, as follows:

import {Component, ViewEncapsulation, ContentChildren} from '@angular/core';
import template from './tabs.html!text';
// We rely on the Tab component
import {Tab} from './tab/tab';

@Component({
  selector: 'ngc-tabs',
  host: {
    class: 'tabs'
  },
  template,
  encapsulation: ViewEncapsulation.None,
  directives: [Tab]
})
export class Tabs {
  // This queries the content inside <ng-content> and stores a
  // query list that will be updated if the content changes
  @ContentChildren(Tab) tabs;

  // The ngAfterContentInit lifecycle hook will be called once the
  // content inside <ng-content> was initialized
  ngAfterContentInit() {
    this.activateTab(this.tabs.first);
  }

  activateTab(tab) {
    // To activate a tab we first convert the live list to an
    // array and deactivate all tabs before we set the new
    // tab active
    this.tabs.toArray().forEach((t) => t.active = false);
    tab.active = true;
  }
}

Let's observe what's happening here. We use the new @ContentChildren decorator in order to query our inserted content for directives that match the type we pass to the decorator. The tabs property will contain an object of the QueryList type, which is an observable list type that will be updated if the content projection changes. You need to remember that content projection is a dynamic process, as the content in the host element can actually change, for example, when using the NgFor or NgIf directives.

We use the AfterContentInit lifecycle hook, which we've already briefly discussed in the Custom UI elements section of Chapter 2, Ready, Set, Go! This lifecycle hook is called after Angular has completed content projection on the component. Only then do we have the guarantee that our QueryList object will be initialized, and we can start working with the child directives that were projected as content.

The activateTab function will set the Tab component's active flag, deactivating any previously active tab. As the observable QueryList object is not a native array, we first need to convert it using toArray() before we start working with it.

Let's now look at the template of the Tabs component, which we create in a file called tabs.html in the tabs directory:

<ul class="tabs__tab-list">
  <li *ngFor="let tab of tabs">
    <button class="tabs__tab-button"
            [class.tabs__tab-button--active]="tab.active"
            (click)="activateTab(tab)">{{tab.name}}</button>
  </li>
</ul>
<div class="tabs__l-container">
  <ng-content select="ngc-tab"></ng-content>
</div>

The structure of our Tabs component is as follows. First, we render all the tab buttons in an unordered list. After the unordered list, we have a tabs container that will contain all our Tab components that are inserted using content projection and the <ng-content> element. Note that the selector that we use is actually the selector we use for our Tab component.
Tabs that are not active will not be visible because we control this using CSS on our Tab component class attribute binding (refer to the Tab component code). This is all that we need to create a flexible and well-encapsulated tabbed interface component. Now, we can go ahead and use this component in our Project component to segregate our project detail information. We will create three tabs for now, where the first one will embed our task list. We will address the content of the other two tabs in a later chapter. Let's modify our Project component template in the project.html file as a first step. Instead of including our TaskList component directly, we now use the Tabs and Tab components to nest the task list into our tabbed interface:

<ngc-tabs>
  <ngc-tab name="Tasks">
    <ngc-task-list [tasks]="tasks"
                   (tasksUpdated)="updateTasks($event)">
    </ngc-task-list>
  </ngc-tab>
  <ngc-tab name="Comments"></ngc-tab>
  <ngc-tab name="Activities"></ngc-tab>
</ngc-tabs>

You should have noticed by now that we are actually nesting two components within this template code using content projection, as follows: First, the Tabs component uses content projection to select all the <ngc-tab> elements. As these elements happen to be components too (our Tab component will attach to elements with this name), they will be recognized as such within the Tabs component once they are inserted. Second, in the <ngc-tab> element, we nest our TaskList component. If we go back to our Tab component template, which is attached to elements with the name ngc-tab, we have a generic projection point that inserts any content that is present in the host element. Our task list will effectively be passed through the Tabs component into the Tab component.

The visual efforts timeline

Although the components that we created so far to manage efforts provide a good way to edit and display effort and time durations, we can still improve this with some visual indication. In this section, we will create a visual efforts timeline using SVG.
This timeline should display the following information:

The total estimated duration as a grey background bar
The total effective duration as a green bar that overlays the total estimated duration bar
A yellow bar that shows any overtime (if the effective duration is greater than the estimated duration)

Two figures in the book illustrate the different visual states of our efforts timeline component: the visual state if the estimated duration is greater than the effective duration, and the visual state if the effective duration exceeds the estimated duration (the overtime is displayed as a yellow bar).

Let's start fleshing out our component by creating a new EffortsTimeline component class on the lib/efforts/efforts-timeline/efforts-timeline.js path:

…
@Component({
  selector: 'ngc-efforts-timeline',
  …
})
export class EffortsTimeline {
  @Input() estimated;
  @Input() effective;
  @Input() height;

  ngOnChanges(changes) {
    this.done = 0;
    this.overtime = 0;
    if (!this.estimated && this.effective ||
        (this.estimated && this.estimated === this.effective)) {
      // If there's only effective time or if the estimated time
      // is equal to the effective time we are 100% done
      this.done = 100;
    } else if (this.estimated < this.effective) {
      // If we have more effective time than estimated we need to
      // calculate overtime and done in percentage
      this.done = this.estimated / this.effective * 100;
      this.overtime = 100 - this.done;
    } else {
      // The regular case where we have less effective time than
      // estimated
      this.done = this.effective / this.estimated * 100;
    }
  }
}

Our component has three input properties:

estimated: This is the estimated time duration in milliseconds
effective: This is the effective time duration in milliseconds
height: This is the desired height of the efforts timeline in pixels

In the OnChanges lifecycle hook, we set two component member fields, which are based on the estimated and effective time:

done: This contains the width of the green bar, in percent, which displays the effective duration without any overtime that exceeds the estimated duration
overtime: This contains the width of the yellow bar, in percent, which displays any overtime, that is, any time duration that exceeds the estimated duration

Let's look at the template of the EffortsTimeline component and see how we can now use the done and overtime member fields to draw our timeline. We will create a new lib/efforts/efforts-timeline/efforts-timeline.html file:

<svg width="100%" [attr.height]="height">
  <rect [attr.height]="height"
        x="0" y="0" width="100%"
        class="efforts-timeline__remaining"></rect>
  <rect *ngIf="done" x="0" y="0"
        [attr.width]="done + '%'"
        [attr.height]="height"
        class="efforts-timeline__done"></rect>
  <rect *ngIf="overtime" [attr.x]="done + '%'" y="0"
        [attr.width]="overtime + '%'"
        [attr.height]="height"
        class="efforts-timeline__overtime"></rect>
</svg>

Our template is SVG-based, and it contains three rectangles, one for each of the bars that we want to display. The background bar that will be visible if there is remaining effort will always be displayed. Above the remaining bar, we conditionally display the done and the overtime bar using the calculated widths from our component class. Now, we can go ahead and include the EffortsTimeline class in our Efforts component. This way, our users will have visual feedback when they edit the estimated or effective duration, and it provides them a sense of overview.
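Before wiring the component up, a quick sanity check of the percentage math, with assumed numbers of 4 hours estimated against 6 hours effective:

var estimated = 4 * 60 * 60 * 1000; // 4h in milliseconds
var effective = 6 * 60 * 60 * 1000; // 6h in milliseconds

// estimated < effective, so the overtime branch applies:
var done = estimated / effective * 100; // 66.67 -> green bar width in %
var overtime = 100 - done;              // 33.33 -> yellow bar width in %

So the green bar covers two-thirds of the timeline and the yellow overtime bar the remaining third.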
Let's look into the template of the Efforts component to see how we integrate the timeline:

…
<ngc-efforts-timeline height="10"
                      [estimated]="estimated"
                      [effective]="effective">
</ngc-efforts-timeline>

As we have the estimated and effective duration times readily available in our Efforts component, we can simply create a binding to the EffortsTimeline component input properties.

The Efforts component displaying our newly-created efforts timeline component (the overtime of six hours is visualized with the yellow bar)

Summary

In this article, we learned about the architecture of the components in Angular. We also learned how to create a tabbed interface component and how to create a visual efforts timeline using SVG.

Resources for Article:

Further resources on this subject:
Angular 2.0 [article]
AngularJS Project [article]
AngularJS [article]

Animating Elements

Packt
05 Jul 2016
17 min read
In this article by Alex Libby, author of the book Mastering PostCSS for Web Design, you will study animating elements. A question: if you had the choice of three websites, one static, one with badly done animation, and one that has been enhanced with subtle use of animation, which would you choose? Well, my hope is that the answer to that question would be number three: animation can really make a website stand out if done well, or fail miserably if done badly! So far, our content has been relatively static, save for the use of media queries. It's time, though, to take a look at how PostCSS can help make animating content a little easier. We'll begin with a quick recap on the basics of animation before exploring the route from pure jQuery-based animation through to SASS and finally across to PostCSS. We will cover a number of topics throughout this article, which will include:

A recap on the use of jQuery to animate content
Switching to CSS-based animation
Exploring the use of prebuilt libraries, such as Animate.css

(For more resources related to this topic, see here.)

Let's make a start!

Revisiting basic animations

Animation is quickly becoming a king in web development; more and more websites are using animations to help bring life and keep content fresh. If done correctly, they add an extra layer of experience for the end user; if done badly, the website will soon lose more custom than water through a sieve! Throughout the course of the article, we'll take a look at making the change from writing standard animation through to using processors, such as SASS, and finally switching to using PostCSS. I can't promise you that we'll be creating complex JavaScript-based demos, such as the Caaaat animation (http://roxik.com/cat/; try resizing the window!), but we will see that using PostCSS is really easy when creating animations for the browser. To kick off our journey, we'll start with a quick look at traditional animation. How many times have you had to use .animate() in jQuery over the years? Thankfully, we have the power of CSS3 to help with simple animations, but there was a time when we had to animate content using jQuery. As a quick reminder, try running animate.html from the T34 - Basic animation using jQuery animate() folder. It's not going to set the world on fire, but it is a nice reminder of times gone by, when we didn't know any better. If we take a look at a profile of this animation from within a DOM inspector in a browser, such as Firefox, it would look something like this screenshot. While the numbers aren't critical, the key points here are the two dotted green lines and the fact that the results show a high degree of inconsistent activity. This is a good indicator that activity is erratic, with a low frame count, resulting in animations that are jumpy and less than 100% smooth. The great thing, though, is that there are options available to help provide smoother animations; we'll take a brief look at some of the options available before making the change to using PostCSS. For now, though, let's make that first step towards moving away from using jQuery, beginning with a look at the options available for reducing dependency on the use of .animate() or jQuery.

Moving away from jQuery

Animating content can be a contentious subject, particularly if jQuery or JavaScript is used. If we were to take a straw poll of 100 people and ask which they used, it is very likely that we would get mixed answers!
A key answer of "it depends" is likely to feature at or near the top of the list of responses; many will argue that animating content should be done using CSS, while others will affirm that JavaScript-based solutions still have value. Leaving this, shall we say, lively debate aside, if we're looking to move away from using jQuery and in particular .animate(), then we have some options available to us:

Upgrade your version of jQuery! Yes, this might sound at odds with the theme of this article, but the most recent versions of jQuery introduced the use of requestAnimationFrame, which improved performance, particularly on mobile devices.

A quick and dirty route is to use the jQuery Animate Enhanced plugin, available from http://playground.benbarnett.net/jquery-animate-enhanced/ - although a little old, it still serves a useful purpose. It will (where possible) convert .animate() calls into CSS3 equivalents; it isn't able to convert all of them, so any that are not converted will remain as .animate() calls.

Using the same principle, we can even take advantage of the JavaScript animation library, GSAP. The Greensock team have made available a plugin (from https://greensock.com/jquery-gsap-plugin) that replaces jQuery.animate() with their own GSAP library. The latter is reputed to be 20 times faster than standard jQuery!

With a little effort, we can look to rework our existing code. In place of using .animate(), we can add the equivalent CSS3 style(s) into our stylesheet and replace existing calls to .animate() with either .removeClass() or .addClass(), as appropriate.

We can switch to using libraries, such as Transit (http://ricostacruz.com/jquery.transit/). It still requires the use of jQuery, but gives better performance than using the standard .animate() command.

Another alternative is Velocity JS by Julian Shapiro, available from http://julian.com/research/velocity/; this has the benefit of not having jQuery as a dependency. There is even talk of incorporating all or part of the library into jQuery, as a replacement for .animate(). For more details, check out the issue log at https://github.com/jquery/jquery/issues/2053.

Many people automatically assume that CSS animations are faster than JavaScript (or even jQuery). After all, we don't need to call an external library (jQuery); we can use styles that are already baked into the browser, right? The truth is not as straightforward as this. In short, the right use of either will depend on your requirements and the limits of each method. For example, CSS animations are great for simple state changes, but if sequencing is required, then you may have to resort to using the JavaScript route. The key, however, is less in the method used and more in how many frames per second are displayed on the screen. Most people cannot distinguish above 60fps; this produces a very smooth experience. Anything less than around 25fps will produce blur and occasionally appear jerky. It's up to us to select the best method available that produces the most effective solution. To see the difference in frame rates, take a look at https://frames-per-second.appspot.com/; the animations on this page can be controlled, and it's easy to see why 60fps produces a superior experience! So, which route should we take? Well, over the next few pages, we'll take a brief look at each of these options. In a nutshell, they are all methods that either improve how animations run or allow us to remove the dependency on .animate(), which we know is not very efficient!
True, some of these alternatives still use jQuery, but the key here is that your existing code could be using any or a mix of these methods. All of the demos over the next few pages were run at the same time as a YouTube video was playing; this was to help simulate a little load and get a more realistic comparison. Running animations under load means less graphics processing power is available, which results in a lower FPS count. Let's kick off with a look at our first option: the Transit JS library.

Animating content with Transit.js

In an ideal world, any project we build will have as few dependencies as possible; this applies equally to JavaScript or jQuery-based content as to CSS styling. To help with reducing dependencies, we can use libraries such as TransitJS or Velocity to construct our animations. The key here is to make use of the animations that these libraries create as a basis for applying styles that we can then manipulate using .addClass() or .removeClass(). To see what I mean, let's explore this concept with a simple demo:

We'll start by opening up a copy of animate.html. To make it easier, we need to change the reference to square-small from a class to an ID:

<div id="square-small"></div>

Next, go ahead and add in a reference to the Transit library immediately before the closing </head> tag:

<script src="js/jquery.transit.min.js"></script>

The Transit library uses a slightly different syntax, so go ahead and update the call to .animate() as indicated:

smallsquare.transition({x: 280}, 'slow');

Save the file and then try previewing the results in a browser. If all is well, we should see no material change in the demo, but the animation will be significantly smoother: the frame count is higher, at 44.28fps, with fewer dips. Let's compare this with the same profile screenshot taken for Revisiting basic animations earlier in this article. Notice anything? Profiling browser activity can be complex, but there are only two things we need to concern ourselves with here: the fps value and the state of the green line. The fps value, or frames per second, is over three times higher, and for a large part, the green line is more consistent, with fewer, more short-lived dips. This means that we have a smoother, more consistent performance; at approximately 44fps, the average frame rate is significantly better than using standard jQuery. But we're still using jQuery! There is a difference, though: libraries such as Transit or Velocity convert animations where possible to CSS3 equivalents. If we take a peek under the covers, we can see this in the flesh. We can use this to our advantage by removing the need to use .animate() and simply using .addClass() or .removeClass(). If you would like to compare our simple animation when using Transit or Velocity, there are examples available in the code download, as demos T35A and T35B, respectively. To take it to the next step, we can use the Velocity library to create a version of our demo using plain JavaScript. We'll see how as part of the next demo. Beware, though: this isn't an excuse to still use JavaScript; as we'll see, there is little difference in the frame count!

Animating with plain JavaScript

Many developers are used to working with jQuery. After all, it makes it a cinch to reference just about any element on a page! Sometimes, though, it is preferable to work in native JavaScript; this could be for reasons of speed.
If we only need to support newer browsers (such as IE11 or Edge, and recent versions of Chrome or Firefox), then adding jQuery as a dependency isn't always necessary. The beauty of libraries such as Transit (or Velocity) is that we don't always have to use jQuery to still achieve the same effect; as we'll see shortly, removing jQuery can help improve matters! Let's put this to the test and adapt our earlier demo to work without using jQuery:

We'll start by extracting a copy of the T35B folder from the code download bundle. Save this to the root of our project area.

Next, we need to edit a copy of animate.html within this folder. Go ahead and remove the link to jQuery and then remove the link to velocity.ui.min.js; we should be left with this in the <head> of our file:

<link rel="stylesheet" type="text/css" href="css/style.css">
<script src="js/velocity.min.js"></script>
</head>

A little further down, alter the <script> block as shown:

<script>
  var smallsquare = document.getElementById('square-small');
  var animbutton = document.getElementById('animation-button');
  animbutton.addEventListener("click", function() {
    Velocity(document.getElementById('square-small'), {left: 280}, {duration: 'slow'});
  });
</script>

Save the file and then preview the results in a browser. If we monitor the performance of our demo using a DOM inspector, we can see a similar frame rate being recorded in our demo. With jQuery as a dependency no longer in the picture, we can clearly see that the frame rate is improved. The downside, though, is that support is reduced for some browsers, such as IE8 or 9. This may not be an issue for your website; both Microsoft and the jQuery Core Team have announced changes to drop support for IE8-10 and IE8 respectively, which will help encourage users to upgrade to newer browsers. It has to be said, though, that while using CSS3 is preferable for speed and for keeping our pages as lightweight as possible, using Velocity does provide a raft of extra opportunities that may be of use to your projects. The key here, though, is to carefully consider whether you really do need them or whether CSS3 will suffice and allow you to use PostCSS.

Switching classes using jQuery

At this point, there is one question that comes to mind: what about using class-based animation? By this, I mean dropping any dependency on external animation libraries and switching to plain jQuery with either the .addClass() or .removeClass() method. In theory, it sounds like a great idea; we can remove the need to use .animate() and simply swap classes as needed, right? Well, it's an improvement, but performance is still lower than using a combination of pure JavaScript and switching classes. It will all boil down to a trade-off between the ease of using jQuery to reference elements and the speed of pure JavaScript:

1. We'll start by opening a copy of animate.html from the previous exercise. First, go ahead and replace the call to VelocityJS with this line within the <head> of our document:

<script src="js/jquery.min.js"></script>

2. Next, remove the code between the <script> tags and replace it with this:

var smallsquare = $('.rectangle').find('.square-small');
$('#animation-button').on("click", function() {
  smallsquare.addClass("move");
  smallsquare.one('transitionend', function(e) {
    $('.rectangle').find('.square-small').removeClass("move");
  });
});

3. Save the file.
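For the class swap to animate anything, the stylesheet (shipped with the book's demo files, not shown in the text) needs a matching rule. A rough sketch of what the move class might contain, with values assumed to mirror the 280px shift used earlier:

.square-small {
  position: relative;
  /* This transition is what the transitionend handler above waits for */
  transition: transform 1s ease-in-out;
}

.square-small.move {
  transform: translateX(280px);
}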
If we preview the results in a browser, we should see no apparent change in how the demo appears, but the transition is marginally more performant than using a combination of jQuery and Transit. The real change in our code, though, will be apparent if we take a peek under the covers using a DOM inspector. Instead of using .animate(), we are using CSS3 animation styles to move our square-small <div>. Most browsers will accept the use of transition and transform, but it is worth running our code through a process such as Autoprefixer to ensure we apply the right vendor prefixes to our code. The beauty of using CSS3 here is that while it might not suit large, complex animations, we can at least begin to incorporate the use of external stylesheets, such as Animate.css, or even use a preprocessor, such as SASS, to create our styles. It's an easy change to make, so without further ado, and as the next step on our journey to using PostCSS, let's take a look at this in more detail. If you would like to create custom keyframe-based animations, then take a look at http://cssanimate.com/, which provides a GUI-based interface for designing them and will pipe out the appropriate code when requested!

Making use of prebuilt libraries

Up to this point, all of our animations have had one thing in common: they are individually created and stored within the same stylesheet as the other styles for each project. This will work perfectly well, but we can do better. After all, it's possible that we may well create animations that others have already built! Over time, we may also build up a series of animations that can form the basis of a library that can be reused for future projects. A number of developers have already done this. One example of note is the Animate.css library created by Dan Eden. In the meantime, let's run through a quick demo of how it works as a precursor to working with it in PostCSS. The images used in this demo are referenced directly from the LoremPixum website as placeholder images. Let's make a start:

We'll start by extracting a copy of the T37 folder from the code download bundle. Save the folder to our project area.

Next, open a new file and add the following code:

body { background: #eee; }

#gallery {
  width: 745px;
  height: 500px;
  margin-left: auto;
  margin-right: auto;
}

#gallery img {
  border: 0.25rem solid #fff;
  margin: 20px;
  box-shadow: 0.25rem 0.25rem 0.3125rem #999;
  float: left;
}

.animated {
  animation-duration: 1s;
  animation-fill-mode: both;
}

.animated:hover {
  animation-duration: 1s;
  animation-fill-mode: both;
}

Save this as style.css in the css subfolder within the T37 folder. Go ahead and preview the results in a browser. If all is well, then we should see something akin to this screenshot. If we run the demo, we should see images run through different types of animation; there is nothing special or complicated here. The question, though, is how does it all fit in with PostCSS? Well, there's a good reason for this; there will be some developers who have used Animate.css in the past and will be familiar with how it works; we will also be using the postcss-animation plugin later in Updating code to use PostCSS, which is based on the Animate.css stylesheet library. For those of you who are not familiar with the stylesheet library, though, let's quickly run through how it works within the context of our demo.

Dissecting the code of our demo

The effects used in our demo are quite striking.
Indeed, one might be forgiven for thinking that they required a lot of complex JavaScript! This, however, could not be further from the truth. The Animate.css file contains a number of animations based on @keyframes, similar to this:

@keyframes bounce {
  0%, 20%, 50%, 80%, 100% {transform: translateY(0);}
  40% {transform: translateY(-1.875rem);}
  60% {transform: translateY(-0.9375rem);}
}

We pull in the animations using the usual call to the library within the <head> section of our code. We can then call any animation by name from within our code:

<div id="gallery">
  <a href="#"><img class="animated bounce" src="http://lorempixum.com/200/200/city/1" alt="" /></a>
  ...
</div>
</body>

You will notice the addition of the .animated class in our code. This controls the duration and timing of the animation, which are set according to which animation name has been added to the code. The downside of not using JavaScript (or jQuery, for that matter) is that the animation will only run once, when the demo is loaded; we can set it to run continuously by adding the .infinite class to the element being animated (this is part of the Animate library). We can fake a click option in CSS, but it is an experimental hack that is not supported across all browsers. To effect any form of control, we really need to use JavaScript (or even jQuery)! If you are interested in the details of the hack, then take a look at this response on Stack Overflow at http://stackoverflow.com/questions/13630229/can-i-have-an-onclick-effect-in-css/32721572#32721572. Okay! Onward we go. We've covered the basic use of prebuilt libraries, such as Animate. It's time to step up a gear and make the transition to PostCSS.

Summary

In this article, we recapped the use of jQuery to animate content. We also looked into switching to CSS-based animation. Lastly, we saw in brief how to make use of prebuilt libraries.

Resources for Article:

Further resources on this subject:
Responsive Web Design with HTML5 and CSS3 - Second Edition [article]
Professional CSS3 [article]
Instant LESS CSS Preprocessor How-to [article]

Getting Started with VR Programming

Jake Rheude
04 Jul 2016
8 min read
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. This guide will assume that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D.

Getting Started

First and foremost, download the latest version of Unity3D from their website. Out of the four options, select "Personal", since it costs nothing to the user. Then download and run the installer. The installation process is straightforward. However, you must make sure that you select the "Android Build Support" component if you are planning on using an Android device, or "iOS Build Support" for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space. Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK, which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up. You will then see a window similar to the pop-up below on your screen. Click the "NEW" button in the top right corner to begin your first Google VR project. Give it any project name other than the default "New Unity Project" name. For this guide, I have chosen "VR Programming Tutorial" as the project name.

As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the "Import" button in the bottom right corner to include the SDK in your project.

In your project's "Assets" folder, there should be a folder named "GoogleVR". This is where all the necessary components are located in order to begin working with the SDK.

From the "Assets" folder, go into "GoogleVR" -> "DemoScenes" -> "HeadSetDemo". Double-click on the Unity icon that is named "DemoScene". You should see something similar to this upon opening the scene file. This is where you can preview the scene before playing it to get an idea of how the game objects will be laid out in the environment. So let's try that by clicking on the "Play" button. The scene will start out from the user's perspective, which would be the main camera.

There is a slight difference in how the left eye and right eye cameras display the environment. This is called distortion correction, which is intentionally designed that way in order to adapt the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional, to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key, to enable head movement. Make sure to press the keys in this order, otherwise you will only be rotating the display along the Z-axis. You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there; it's just that it does not appear in VR mode. Notice that the dot in the center of the display will turn into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive.
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the "VR Mode" button, the left eye and right eye cameras will simply go away and the main camera will be the only camera that displays the world space. VR Mode can be enabled/disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information.

How the scene is displayed when VR mode is disabled.

Note that as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left eye and right eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (for example, a player HUD) as a child 3D element with the main camera being its parent. If you wish to create interactive UI components, they must be done strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise. Now that we have gone over the basics of the Google VR SDK, let's move on to some programming.

Applying Attributes to Game Objects

When creating an interactive medium of any kind (in this case, a VR app), some of the most basic functions can end up being more complicated than they initially seem to be. We will demonstrate that by incorporating what seems to be simple user movement. In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, first we will remove the existing cube, as it will be an obstruction for our new cubes. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete". Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, selecting "3D Object" and then "Cube".

Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow. Now that we have added our cube, the next step is to add a script to the player's perspective object. For this project, we can use the "GvrMain" game object to incorporate the player's movement. In the inspector tab, click on the "Add Component" button, select "New Script" and create a new script titled "MoveToCube".

Once the script has been created, click on the cogwheel icon and select "Edit Script". Copy and paste the MoveToCube.cs code into it (a reconstructed sketch of this script follows after this walkthrough).

Next, add an Event Trigger component to your cube.

Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor. Copy and paste the CubeSelect.cs code into it (again, see the sketch after this walkthrough).

Click on the "Add New Event Type" button. Select "PointerClick". Click the + icon to add a method to refer to. In the left box, select the "Cube" game object. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method.

When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem as if they did not appear on the scene, only because they are overlapping each other.
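The original article presents the contents of MoveToCube.cs and CubeSelect.cs only as screenshots, so the listings are missing from this text. The following is a rough reconstruction, not the article's actual code: the GetCubePosition name and the GvrMain wiring come from the steps above, while the method bodies and the MoveTo helper are assumptions. In the project, each class would live in its own file.

using UnityEngine;

// MoveToCube.cs - attached to GvrMain; moves the player's rig to a target.
public class MoveToCube : MonoBehaviour
{
    public void MoveTo(Vector3 targetPosition)
    {
        // Keep the current height so the camera does not sink into the cube.
        transform.position = new Vector3(
            targetPosition.x, transform.position.y, targetPosition.z);
    }
}

// CubeSelect.cs - attached to each cube; wired to its PointerClick
// Event Trigger, which passes GvrMain in as the target game object.
public class CubeSelect : MonoBehaviour
{
    public void GetCubePosition(GameObject target)
    {
        MoveToCube mover = target.GetComponent<MoveToCube>();
        if (mover != null)
        {
            mover.MoveTo(transform.position);
        }
    }
}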
Change the positions of each cube so that they are separated from each other along the X and Z axes. Once completed, the scene should look something similar to the screenshot. Now you can run the VR app and see for yourself that we have incorporated player movement in this basic implementation.

Tips and General Advice

Many developers recommend that you do not incorporate any acceleration and/or deceleration into the main camera. Doing so will cause nausea for users and thus give them a negative experience with your VR application. Keep your VR app relatively simple! The user only has two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with multiple gestures (for example, looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration.

About the Author

Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving ecommerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software combined with their warehouse operations, that for any error, inaccuracy, or late shipment, not only will they reimburse you for that order, but they'll write you a check for $50.

Data Science with R

Packt
04 Jul 2016
16 min read
In this article by Matthias Templ, author of the book Simulation for Data Science with R, we will cover:

What is meant by data science
A short overview of what R is
The essential tools for a data scientist in R

(For more resources related to this topic, see here.)

Data science

Looking at the job market, there is no doubt that the industry needs experts in data science. But what is data science, and what is the difference between it and statistics or computational statistics? Statistics is computing with data. In computational statistics, methods and corresponding software are developed in a highly data-dependent manner using modern computational tools. Computational statistics has a huge intersection with data science. Data science is the applied part of computational statistics plus data management, including the storage of data, databases, and data security issues. The term data science is used when your work is driven by data, with a less strong component of method and algorithm development than in computational statistics, but with a lot of pure computer science topics related to storing, retrieving, and handling data sets. It is the marriage of computer science and computational statistics. As an example to show the differences, we take the broad area of visualization. A data scientist is also interested in pure process-related visualizations (airflows in an engine, for example), while in computational statistics, methods for the visualization of data and statistical results are only touched upon. Data science is the management of the entire modelling process, from data collection to automatized reporting and presenting the results. Storing and managing data, data pre-processing (editing, imputation), data analysis, and modelling are included in this process. Data scientists use statistics and data-oriented computer science tools to solve the problems they face.

R

R has become an essential tool for statistics and data science (Godfrey 2013). As soon as data scientists have to analyze data, R might be the first choice. The open source programming language and software environment R is currently one of the most widely used and popular software tools for statistics and data analysis. It is available at the Comprehensive R Archive Network (CRAN) as free software under the terms of the Free Software Foundation's GNU General Public License (GPL), in source code and binary form. The R Core Team defines R as an environment. R is an integrated suite of software facilities for data manipulation, calculation, and graphical display. Base R includes:

A suite of operators for calculations on arrays, mostly written in C and integrated in R
A comprehensive, coherent, and integrated collection of methods for data analysis
Graphical facilities for data analysis and display, either on-screen or in hard copy
A well-developed, simple, and effective programming language that includes conditional statements, loops, user-defined recursive functions, and input and output facilities
A flexible object-oriented system facilitating code reuse
High-performance computing with interfaces to compiled code and facilities for parallel and grid computing
The ability to be extended with (add-on) packages
An environment that allows communication with many other software tools

Each R package provides structured standard documentation including code application examples. Further documents (so-called vignettes) potentially show more applications of the packages and illustrate the dependencies between the implemented functions and methods.
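If you want to pull these vignettes up from the console, two built-in helpers do the job; the package name below is only an example:

## Lists the vignettes of one installed package in the browser
browseVignettes("dplyr")

## Lists the available vignette topics on the console instead
vignette(package = "dplyr")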
R is not only used extensively in the academic world; companies in the area of social media (Google, Facebook, Twitter, and Mozilla Corporation), the banking world (Bank of America, ANZ Bank, Simple), food and pharmaceutical areas (FDA, Merck, and Pfizer), finance (Lloyd, London, and Thomas Cook), technology companies (Microsoft), car construction and logistic companies (Ford, John Deere, and Uber), newspapers (The New York Times and New Scientist), and companies in many other areas use R in a professional context (see also Gentlemen 2009 and Tippmann 2015). International and national organizations nowadays widely use R in their statistical offices (Todorov and Templ 2012; Templ and Todorov 2016). R can be extended with add-on packages, and some of those extensions are especially useful for data scientists, as discussed in the following section.

Tools for data scientists in R

Data scientists typically like:

The flexibility in reading and writing data, including the connection to databases
Easy-to-use, flexible, and powerful data manipulation features
To work with modern statistical methodology
To use high-performance computing tools, including interfaces to foreign languages and parallel computing
Versatile presentation capabilities for generating tables and graphics, which can readily be used in text processing systems, such as LaTeX or Microsoft Word
To create dynamical reports
To build web-based applications
An economical solution

The tools presented in the following sections are related to these topics and help data scientists in their daily work.

Use a smart environment for R

Would you prefer to have one environment that includes all types of modern tools for scientific computing, programming and management of data and files, versioning, and output generation, that also supports a project philosophy, code completion, highlighting, markup languages and interfaces to other software, and automated connections to servers? Currently, two software products support this concept. The first one is Eclipse with the extension StatET, or the modified Eclipse IDE from Open Analytics called Architect. The second is a very popular IDE for R called RStudio, which also includes the named features and additionally integrates the packages shiny (RStudio, Inc. 2014) for web-based development and rmarkdown (Allaire et al. 2015). It provides a modern scientific computing environment, well designed, easy to use, and, most importantly, distributed under a GPL license.

Use of R as a mediator

Data exchange between statistical systems, database systems, or output formats is often required. In this respect, R offers very flexible import and export interfaces, either through its base installation but mostly through add-on packages, which are available from CRAN or GitHub. For example, the package xml2 (Wickham 2015a) allows you to read XML files. For importing delimited files, fixed-width files, and web log files, it is worth mentioning the package readr (Wickham and Francois 2015a) or data.table (Dowle et al. 2015) (function fread), which are supposed to be faster than the available functions in base R. The package XLConnect (Mirai Solutions GmbH 2015) can be used to read and write Microsoft Excel files, including formulas, graphics, and so on. The readxl package (Wickham 2015b) is faster for data import but does not provide export features.
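As a quick illustration of the two fast readers just mentioned, both turn a delimited file into a usable object with a single line; the file name here is only an example:

library("readr")
d <- read_csv("survey.csv")    ## returns a data frame

library("data.table")
dt <- fread("survey.csv")      ## returns a data.table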
The foreign package (R Core Team 2015) and a newer, promising package called haven (Wickham and Miller 2015) allow you to read file formats from various commercial statistical software. The connection to all major database systems is easily established with specialized packages. Note that the RODBC package (Ripley and Lapsley 2015) is slow but general, while other specialized packages exist for particular databases.

Efficient data manipulation as the daily job

Data manipulation, in general but especially with large data, can best be done with the dplyr package (Wickham and Francois 2015b) or the data.table package (Dowle et al. 2015). The computational speed of both packages is much higher than the data manipulation features of base R, while data.table is slightly faster than dplyr, using keys and fast binary-search-based methods for performance improvements. In the author's viewpoint, the syntax of dplyr is much easier for beginners to learn than the base R data manipulation features, and it is possible to write the dplyr syntax using data pipelines that are internally provided by the package magrittr (Bache and Wickham 2014). Let's take an example to see the logical concept. We want to compute a new variable ES2 as the square of EngineSize from the data set Cars93. For each group, we want to compute the minimum of the new variable. In addition, the results should be sorted in descending order:

data(Cars93, package = "MASS")
library("dplyr")
Cars93 %>%
  mutate(ES2 = EngineSize^2) %>%
  group_by(Type) %>%
  summarize(min.ES2 = min(ES2)) %>%
  arrange(desc(min.ES2))
## Source: local data frame [6 x 2]
##
##      Type min.ES2
## 1   Large   10.89
## 2     Van    5.76
## 3 Compact    4.00
## 4 Midsize    4.00
## 5  Sporty    1.69
## 6   Small    1.00

The code is somewhat self-explanatory, while data manipulation in base R and data.table needs more expertise in syntax writing. In the case of large data files that exceed the available RAM, interfaces to (relational) database management systems are available; see the CRAN task view on high-performance computing, which also includes information about parallel computing. Regarding data manipulation, the excellent packages stringr and stringi for string operations and lubridate for date-time handling should also be mentioned.

The requirement of efficient data preprocessing

A data scientist typically spends a major amount of time not only on data management issues but also on fixing data quality problems. It is out of the scope of this book to mention all the tools for each data preprocessing topic. As an example, we concentrate on one particular topic: the handling of missing values. The VIM package (Templ, Alfons, and Filzmoser 2011) (Kowarik and Templ 2016) can be used for visual inspection and imputation of data. It is possible to visualize missing values using suitable plot methods and to analyze the structure of missing values in microdata using univariate, bivariate, multiple, and multivariate plots. The information on missing values from specified variables is highlighted in selected variables. VIM can also evaluate imputations visually. Moreover, the VIMGUI package (Schopfhauser et al., 2014) provides a point-and-click graphical user interface (GUI). One plot, a parallel coordinate plot for missing values, is shown in the following graph. It highlights the values of certain chemical elements. In red, those values are marked where the chemical element Bi is missing.
It is easy to see missing-at-random situations with such plots, as well as to detect any structure in the missing pattern. Note that this data is compositional and thus transformed using a log-ratio transformation from the package robCompositions (Templ, Hron, and Filzmoser 2011):

library("VIM")
data(chorizonDL, package = "VIM")
## for missing values
x <- chorizonDL[, c(15, 101:110)]
library("robCompositions")
x <- cenLR(x)$x.clr
parcoordMiss(x,
    plotvars = 2:11, interactive = FALSE)
legend("top", col = c("skyblue", "red"), lwd = c(1, 1),
    legend = c("observed in Bi", "missing in Bi"))

To impute missing values, not only k-nearest neighbor and hot-deck methods are included, but also robust statistical methods implemented in an EM algorithm, for example, in the function irmi. The implemented methods can deal with a mixture of continuous, semi-continuous, binary, categorical, and count variables:

any(is.na(x))
## [1] TRUE
ximputed <- irmi(x)
## Time difference of 0.01330566 secs
any(is.na(ximputed))
## [1] FALSE

Visualization as a must

While in former times results were presented mostly in tables and data was analyzed by looking at its values on screen, nowadays the visualization of data and results has become very important. Data scientists often heavily use visualizations to analyze data, and also for reporting and presenting results. It's already a no-go not to make use of visualizations. R features not only its traditional graphics system but also an implementation of the grammar of graphics book (Wilkinson 2005) in the form of the ggplot2 package (Wickham 2009). Why should a data scientist make use of ggplot2? Because it is a very flexible, customizable, consistent, and systematic approach to generating graphics. It allows you to define your own themes (for example, corporate designs in companies) and supports the user with legends and an optimal plot layout. In ggplot2, the parts of a plot are defined independently. We do not go into details here and refer to (Wickham 2009), but here's a simple example to show the user-friendliness of the implementation:

library("ggplot2")
ggplot(Cars93, aes(x = Horsepower, y = MPG.city)) +
  geom_point() + facet_wrap(~Cylinders)

Here, we mapped Horsepower to the x variable and MPG.city to the y variable. We used Cylinders for faceting. We used geom_point to tell ggplot2 to produce scatterplots.

Reporting and web applications

Every analysis and report should be reproducible, especially when a data scientist does the job. Everything from the past should be computable at any time thereafter. Additionally, a task for a data scientist is to organize and manage text, code, data, and graphics. The use of dynamical reporting tools raises the quality of outcomes and reduces the workload. In R, the knitr package provides functionality for creating reproducible reports. It links code and text elements; the code is executed and the results are embedded in the text. Different output formats are possible, such as PDF, HTML, or Word. The structuring can be most simply done using rmarkdown (Allaire et al. 2015). Markdown is a markup language with many features, including headings of different sizes, text formatting, lists, links, HTML, JavaScript, LaTeX equations, tables, and citations. The aim is to generate documents from plain text. Corporate designs and styles can be managed through CSS stylesheets. For data scientists, it is highly recommended to use these tools in their daily work. We already mentioned the automated generation of HTML pages from plain text with rmarkdown. The shiny package (RStudio Inc.
2014) allows you to build web-based applications. A website generated with shiny changes instantly as users modify inputs. You can stay within the R environment to build shiny user interfaces. Interactivity can be integrated using JavaScript, and there is built-in support for animation and sliders. The following is a very simple example that includes a slider and presents a scatterplot with highlighting of outliers. We do not go into detail on the code, which should only prove that it is just as simple to make a web application with shiny:

library("shiny")
library("robustbase")
library("ggplot2")  ## used for the plot inside the server function

## Define server code
server <- function(input, output) {
  output$scatterplot <- renderPlot({
    x <- c(rnorm(input$obs - 10), rnorm(10, 5))
    y <- x + rnorm(input$obs)
    df <- data.frame("x" = x, "y" = y)
    df$out <- ifelse(covMcd(df)$mah > qchisq(0.975, 1),
                     "outlier", "non-outlier")
    ggplot(df, aes(x = x, y = y, colour = out)) + geom_point()
  })
}

## Define UI
ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      sliderInput("obs", "No. of obs.", min = 10, max = 500,
                  value = 100, step = 10)
    ),
    mainPanel(plotOutput("scatterplot"))
  )
)

## Shiny app object
shinyApp(ui = ui, server = server)

Building R packages

First, RStudio and the package devtools (Wickham and Chang 2016) make life easy when building packages. RStudio has a lot of facilities for package building, and its integrated package devtools includes features for checking, building, and documenting a package efficiently, and includes roxygen2 (Wickham, Danenberg, and Eugster) for the automated documentation of packages. When the code of a package is updated, load_all('pathToPackage') simulates a restart of R, the new installation of the package, and the loading of the newly built package. Note that there are many other functions available for testing, documenting, and checking. Secondly, build a package whenever you have written more than two functions and whenever you deal with more than one data set. If you use it only for yourself, you may be lazy with documenting the functions, to save time. Packages allow you to share code easily, to load all functions and data with one line of code, to have the documentation integrated, and to support consistency checks and additional integrated unit tests. The advice for beginners is to read the manual Writing R Extensions and use all the features that are provided by RStudio and devtools.

Summary

In this article, we discussed essential tools for data scientists in R. This covers methods for data pre-processing and data manipulation, as well as tools for reporting, reproducible work, visualization, R packaging, and writing web applications. A data scientist should learn to use the presented tools and deepen their knowledge of the proposed methods and software tools. Having learned these lessons, a data scientist is well prepared to face the challenges in data analysis, data analytics, data science, and data problems in practice.

References

Allaire, J.J., J. Cheng, Y. Xie, J. McPherson, W. Chang, J. Allen, H. Wickham, and H. Hyndman. 2015. rmarkdown: Dynamic Documents for R. http://CRAN.R-project.org/package=rmarkdown.

Bache, S.M., and H. Wickham. 2014. magrittr: A Forward-Pipe Operator for R. https://CRAN.R-project.org/package=magrittr.

Dowle, M., A. Srinivasan, T. Short, S. Lianoglou, R. Saporta, and E. Antonyan. 2015. data.table: Extension of Data.frame. https://CRAN.R-project.org/package=data.table.

Gentlemen, R. 2009. "Data Analysts Captivated by R's Power." New York Times. http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html.
Godfrey, A.J.R. 2013. "Statistical Analysis from a Blind Person's Perspective." The R Journal 5 (1): 73–80.

Kowarik, A., and M. Templ. 2016. "Imputation with the R Package VIM." Journal of Statistical Software.

Mirai Solutions GmbH. 2015. XLConnect: Excel Connector for R. http://CRAN.R-project.org/package=XLConnect.

R Core Team. 2015. foreign: Read Data Stored by Minitab, S, SAS, SPSS, Stata, Systat, Weka, dBase, …. http://CRAN.R-project.org/package=foreign.

Ripley, B., and M. Lapsley. 2015. RODBC: ODBC Database Access. http://CRAN.R-project.org/package=RODBC.

RStudio Inc. 2014. shiny: Web Application Framework for R. http://CRAN.R-project.org/package=shiny.

Schopfhauser, D., M. Templ, A. Alfons, A. Kowarik, and B. Prantner. 2014. VIMGUI: Visualization and Imputation of Missing Values. http://CRAN.R-project.org/package=VIMGUI.

Templ, M., A. Alfons, and P. Filzmoser. 2011. "Exploring Incomplete Data Using Visualization Techniques." Advances in Data Analysis and Classification 6 (1): 29–47.

Templ, M., and V. Todorov. 2016. "The Software Environment R for Official Statistics and Survey Methodology." Austrian Journal of Statistics 45 (1): 97–124.

Templ, M., K. Hron, and P. Filzmoser. 2011. robCompositions: An R Package for Robust Statistical Analysis of Compositional Data. John Wiley & Sons.

Tippmann, S. 2015. "Programming Tools: Adventures with R." Nature, 109–10. doi:10.1038/517109a.

Todorov, V., and M. Templ. 2012. R in the Statistical Office: Part II. Working paper 1/2012. United Nations Industrial Development Organization.

Wickham, H. 2009. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. http://had.co.nz/ggplot2/book.

Wickham, H. 2015a. xml2: Parse XML. http://CRAN.R-project.org/package=xml2.

Wickham, H. 2015b. readxl: Read Excel Files. http://CRAN.R-project.org/package=readxl.

Wickham, H., and W. Chang. 2016. devtools: Tools to Make Developing R Packages Easier. https://CRAN.R-project.org/package=devtools.

Wickham, H., and R. Francois. 2015a. readr: Read Tabular Data. http://CRAN.R-project.org/package=readr.

Wickham, H., and R. Francois. 2015b. dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr.

Wickham, H., and E. Miller. 2015. haven: Import SPSS, Stata and SAS Files. http://CRAN.R-project.org/package=haven.

Wickham, H., P. Danenberg, and M. Eugster. roxygen2: In-Source Documentation for R. https://github.com/klutometis/roxygen.

Wilkinson, L. 2005. The Grammar of Graphics (Statistics and Computing). Secaucus, NJ, USA: Springer-Verlag New York, Inc.

Resources for Article:

Further resources on this subject:
Adding Media to Our Site [article]
Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article]
JavaScript Execution with Selenium [article]

Voice Interaction and Android Marshmallow

Raka Mahesa
30 Jun 2016
6 min read
"Jarvis, play some music." You might imagine that to be a quote from some Iron Man stories (and hey, that might be an actual quote), but if you replace the "Jarvis" part with "OK Google," you'll get an actual line that you can speak to your Android phone right now that will open a music player and play a song. Go ahead and try it out yourself. Just make sure you're on your phone's home screen when you do it. This feature is called Voice Action, and it was actually introduced years ago in 2010, though back then it only worked on certain apps. However, Voice Action only accepts a single-line voice command, unlike Jarvis who usually engages in a conversation with its master. For example, if you ask Jarvis to play music, it will probably reply by asking what music you want to play. Fortunately, this type of conversation will no longer be limited to movies or comic books, because with Android Marshmallow, Google has introduced an API for that: the Voice Interaction API. As the name implies, the Voice Interaction API enables you to add voice-based interaction to its app. When implemented properly, the user will be able to command his/her phone to do a particular task without any touch interaction just by having a conversation with the phone. Pretty similar to Jarvis, isn't it? So, let's try it out! One thing to note before beginning: the Voice Interaction API can only be activated if the app is launched using Voice Action. This means that if the app is opened from the launcher via touch, the API will return a null object and cannot be used on that instance. So let’s cover a bit of Voice Action first before we delve further into using the Voice Interaction API. Requirements To use the Voice Interaction API, you need: Android Studio v1.0 or above Android 6.0 (API 23) SDK A device with Android Marshmallow installed (optional) Voice Action Let's start by creating a new project with a blank activity. You won’t use the app interface and you can use the terminal logging to check what app does, so it's fine to have an activity with no user interface here. Okay, you now have the activity. Let’s give the user the ability to launch it using a voice command. Let's pick a voice command for our app—such as a simple "take a picture" command? This can be achieved by simply adding intent filters to the activity. Add these lines to your app manifest file and put them below the original intent filter of your app activity. <intent-filter> <action android_name="android.media.action.STILL_IMAGE_CAMERA" /> <category android_name="android.intent.category.DEFAULT" /> <category android_name="android.intent.category.VOICE" /> </intent-filter> These lines will notify the operating system that your activity should be triggered when a certain voice command is spoken. The action "android.media.action.STILL_IMAGE_CAMERA" is associated with the "take a picture" command, so to activate the app using a different command, you need to specify a different action. Check out this list if you want to find out what other commands are supported. And that's all you need to do to implement Voice Action for your app. Build the app and run it on your phone. So when you say "OK Google, take a picture", your activity will show up. Voice Interaction All right, let's move on to Voice Interaction. When the activity is created, before you start the voice interaction part, you must always check whether the activity was started from Voice Action and whether the VoiceInteractor service is available. 
Let's say you want your app to first ask the user which side he/she is on, then change the app's background color accordingly. If the user chooses the dark side, the color will be black, but if the user chooses the light side, the color will be white. Sounds like a simple and fun app, doesn't it?

So first, let's define which options are available for the user to choose. You do this by creating an instance of VoiceInteractor.PickOptionRequest.Option for each available choice. Note that you can associate more than one word with a single option, as can be seen in the following code:

VoiceInteractor.PickOptionRequest.Option option1 = new VoiceInteractor.PickOptionRequest.Option("Light", 0);
option1.addSynonym("White");
option1.addSynonym("Jedi");

VoiceInteractor.PickOptionRequest.Option option2 = new VoiceInteractor.PickOptionRequest.Option("Dark", 1);
option2.addSynonym("Black");
option2.addSynonym("Sith");

The next step is to define a voice interaction request and tell the VoiceInteractor service to execute that request. For this app, use PickOptionRequest as the request object. You can check out the other request types on this page.

VoiceInteractor.PickOptionRequest.Option[] options = new VoiceInteractor.PickOptionRequest.Option[] { option1, option2 };
VoiceInteractor.Prompt prompt = new VoiceInteractor.Prompt("Which side are you on?");
getVoiceInteractor().submitRequest(new VoiceInteractor.PickOptionRequest(prompt, options, null) {
    //Handle each option here
});

Finally, determine what to do based on the choice picked by the user. This time, we'll simply check the index of the selected option and change the app's background color based on that (we won't delve into how to change the background color here; let's leave that for another occasion).

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
    if (finished && selections.length == 1) {
        if (selections[0].getIndex() == 0)
            changeBackgroundToWhite();
        else if (selections[0].getIndex() == 1)
            changeBackgroundToBlack();
    }
}

@Override
public void onCancel() {
    closeActivity();
}

And that's it! When you run your app on your phone and launch it using Voice Action, it should ask which side you're on. You've only learned the basics here, but this should be enough to add a little voice interactivity to your app. And if you ever want to create a Jarvis version, you just need to add "sir" to every question your app asks.

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also tweets regularly as @legacy99.

Angular 2 Components: Making app development easier

Mary Gualtieri
30 Jun 2016
5 min read
When Angular 2 was announced in October 2014, the JavaScript community went into a frenzy. What was going to happen to the beloved Angular 1, which so many developers loved? Could change be a good thing? Change brings the chance for improvement, so we should welcome Angular 2 with open arms and embrace it.

One of the biggest changes from Angular 1 to Angular 2 was the purpose of a component. In Angular 2, components are the main way to build and specify logic on a page; in other words, a component is where you define a specific responsibility of your application. In Angular 1, this was achieved through directives, controllers, and scope. With this change, Angular 2 offers better functionality and actually makes it easier to build applications. Angular 2 components also ensure that code from different sections of the application will not interfere with one another.

To build a component, let's first look at what is needed to create it:

- An association with a DOM/host element
- Well-defined input and output properties as its public API
- A template that describes how the component is rendered on the HTML page
- Configuration of the dependency injection

Input and output properties

Input and output properties are considered the public API of a component, allowing you to access the backend data. Data flows into a component through its input properties and out of the component through its output properties. Together, the input and output properties represent a component to the rest of your application.

Template

A template is needed in order to render a component on a page. In order to render the template, you must have a list of the directives that can be used in it.

Host elements

In order for a component to be rendered in the DOM, the component must associate itself with a DOM, or host, element. A component can interact with its host element by listening to its events, updating its properties, and invoking methods on it.

Dependency Injection

Dependency Injection is when a component depends on a service. You request this service through a constructor, and the framework provides you with it. This is significant because you can depend on interfaces instead of concrete types, which enables testability and gives you more control. Dependency Injection is created and configured in directive and component decorators.

Bootstrap

In Angular, you have to bootstrap in order to initialize your application, either through automation or by initializing it manually. In Angular 1, to automatically bootstrap your app, you added ng-app to your HTML file; to manually bootstrap it, you would call angular.bootstrap(document, ['myApp']);. In Angular 2, you bootstrap by simply calling bootstrap() and passing in your root component. It's important to remember that bootstrapping in Angular is completely different from Twitter Bootstrap.

Directives

Directives are essentially components without a template. The purpose behind a directive is to allow components to interact with one another. Another way to think of it is that a component is a directive with a template. You still have the option to write a directive with a decorator.

Selectors

Selectors are very easy to understand. Use a selector so that Angular can identify the component; the selector is what you use to call the component in the HTML file. For example, if your selector is called app, you can use <app></app> to call the component in the HTML file.

Let's build a simple component!
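Before walking through the steps, here is a minimal sketch of where we will end up. It is written against the Angular 2 beta API this article targets (angular2/core for the Component decorator, angular2/platform/browser for bootstrap); the file and class names (todo.ts, app.ts, TodoComponent, AppComponent) are illustrative choices, not taken from the original article:

// todo.ts: a child component with a selector, a template, and an exported class
// (names here are illustrative, not from the article)
import {Component} from "angular2/core";

@Component({
    selector: "todo",
    template: "<p>A simple todo component</p>"
})
export class TodoComponent {}

// app.ts: the root component that imports and bootstraps everything
import {Component} from "angular2/core";
import {bootstrap} from "angular2/platform/browser";
import {TodoComponent} from "./todo";

@Component({
    selector: "myapp",
    // Listing TodoComponent here lets the template use <todo></todo>
    directives: [TodoComponent],
    template: "<h1>My app</h1><todo></todo>"
})
class AppComponent {}

// Initialize the application with the root component
bootstrap(AppComponent);

With the root component's selector set to myapp, placing <myapp></myapp> in the HTML page tells Angular where to render the component tree.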
Let's walk through the steps required to actually build a component using Angular 2; each step corresponds to a piece of the sketch above:

Step 1: Add a component decorator.
Step 2: Add a selector. In your HTML file, use <myapp></myapp> to call the template.
Step 3: Add a template.
Step 4: Add a class to represent the component.
Step 5: Bootstrap the component class.
Step 6: Finally, import both the bootstrap and component files.

This is a root component. In Angular, you have what is called a component tree, and everything comes back to the component tree. The question that you must ask yourself is: what does a component look like if it is not a root component? Perform the following steps to build one:

Step 1: Add the Component import. For every component that you create, it is important to add import {Component} from "angular2/core";.
Step 2: Add a selector and a template.
Step 3: Export the class that represents the component.
Step 4: Switch to the root component, then import the new component's file, using the relative path (./todo) to the file.
Step 5: Add an array of directives to the root component in order to be able to use the new component.

Let's review. In order to make a component, you must associate host elements, have well-defined input and output properties, have a template, and configure Dependency Injection. This is all achieved through the use of selectors, directives, and a template. Angular 2 is still in the beta stages, but once it arrives, it will be a game changer for the development world.

About the author

Mary Gualtieri is a full stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.