How-To Tutorials

Android Game Development with Unity3D

Packt
23 Nov 2016
8 min read
In this article by Wajahat Karim, author of the book Mastering Android Game Development with Unity, we will start creating addictive, fun games using a very famous game engine called Unity3D. In this article, we will cover the following topics:

- Game engines and Unity3D
- Features of Unity3D
- Basics of Unity game development

Game engines and Unity3D

A game engine is a software framework designed for the creation and development of video games. Many tools and frameworks are available that let game designers and developers code a game quickly and easily without building everything from the ground up. Over time, game engines have matured into feature-rich environments that are easier for developers to work with. Starting from native code frameworks for Android such as AndEngine, Cocos2d-x, and LibGDX, game engines moved on to provide clean user interfaces and drag-and-drop functionality that make game development easier. These engines differ in user interface, features, porting support, and many other things, but they all have one thing in common: in the end, they create video games.

Unity (http://unity3d.com) is a cross-platform game engine developed by Unity Technologies. It was first announced at the Apple Worldwide Developers Conference in 2005 and supported game development only for Mac OS, but it has since been extended to target more than 15 desktop, mobile, and console platforms. It is notable for its one-click ability to port games to multiple platforms, including BlackBerry 10, Windows Phone 8, Windows, OS X, Linux, Android, iOS, Unity Web Player (including Facebook), Adobe Flash, PlayStation 3, PlayStation 4, PlayStation Vita, Xbox 360, Xbox One, Wii U, and Wii.

Unity has a fantastic interface that lets developers manage a project efficiently from the word go. Its drag-and-drop workflow, combined with behavior scripts written in C#, JavaScript (UnityScript), or Boo, makes it easy to attach custom logic and functionality to visual objects. Unity has proven quite easy to learn for developers who are just starting out with game development, and larger studios have also adopted it, for good reason. Unity is one of those engines that support both 2D and 3D games without confusing developers. Thanks to its popularity across the game development industry, it has a vast collection of online tutorials, great documentation, and a very helpful community of developers.
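To get a first taste of what such a behavior script looks like, here is a minimal, hypothetical C# sketch (the class name and speed value are illustrative, not taken from the book). Attached to a game object, it moves that object along the X axis every frame:

```csharp
using UnityEngine;

// Hypothetical example script: moves the game object it is attached to.
public class SimpleMover : MonoBehaviour
{
    // Public fields show up in the Inspector, so designers can tweak
    // the value without touching the code.
    public float speed = 5f;

    // Called by the engine once per rendered frame.
    void Update()
    {
        // Time.deltaTime scales the movement by the frame duration,
        // keeping the speed consistent across different frame rates.
        transform.Translate(speed * Time.deltaTime, 0f, 0f);
    }
}
```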
Features of Unity3D

Unity is a game development ecosystem comprising a powerful rendering engine, intuitive tools, rapid workflows for 2D and 3D games, all-in-one deployment support, and thousands of ready-made free and paid assets, backed by a helpful developer community. The feature list includes the following:

- An easy workflow that allows developers to rapidly assemble scenes in an intuitive editor workspace
- Quality game creation: AAA visuals, high-definition audio, and full-throttle action without on-screen glitches
- Dedicated tools for both 2D and 3D game creation, with shared conventions to make things easy for developers
- A unique and flexible animation system for creating natural animations with less time and effort
- Smooth frame rates with reliable performance on every platform where developers publish their games
- One-click deployment to all platforms, from desktops, browsers, and mobiles to consoles, within minutes
- Reduced development time through the ready-made, reusable assets available in the huge Asset Store

Basics of Unity game development

Before delving into the details of Unity3D and game development concepts, let's have a look at some of the very basics of Unity 5.0. We will go through the Unity interface, menu items, using assets, creating scenes, and publishing builds.

Unity editor interface

When you launch Unity 5.0 for the first time, you will be presented with an editor with a few panels on the left, right, and bottom of the screen. The following screenshot shows the editor interface when it is first launched:

Fig 1.7 Unity 5 editor interface at first launch

First of all, take your time to look over the editor and become a little familiar with it. The Unity editor is divided into small panels and views, which can be dragged around to customize the workspace to the developer's or designer's needs. Unity 5 comes with some prebuilt workspace layout templates, which can be selected from the Layout drop-down menu at the top-right corner of the screen, as shown in the following screenshot:

Fig 1.8 Unity 5 editor layouts

The layout displayed in the preceding screenshot is the Default layout. You can select each of these layouts to see how the editor's interface changes and how the different panels are placed in each one. This book uses the 2 by 3 workspace layout for the whole game. The following figure shows the 2 by 3 workspace with the names of the views and panels highlighted:

Fig 1.9 Unity 5 2 by 3 layout with views and panel names

As you can see in the preceding figure, the Unity editor contains different views and panels. Each panel and view has a specific purpose, described as follows.

Scene view

The Scene view is the whole stage for game development, and it contains every asset in the game, from a tiny point to a heavy 3D model. The Scene view is used to select and position environments, characters, enemies, the player, cameras, and every other object that can be placed on the stage of the game. All objects that can be placed and shown in the game are called game objects. The Scene view lets developers manipulate game objects: selecting, scaling, rotating, deleting, moving, and so on. It also provides controls such as navigation and transformation. In simple words, the Scene view is the interactive sandbox for developers and designers.

Game view

The Game view is the final representation of how your game will look when published and deployed on the target devices, and it is rendered from the cameras in the scene. This view is connected to the play mode navigation bar at the top center of the whole Unity workspace. The play mode navigation bar is shown in the following figure:
Fig 1.14 Play mode bar

When the game is played in the editor, this control bar turns blue. A very interesting feature of Unity is that it allows developers to pause the game while it is running: they can inspect and change properties, transforms, and much more at runtime, without recompiling the whole game, which makes for a quick workflow.

Hierarchy view

The Hierarchy view is the first place to select or handle any game object available in the scene. It contains every game object in the current scene. It is a tree-type structure, which lets developers apply the parent-child concept to game objects easily. The following figure shows a simple Hierarchy view:

Fig 1.16 Hierarchy view

Project browser panel

This panel looks like a view, but it is called the Project browser panel. It is an embedded file directory in Unity and contains all the files and folders included in the game project. The following figure shows a simple Project browser panel:

Fig 1.17 Project browser panel

The left side of the panel shows a hierarchical directory, while the rest of the panel shows the files, or assets as they are called in Unity. Unity represents these files with different icons according to their file types. These files can be sprite images, textures, model files, sounds, and so on. You can search for any specific file by typing in the search text box. On the right side of the search box are button controls for further filters, such as animation files, audio clip files, and so on. An interesting thing about the Project browser panel is that if a file is not available in the project's assets, Unity starts looking for it on the Unity Asset Store and presents you with the available free and paid assets.

Inspector panel

This is the most important panel for development in Unity. Unity structures the whole game in the form of game objects and assets. Game objects in turn contain components such as transforms, colliders, scripts, meshes, and so on. Unity lets developers manage the components of each game object through the Inspector panel. The following figure shows a simple Inspector panel for a game object:

Fig 1.18 Inspector panel

These components vary in type, for example, Physics, Mesh, Effects, Audio, UI, and so on. A component can be added to any game object by selecting it from the Component menu. The following figure shows the Component menu:

Fig 1.19 Components menu

Summary

In this article, you learned about game engines such as Unity3D, which is used to create games for Android devices. We also discussed the important features of Unity, along with the basics of its development environment.

Debugging in Vulkan

Packt
23 Nov 2016
16 min read
In this article by Parminder Singh, author of Learning Vulkan, we learn about debugging in Vulkan in order to avoid unpleasant mistakes.

Vulkan allows you to perform debugging through validation layers. These validation layer checks are optional and can be injected into the system at runtime. Traditional graphics APIs perform validation up front using an error-checking mechanism that is a mandatory part of the pipeline. This is indeed useful in the development phase, but it becomes an overhead at the release stage, because the validation bugs have usually already been fixed during development. Such compulsory checks cause the CPU to spend a significant amount of time on error checking. Vulkan, on the other hand, is designed to offer maximum performance, and its optional validation process and debugging model play a vital role in that. Vulkan assumes the application has done its homework using the validation and debugging capabilities available at the development stage, so it can be trusted at the release stage.

In this article, we will learn the validation and debugging process of a Vulkan application. We will cover the following topics:

- Peeking into Vulkan debugging
- Understanding LunarG validation layers and their features
- Implementing debugging in Vulkan

Peeking into Vulkan debugging

Vulkan debugging validates the application's implementation. It not only surfaces errors but also performs other validations, such as checking for proper API usage. It does so by verifying each parameter passed to the API, warning about potentially incorrect and dangerous API practices, and reporting performance-related warnings when the API is not used optimally. By default, debugging is disabled, and it is the application's responsibility to enable it. Debugging works only for the layers that are explicitly enabled at the instance level at the time of instance creation (VkInstance). When debugging is enabled, it inserts itself into the call chain of the Vulkan commands the layer is interested in. For each command, the debugging machinery visits all the enabled layers and checks for any potential error, warning, debugging information, and so on.

Debugging in Vulkan is simple. The following overview describes the steps required to enable it in an application:

1. Enable the debugging capabilities by adding the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level.
2. Define the set of validation layers intended for debugging. For example, we are interested in the following layers at the instance and device level (for more information about these layers' functionality, refer to the next section):
   - VK_LAYER_GOOGLE_unique_objects
   - VK_LAYER_LUNARG_api_dump
   - VK_LAYER_LUNARG_core_validation
   - VK_LAYER_LUNARG_image
   - VK_LAYER_LUNARG_object_tracker
   - VK_LAYER_LUNARG_parameter_validation
   - VK_LAYER_LUNARG_swapchain
   - VK_LAYER_GOOGLE_threading
3. The Vulkan debugging APIs are not core commands that can be statically loaded by the loader. They are exposed as extension APIs that must be retrieved at runtime and dynamically linked to predefined function pointers. So, as the next step, the debug extension APIs vkCreateDebugReportCallbackEXT and vkDestroyDebugReportCallbackEXT are queried and linked dynamically. These are used for the creation and destruction of the debug report.
4. Once the function pointers for the debug report are retrieved successfully, the former API (vkCreateDebugReportCallbackEXT) creates the debug report object. Vulkan returns the debug reports in a user-defined callback, which has to be linked to this API.
5. Destroy the debug report object when debugging is no longer required.

Understanding LunarG validation layers and their features

The LunarG Vulkan SDK supports the following layers for debugging and validation purposes. The following points describe the functionality each layer offers:

- VK_LAYER_GOOGLE_unique_objects: Non-dispatchable handles are not required to be unique; a driver may return the same handle for multiple objects that it considers equivalent. This behavior makes object tracking difficult, because it is not clear which object to reference at deletion time. This layer wraps the Vulkan objects in a unique identifier at creation time and unwraps them when the application uses them, ensuring proper object lifetime tracking during validation. As per LunarG's recommendation, this layer must be last in the validation layer chain, making it closest to the display driver.
- VK_LAYER_LUNARG_api_dump: This layer is helpful for learning the parameter values passed to the Vulkan APIs. It prints all the data structure parameters along with their values.
- VK_LAYER_LUNARG_core_validation: This is used for validating and printing important pieces of information from the descriptor set, pipeline state, dynamic state, and so on. This layer tracks and validates GPU memory, object binding, and command buffers. It also validates the graphics and compute pipelines.
- VK_LAYER_LUNARG_image: This layer can be used for validating texture formats, render target formats, and so on. For example, it verifies whether the requested format is supported on the device. It validates whether the image view creation parameters are reasonable for the image that the view is being created for.
- VK_LAYER_LUNARG_object_tracker: This keeps track of object creation, use, and destruction, which is helpful in avoiding memory leaks. It also validates that a referenced object was properly created and is presently valid.
- VK_LAYER_LUNARG_parameter_validation: This validation layer ensures that all the parameters passed to the API are correct as per the specification. It checks whether a parameter's value is consistent and within the valid usage criteria defined in the Vulkan specification. It also checks that the type field of a Vulkan control structure contains the value expected for a structure of that type.
- VK_LAYER_LUNARG_swapchain: This layer validates the use of the WSI swapchain extensions. For example, it checks whether the WSI extension is available before its functions are used. It also validates that an image index is within the number of images in a swapchain.
- VK_LAYER_GOOGLE_threading: This is helpful in the context of thread safety. It checks the validity of multithreaded API usage, ensuring that objects used simultaneously by calls running in multiple threads are handled safely. It reports threading rule violations and enforces a mutex for such calls. It also allows an application to continue running without crashing, despite a reported threading problem.
- VK_LAYER_LUNARG_standard_validation: This enables all the standard layers in the correct order.
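Before enabling any of these layers, it is worth checking which ones the local Vulkan implementation actually exposes. The following is a minimal standalone sketch (not from the book's VulkanLED sources) that lists the available instance layers using the core vkEnumerateInstanceLayerProperties API:

```cpp
#include <vulkan/vulkan.h>
#include <iostream>
#include <vector>

int main() {
    // First call: query only the number of available instance layers.
    uint32_t layerCount = 0;
    vkEnumerateInstanceLayerProperties(&layerCount, nullptr);

    // Second call: fill a vector with the layer properties.
    std::vector<VkLayerProperties> layers(layerCount);
    vkEnumerateInstanceLayerProperties(&layerCount, layers.data());

    // Print each layer's name and description; any layer printed here
    // can safely be requested at instance creation time.
    for (const VkLayerProperties &layer : layers) {
        std::cout << layer.layerName << " : " << layer.description << "\n";
    }
    return 0;
}
```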
For more information on validation layers, visit LunarG's official website. Check out https://vulkan.lunarg.com/doc/sdk, and specifically refer to the Validation layer details section, for more details.

Implementing debugging in Vulkan

Since debugging is exposed through validation layers, most of the core implementation of debugging will be done in the VulkanLayerAndExtension class (VulkanLED.h/.cpp). In this section, we will walk through the implementation that enables the debugging process in Vulkan.

The Vulkan debug facility is not part of the default core functionality. Therefore, in order to enable debugging and access the report callback, we need to add the necessary extensions and layers:

Extension: Add the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level. This exposes the Vulkan debug APIs to the application:

```cpp
vector<const char *> instanceExtensionNames = {
    . . . . // other extensions
    VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
};
```

Layer: Define the following layers at the instance level to allow debugging at these layers:

```cpp
vector<const char *> layerNames = {
    "VK_LAYER_GOOGLE_threading",
    "VK_LAYER_LUNARG_parameter_validation",
    "VK_LAYER_LUNARG_device_limits",
    "VK_LAYER_LUNARG_object_tracker",
    "VK_LAYER_LUNARG_image",
    "VK_LAYER_LUNARG_core_validation",
    "VK_LAYER_LUNARG_swapchain",
    "VK_LAYER_GOOGLE_unique_objects"
};
```

In addition to the individually enabled validation layers, the LunarG SDK provides a special built-in metadata layer called VK_LAYER_LUNARG_standard_validation, which loads a standard set of validation layers in the optimal order listed here. It is a good choice if you do not have very specific layer requirements:

a) VK_LAYER_GOOGLE_threading
b) VK_LAYER_LUNARG_parameter_validation
c) VK_LAYER_LUNARG_object_tracker
d) VK_LAYER_LUNARG_image
e) VK_LAYER_LUNARG_core_validation
f) VK_LAYER_LUNARG_swapchain
g) VK_LAYER_GOOGLE_unique_objects

These layers are then supplied to the vkCreateInstance() API to enable them:

```cpp
VulkanApplication* appObj = VulkanApplication::GetInstance();
appObj->createVulkanInstance(layerNames, instanceExtensionNames, title);

// VulkanInstance::createInstance()
VkResult VulkanInstance::createInstance(vector<const char *>& layers,
        std::vector<const char *>& extensionNames,
        char const*const appName)
{
    . . .
    VkInstanceCreateInfo instInfo = {};

    // Specify the list of layer names to be enabled.
    instInfo.enabledLayerCount = layers.size();
    instInfo.ppEnabledLayerNames = layers.data();

    // Specify the list of extensions to be used in the application.
    instInfo.enabledExtensionCount = extensionNames.size();
    instInfo.ppEnabledExtensionNames = extensionNames.data();
    . . .
    vkCreateInstance(&instInfo, NULL, &instance);
}
```

The available validation layers are very specific to the vendor and SDK version. Therefore, it is advisable to first check whether the layers are supported by the underlying implementation before passing them to the vkCreateInstance() API. This way, the application remains portable when run against another driver implementation. The areLayersSupported() function is a user-defined utility that inspects the requested layer names against the system-supported layers.
The unsupported layers are reported to the application and removed from the layer names before they are fed into the system:

```cpp
// VulkanLED.cpp
VkBool32 VulkanLayerAndExtension::areLayersSupported(vector<const char *> &layerNames)
{
    uint32_t checkCount = layerNames.size();
    uint32_t layerCount = layerPropertyList.size();
    std::vector<const char*> unsupportedLayerNames;

    for (uint32_t i = 0; i < checkCount; i++) {
        VkBool32 isSupported = 0;
        for (uint32_t j = 0; j < layerCount; j++) {
            if (!strcmp(layerNames[i],
                    layerPropertyList[j].properties.layerName)) {
                isSupported = 1;
            }
        }
        if (!isSupported) {
            std::cout << "No layer support found, removed from layer: "
                      << layerNames[i] << std::endl;
            unsupportedLayerNames.push_back(layerNames[i]);
        }
        else {
            std::cout << "Layer supported: " << layerNames[i] << std::endl;
        }
    }

    // Strip the unsupported layers from the requested list
    for (auto i : unsupportedLayerNames) {
        auto it = std::find(layerNames.begin(), layerNames.end(), i);
        if (it != layerNames.end()) layerNames.erase(it);
    }
    return true;
}
```

The debug report is created using the vkCreateDebugReportCallbackEXT API. This API is not one of Vulkan's core commands; therefore, the loader cannot link it statically. If you try to access it in the following manner, you will get an undefined symbol reference error:

```cpp
vkCreateDebugReportCallbackEXT(instance, NULL, NULL, NULL);
```

All the debug-related APIs need to be queried using the vkGetInstanceProcAddr() API and linked dynamically. The retrieved API reference is stored in a corresponding function pointer of type PFN_vkCreateDebugReportCallbackEXT. The VulkanLayerAndExtension::createDebugReportCallback() function retrieves the create and destroy debug APIs, as shown in the following implementation:

```cpp
/********* VulkanLED.h *********/

// Declaration of the create and destroy function pointers
PFN_vkCreateDebugReportCallbackEXT  dbgCreateDebugReportCallback;
PFN_vkDestroyDebugReportCallbackEXT dbgDestroyDebugReportCallback;

/********* VulkanLED.cpp *********/

VulkanLayerAndExtension::createDebugReportCallback(){
    . . .
    // Get the vkCreateDebugReportCallbackEXT API
    dbgCreateDebugReportCallback = (PFN_vkCreateDebugReportCallbackEXT)
        vkGetInstanceProcAddr(*instance, "vkCreateDebugReportCallbackEXT");
    if (!dbgCreateDebugReportCallback) {
        std::cout << "Error: GetInstanceProcAddr unable to locate "
                     "vkCreateDebugReportCallbackEXT function.\n";
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    // Get the vkDestroyDebugReportCallbackEXT API
    dbgDestroyDebugReportCallback = (PFN_vkDestroyDebugReportCallbackEXT)
        vkGetInstanceProcAddr(*instance, "vkDestroyDebugReportCallbackEXT");
    if (!dbgDestroyDebugReportCallback) {
        std::cout << "Error: GetInstanceProcAddr unable to locate "
                     "vkDestroyDebugReportCallbackEXT function.\n";
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    . . .
}
```

The vkGetInstanceProcAddr() API obtains instance-level extension APIs dynamically; these extensions are not exposed statically on a platform and need to be linked through this API. The following is its signature:

```cpp
PFN_vkVoidFunction vkGetInstanceProcAddr(
    VkInstance  instance,
    const char* name);
```

The API fields are as follows:

- instance: A VkInstance variable. If this variable is NULL, then name must be one of vkEnumerateInstanceExtensionProperties, vkEnumerateInstanceLayerProperties, or vkCreateInstance.
- name: The name of the API that needs to be queried for dynamic linking.

Using the dbgCreateDebugReportCallback() function pointer, create the debug report object and store the handle in debugReportCallback.
The second parameter of the API accepts a VkDebugReportCallbackCreateInfoEXT control structure. This data structure defines the behavior of the debugging, such as what the debug information should include: errors, general warnings, information, performance-related warnings, debug information, and so on. In addition, it takes the reference of a user-defined function (debugFunction), which filters and prints the debugging information once it is retrieved from the system. Here is the syntax of the control structure used for creating the debug report:

```cpp
struct VkDebugReportCallbackCreateInfoEXT {
    VkStructureType                 sType;
    const void*                     pNext;
    VkDebugReportFlagsEXT           flags;
    PFN_vkDebugReportCallbackEXT    pfnCallback;
    void*                           pUserData;
};
```

The purpose of these fields is as follows:

- sType: The type information of this control structure. It must be specified as VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT.
- flags: Defines the kind of debugging information to be retrieved when debugging is on; the flag values are listed next.
- pfnCallback: Refers to the function that filters and displays the debug messages.

The flags field can hold a bitwise combination of the following VkDebugReportFlagBitsEXT values:

- VK_DEBUG_REPORT_INFORMATION_BIT_EXT: Informational messages
- VK_DEBUG_REPORT_WARNING_BIT_EXT: Warnings about potentially incorrect usage
- VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT: Hints about non-optimal use of Vulkan
- VK_DEBUG_REPORT_ERROR_BIT_EXT: Errors that may cause undefined results
- VK_DEBUG_REPORT_DEBUG_BIT_EXT: Diagnostic information from the loader and layers

The createDebugReportCallback function implements the creation of the debug report. First, it fills the VkDebugReportCallbackCreateInfoEXT control structure with the relevant information. This primarily includes two things: first, assigning the user-defined function (pfnCallback) that will print the debug information received from the system (see the next point), and second, assigning the debugging flags (flags) the programmer is interested in:

```cpp
/********* VulkanLED.h *********/

// Handle of the debug report callback
VkDebugReportCallbackEXT debugReportCallback;

// Debug report callback create information control structure
VkDebugReportCallbackCreateInfoEXT dbgReportCreateInfo = {};

/********* VulkanLED.cpp *********/

VulkanLayerAndExtension::createDebugReportCallback(){
    . . .
    // Define the debug report control structure and provide
    // the reference of 'debugFunction'; this function prints
    // the debug information on the console.
    dbgReportCreateInfo.sType =
        VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
    dbgReportCreateInfo.pfnCallback = debugFunction;
    dbgReportCreateInfo.pUserData   = NULL;
    dbgReportCreateInfo.pNext       = NULL;
    dbgReportCreateInfo.flags       = VK_DEBUG_REPORT_WARNING_BIT_EXT |
                VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT |
                VK_DEBUG_REPORT_ERROR_BIT_EXT |
                VK_DEBUG_REPORT_DEBUG_BIT_EXT;

    // Create the debug report callback and store the handle
    // in 'debugReportCallback'
    result = dbgCreateDebugReportCallback(*instance,
                 &dbgReportCreateInfo, NULL, &debugReportCallback);
    if (result == VK_SUCCESS) {
        cout << "Debug report callback object created successfully\n";
    }
    return result;
}
```

Next, define the debugFunction() function, which prints the retrieved debug information in a user-friendly way.
It describes the type of the debug information along with the reported message:

```cpp
VKAPI_ATTR VkBool32 VKAPI_CALL VulkanLayerAndExtension::debugFunction(
    VkFlags msgFlags, VkDebugReportObjectTypeEXT objType,
    uint64_t srcObject, size_t location, int32_t msgCode,
    const char *pLayerPrefix, const char *pMsg, void *pUserData){

    if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] ERROR: [" << pLayerPrefix
                  << "] Code " << msgCode << ": " << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] WARNING: [" << pLayerPrefix
                  << "] Code " << msgCode << ": " << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] INFORMATION: [" << pLayerPrefix
                  << "] Code " << msgCode << ": " << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] PERFORMANCE: [" << pLayerPrefix
                  << "] Code " << msgCode << ": " << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] DEBUG: [" << pLayerPrefix
                  << "] Code " << msgCode << ": " << pMsg << std::endl;
    }

    // Return VK_FALSE to let the triggering call proceed.
    return VK_FALSE;
}
```

The fields of the debugFunction() callback are as follows:

- msgFlags: Specifies the type of debugging event that triggered the call, for example, an error, warning, or performance warning.
- objType: The type of the object that is being manipulated by the triggering call.
- srcObject: The handle of the object that is being created or manipulated by the triggering call.
- location: Refers to the place in the code describing the event.
- msgCode: Refers to the message code.
- pLayerPrefix: The layer responsible for triggering the debug event.
- pMsg: Contains the debug message text.
- pUserData: Any application-specific user data is passed to the callback through this field.

The debugFunction callback has a Boolean return value. Returning VK_TRUE tells the validation layer to abort the triggering call, while VK_FALSE lets the call continue down the chain to the subsequent validation layers even after an error has occurred (note that the Vulkan specification recommends returning VK_FALSE in most cases, as the abort behavior is layer-dependent). Stopping at the very first error can be advisable: an error indicates that something unexpected has occurred, and letting the system run under those circumstances may lead to undefined results or to further, sometimes meaningless, errors. Aborting gives the developer a better chance to concentrate on and fix the reported error, whereas letting the system throw a pile of follow-up errors can leave the developer in a confused state.

In order to enable debugging at the time of vkCreateInstance, provide dbgReportCreateInfo to the pNext field of VkInstanceCreateInfo:

```cpp
VkInstanceCreateInfo instInfo = {};
. . .
instInfo.pNext = &layerExtension.dbgReportCreateInfo;
vkCreateInstance(&instInfo, NULL, &instance);
```

Finally, once the debug report is no longer in use, destroy the debug callback object:

```cpp
void VulkanLayerAndExtension::destroyDebugReportCallback(){
    VulkanApplication* appObj = VulkanApplication::GetInstance();
    dbgDestroyDebugReportCallback(instance, debugReportCallback, NULL);
}
```

The following is the output from the implemented debug report. Your output may differ from this based on the GPU vendor and SDK provider.
Also, the explanation of the reported errors or warnings is very specific to the SDK itself. But at a higher level, the specification holds; this means you can expect to see a debug report with warnings, information, debugging help, and so on, based on the debugging flags you have turned on.

Summary

This article was short, precise, and full of practical implementation. Working with Vulkan without debugging capabilities is like shooting in the dark. We know very well that Vulkan demands an appreciable amount of programming, and developers make mistakes for obvious reasons; they are humans after all. We learn from our mistakes, and debugging allows us to find and correct these errors. It also provides insightful information that helps us build quality products.

Let's do a quick recap. We learned the Vulkan debugging process. We looked at the various LunarG validation layers and understood the roles and responsibilities offered by each one of them. Next, we added a few selected validation layers that we were interested in debugging. We also added the debug extension that exposes the debugging capabilities; without this, the debug APIs could not be dynamically linked into the application. Then, we implemented the Vulkan create debug report callback and linked it to our debug reporting callback; this callback presents the captured debug report in a user-friendly fashion. Finally, we implemented the API to destroy the debug report callback object.

How to create a Breakout game with Godot Engine – Part 1

George Marques
22 Nov 2016
8 min read
The Godot Engine is a piece of open source software designed to help you make any kind of game. It possesses a great number of features to ease the workflow of game development. This two-part article will cover some basic features of the engine, such as physics and scripting, by showing you how to make a game with the mechanics of the classic Breakout.

To install Godot, you can download it from the website and extract it in your place of preference. You can also install it through Steam. While the latter is a larger download, it comes with all the demos and export templates already installed.

Setting up the project

When you first open Godot, you are presented with the Project Manager window. Here you can create new projects or import existing ones. It's possible to run a project directly from here without opening the editor itself. The Templates tab shows the available projects in the online Asset Library, where you can find and download community-made content.

Note that there might also be a console window, which shows messages about what's happening in the engine, such as warnings and error messages. This window must remain open, but it's also helpful for debugging. It will not show up in the final exported version of the game.

Project Manager

To start creating our game, let's click on the New Project button. Using the dialog interface, navigate to where you want to place the project, and then create a new folder just for it. Select it and choose a name for the project ("Breakout" may be a good choice). Once you do that, the new project will be shown at the top of the list. You can double-click it to open it in the editor.

Creating the paddle

You will first see the main screen of the Godot editor. I rearranged the docks to suit my preferences, but you can keep the default layout or change it to something you like. If you click on the Settings button at the top-right corner, you can save and load layouts.

Godot uses a system in which every object is a Node. A Scene is just a tree of nodes, and it can also be used as a "prefab", as other engines call it. Every scene can be instanced as a part of another scene. This helps in dividing the project and reusing the work. This is all explained in the documentation, which you can consult if you have any doubts.

Main screen

Now we are going to create a scene for the paddle, which can later be instanced in the game scene. I like to start with an object that can be controlled by the player, so that we can start to feel what the interactivity looks like.

On the Scene dock, click on the "+" (plus) button to add a new Node (pressing Ctrl + A will also work). You'll be presented with a large collection of Nodes to choose from, each with its own behavior. For the paddle, choose a StaticBody2D. The search field can help you find it easily. This will be the root of the scene. Remember that a scene is a tree of nodes, so it needs a "root" to be its main anchor point.

You may wonder why we chose a static body for the paddle, since it will move. The reason for using this kind of node is that we don't want it to be moved by physical interaction. When the ball hits the paddle, we want the paddle to stay in place. We will move it only through scripting.

Select the node you just added in the Scene dock and rename it to Paddle. Save the scene as paddle.tscn. Saving often is good to avoid losing your work. Add a new node of the type Sprite (not Sprite3D, since this is a 2D game). This is now a child of the root node in the tree. The sprite will serve as the image of the paddle.
You can use the following image for it:

Paddle

Save the image in the project folder and use the FileSystem dock in the Godot editor to drag and drop it onto the Texture property in the Inspector dock. Any property that accepts a file can be set with drag and drop.

Now the static body needs a collision area so that it can tell where other physics objects (like the ball) will collide. To do this, select the Paddle root node in the Scene dock and add a child node of type CollisionShape2D to it. The warning icon appears because we haven't set a shape for it yet, so let's do that now. In the Inspector dock, set the Shape property to a New RectangleShape2D. You can set the shape extents visually using the editor, or you can click on the ">" button just in front of the Shape property in the Inspector to edit its properties, and set the extents to (100, 15) if you're using the provided image. This is the half-extent, so the rectangle will be doubled in each dimension based on what you set here.

User input

While our paddle is mostly done, it still isn't controllable. Before we delve into the scripting world, let's set up the Input Map. This is a Godot feature that allows us to abstract user input into named actions. You can later modify the keys that move the player without changing the code. It also allows you to use multiple keys and joystick buttons for a single action, making the game work seamlessly on keyboard and gamepad.

Click on the Project Settings option under the Scene menu at the top left of the window. There you can see the Input Map tab. There are some predefined actions, which are needed by UI controls. Add the actions move_left and move_right, which we will use to move the paddle. Then map the left and right arrow keys of the keyboard to them. You can also add a mapping to the D-pad left and right buttons if you want to use a joystick. Close this window when done.

Now we're ready to do some coding. Right-click on the Paddle root node in the Scene dock and select the Add Script option from the context menu. The "Create Node Script" dialog will appear; you can use the default settings, which will create a new script file with the same name as the scene (with a different extension). Godot has its own scripting language called "GDScript", which has a syntax a bit like Python and is quite easy to learn if you are familiar with programming.
You can use the following code on the Paddle script:

```gdscript
extends StaticBody2D

# Paddle speed in pixels per second.
# The "export" keyword allows you to edit this value from the Inspector.
export var speed = 150.0

# Holds the limits of the screen
var left_limit = 0
var right_limit = 0

# This function is called when this node enters the game tree.
# It is useful for initialization code.
func _ready():
    # Enable the processing function for this node
    set_process(true)

    # Set the limits of the paddle based on the screen size
    left_limit = get_viewport_rect().pos.x + (get_node("Sprite").get_texture().get_width() / 2)
    right_limit = get_viewport_rect().pos.x + get_viewport_rect().size.x - (get_node("Sprite").get_texture().get_width() / 2)

# The processing function
func _process(delta):
    var direction = 0
    if Input.is_action_pressed("move_left"):
        # If the player is pressing the left arrow, move to the left,
        # which means going in the negative direction of the X axis
        direction = -1
    elif Input.is_action_pressed("move_right"):
        # Same as above, but this time we go in the positive direction
        direction = 1

    # Create a movement vector
    var movement = Vector2(direction * speed * delta, 0)

    # Move the paddle using vector arithmetic
    set_pos(get_pos() + movement)

    # Clamp the paddle position so it does not go off the screen
    if get_pos().x < left_limit:
        set_pos(Vector2(left_limit, get_pos().y))
    elif get_pos().x > right_limit:
        set_pos(Vector2(right_limit, get_pos().y))
```

If you play the scene (using the top center bar or pressing F6), you can see the paddle and move it with the keyboard. You may find it too slow, but this will be covered in part two of this article, when we set up the game scene.

Up next

You now have a project set up and a paddle on the screen that can be controlled by the player. You also have some understanding of how the Godot Engine operates with its nodes, scenes, and scripts. In part two, you will learn how to add the ball and the destroyable bricks.

About the Author:

George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping to solve bugs and add new features to the software, while also providing solutions to the questions the community has.

Installing vCenter Site Recovery Manager 6.1

Packt
22 Nov 2016
3 min read
In this article by Abhilash G B, the author of Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager - Second Edition, we will learn about Site Recovery Manager and its architecture.

What is Site Recovery Manager?

vCenter Site Recovery Manager (SRM) is orchestration software that is used to automate disaster recovery testing and failover. It can be configured to leverage either vSphere Replication or a supported array-based replication. With SRM, you can create protection groups and run recovery plans against them. These recovery plans can then be used to test the disaster recovery (DR) setup, perform a planned failover, or be initiated during a DR event.

SRM is not a product that performs an automatic failover: there is no intelligence built into SRM that would detect a disaster or outage and fail over the virtual machines (VMs). The DR process must be initiated manually. Hence, it is not a high-availability solution either, but purely a tool that orchestrates a recovery plan.

The SRM architecture

vCenter SRM is not a tool that works on its own. It needs to talk to other components in your vSphere environment. The following components are involved in an SRM-protected environment: SRM requires both the protected and the recovery sites to be managed by separate instances of vCenter Server. It also requires an SRM instance at both sites. SRM now uses the PSC (Platform Services Controller) as an intermediary to fetch vCenter information. Several topologies are possible.

SRM as a solution cannot work on its own, because it is only an orchestration tool and does not include a replication engine. However, it can leverage either a supported array-based replication or VMware's proprietary replication engine, vSphere Replication.

Array manager

Each SRM instance needs to be configured with an array manager for it to communicate with the storage array. The array manager detects the storage array using the information you supply to connect to the array. Before you can even add an array manager, you need to install an array-specific Storage Replication Adapter (SRA). This is because the array manager uses the installed SRA to collect the replication information from the array.

Storage Replication Adapter (SRA)

The SRA is a storage vendor component that makes SRM aware of the replication configuration at the array. SRM leverages the SRA's ability to gather information regarding the replicated volumes and the direction of the replication from the array. SRM also uses the SRA for the following functions:

- Test failover
- Recovery
- Reprotect

It is important to understand that SRM requires the SRA to be installed for all of its functions that leverage array-based replication. When all these components are put together, a site protected by SRM looks as depicted in the following figure:

SRM conceptually assumes that the protected and the recovery sites are geographically separated, but such separation is not mandatory. You can use SRM to protect a chassis of servers and have another chassis in the same data center as the recovery site.

Summary

In this article, we learned what VMware vCenter Site Recovery Manager is, and also about its architecture.

All About the Protocol

Packt
22 Nov 2016
19 min read
In this article, Jon Hoffman, the author of the book Swift 3 Protocol Oriented Programming - Second Edition, talks about the protocol. Coming from an object-oriented background, I am very familiar with protocols (or interfaces, as they are known in other object-oriented languages). However, prior to Apple introducing protocol-oriented programming, protocols, or interfaces, were rarely the focal point of my application designs, unless I was working with an Open Service Gateway Initiative (OSGi) based project. When I designed an application in an object-oriented way, I always began the design with the objects. The protocols or interfaces were then used where they were appropriate, mainly for polymorphism when a class hierarchy did not make sense. Now, all that has changed. With protocol-oriented programming, the protocol has been elevated to the focal point of our application design.

In this article you will learn the following:

- How to define property and method requirements within a protocol
- How to use protocol inheritance and composition
- How to use a protocol as a type
- What polymorphism is

When we design an application in an object-oriented way, we begin the design by focusing on the objects and how they interact. An object is a data structure that contains information about the attributes of the object in the form of properties, and the actions performed by or to the object in the form of methods. We cannot create an object without a blueprint that tells the application what attributes and actions to expect from the object. In most object-oriented languages, this blueprint comes in the form of a class. A class is a construct that allows us to encapsulate the properties and actions of an object into a single type.

Most object-oriented programming languages contain an interface type. An interface is a type that contains method and property signatures, but does not contain any implementation details. An interface can be considered a contract: any type that conforms to the interface must implement the required functionality defined within it. Interfaces in most object-oriented languages are primarily used as a way to achieve polymorphism. There are some frameworks, such as OSGi, that use interfaces extensively; however, in most object-oriented designs, the interface takes a back seat to the class and class hierarchy.

Designing an application in a protocol-oriented way is significantly different from designing it in an object-oriented way. As we stated earlier, object-oriented design begins with the objects and the interactions between them, while protocol-oriented design begins with the protocol. While protocol-oriented design is about so much more than just the protocol, we can think of the protocol as the backbone of protocol-oriented programming. After all, it would be pretty hard to have protocol-oriented programming without the protocol.

A protocol in Swift is similar to an interface in object-oriented languages: the protocol acts as a contract that defines the methods, properties, and other requirements needed by our types to perform their tasks. We say that the protocol acts as a contract because any type that adopts, or conforms to, the protocol promises to implement the requirements defined by the protocol. Any class, structure, or enumeration can conform to a protocol. A type cannot conform to a protocol unless it implements all of the required functionality defined within the protocol.
If a type adopts a protocol and does not implement all of the functionality the protocol defines, we will get a compile-time error and the project will not compile. Most modern object-oriented programming languages implement their standard library with a class hierarchy; however, the basis of Swift's standard library is the protocol (https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/SwiftStandardLibraryReference/index.html). Therefore, not only does Apple recommend that we use the protocol-oriented programming paradigm in our applications, but they also use it in the Swift standard library. With the protocol being the basis of the Swift standard library and also the backbone of the protocol-oriented programming paradigm, it is very important that we fully understand what the protocol is and how we can use it. In this article, we will go over the basic usage of the protocol, which includes the syntax for defining a protocol, how to define requirements in a protocol, and how to make our types conform to a given protocol.

Protocol syntax

In this section, we will look at how to define a protocol, define requirements within a protocol, and specify that a type conforms to a protocol.

Defining a protocol

The syntax we use to define a protocol is very similar to the syntax used to define a class, structure, or enumeration. The following example shows the syntax used to define a protocol:

```swift
protocol MyProtocol {
    //protocol definition here
}
```

To define a protocol, we use the protocol keyword followed by the name of the protocol. We then put the requirements that our protocol defines between curly brackets. Custom types can state that they conform to a particular protocol by placing the name of the protocol after the type's name, separated by a colon. The following example shows how we would state that the MyStruct structure conforms to the MyProtocol protocol:

```swift
struct MyStruct: MyProtocol {
    //structure implementation here
}
```

A type can also conform to multiple protocols. We list the multiple protocols that the type conforms to by separating them with commas. The following example shows how we would specify that the MyStruct structure type conforms to the MyProtocol, AnotherProtocol, and ThirdProtocol protocols:

```swift
struct MyStruct: MyProtocol, AnotherProtocol, ThirdProtocol {
    // Structure implementation here
}
```

Having a type conform to multiple protocols is a very important concept within protocol-oriented programming, as we will see later in the article. This concept is known as protocol composition.

Now let's see how we would add property requirements to our protocol.

Property requirements

A protocol can require that conforming types provide certain properties with specified names and types. The protocol does not say whether the property should be a stored or a computed property, because the implementation details are left up to the conforming types. When defining a property within a protocol, we must specify whether the property is read-only or read-write by using the get and set keywords. We also need to specify the property's type, since we cannot use type inference in a protocol. Let's look at how we would define properties within a protocol by creating a protocol named FullName, as shown in the next example:

```swift
protocol FullName {
    var firstName: String {get set}
    var lastName: String {get set}
}
```

In the FullName protocol, we define two properties named firstName and lastName. Both of these properties are defined as read-write properties.
Any type that conforms to the FullName protocol must implement these properties. If we wanted to define a property as read-only, we would define it using only the get keyword, as shown in the following code:

```swift
var readOnly: String {get}
```

If the property is going to be a type property, then we must say so in the protocol. A type property is defined using the static keyword, as shown in the following example:

```swift
static var typeProperty: String {get}
```

Now let's see how we would add method requirements to our protocol.

Method requirements

A protocol can require that conforming types provide specific methods. These methods are defined within the protocol exactly as we define them within a class or structure, but without the curly brackets and method body. We can define these methods as instance or type methods, using the static keyword for the latter. Adding default values to a method's parameters is not allowed when defining the method within a protocol.

Let's add a method named getFullName() to our FullName protocol:

```swift
protocol FullName {
    var firstName: String {get set}
    var lastName: String {get set}
    func getFullName() -> String
}
```

Our FullName protocol now requires one method named getFullName() and two read-write properties named firstName and lastName.

For value types, such as structures, if we intend for a method to modify the instance it belongs to, we must prefix the method definition with the mutating keyword. This keyword indicates that the method is allowed to modify the instance it belongs to. The following example shows how to use the mutating keyword with a method definition:

```swift
mutating func changeName()
```

If we mark a method requirement as mutating, we do not need to write the mutating keyword for that method when we adopt the protocol with a reference (class) type. The mutating keyword is only used with value (structure or enumeration) types.

Optional requirements

There are times when we want protocols to define optional requirements, that is, methods or properties that are not required to be implemented. To use optional requirements, we need to start by marking the protocol with the @objc attribute. It is important to note that only classes can adopt protocols that use the @objc attribute; structures and enumerations cannot adopt them. To mark a property or method as optional, we use the optional keyword. Let's look at how we would use the optional keyword to define optional properties and methods:

```swift
@objc protocol Phone {
    var phoneNumber: String {get set}
    @objc optional var emailAddress: String {get set}
    func dialNumber()
    @objc optional func getEmail()
}
```

In the Phone protocol we just created, we define a required property named phoneNumber and an optional property named emailAddress. We also define a required function named dialNumber() and an optional function named getEmail().
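To see how optional requirements are consumed, here is a small hedged sketch (the Landline class name and its values are illustrative, not from the book). A class adopting the @objc Phone protocol only has to implement the required members, and callers reach the optional members through optional chaining:

```swift
import Foundation

// Adopting an @objc protocol requires an Objective-C compatible class,
// so we inherit from NSObject.
class Landline: NSObject, Phone {
    var phoneNumber: String = "555-1212"

    func dialNumber() {
        print("Dialing \(phoneNumber)")
    }
    // Note: emailAddress and getEmail() are intentionally not implemented.
}

let device: Phone = Landline()
device.dialNumber()

// Optional requirements may not exist on the conforming type, so they
// are accessed with optional chaining; this call is a no-op here.
device.getEmail?()
print(device.emailAddress ?? "no e-mail address")
```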
Now let's explore how protocol inheritance works.

Protocol inheritance

Protocols can inherit requirements from one or more other protocols and then add additional requirements. The following code shows the syntax for protocol inheritance:

```swift
protocol ProtocolThree: ProtocolOne, ProtocolTwo {
    // Add requirements here
}
```

The syntax for protocol inheritance is very similar to class inheritance in Swift, except that we are able to inherit from more than one protocol. Let's see how protocol inheritance works. We will use the FullName protocol that we defined earlier in this section and create a new protocol named Person:

```swift
protocol Person: FullName {
    var age: Int {get set}
}
```

Now, when we create a type that conforms to the Person protocol, we must implement the requirements defined in the Person protocol, as well as those defined in the FullName protocol. As an example, we could define a Student structure that conforms to the Person protocol, as shown in the following code:

```swift
struct Student: Person {
    var firstName = ""
    var lastName = ""
    var age = 0

    func getFullName() -> String {
        return "\(firstName) \(lastName)"
    }
}
```

Note that in the Student structure we implemented the requirements defined in both the FullName and Person protocols; however, the only protocol specified when we defined the Student structure was the Person protocol. We only needed to list the Person protocol because it inherits all of the requirements from the FullName protocol.

Now let's look at a very important concept in the protocol-oriented programming paradigm: protocol composition.

Protocol composition

Protocol composition lets our types adopt multiple protocols. This is a major advantage that we get when we use protocols rather than a class hierarchy, because classes, in Swift and other single-inheritance languages, can only inherit from one superclass. The syntax for protocol composition is the same as the protocol inheritance we just saw. The following example shows protocol composition:

```swift
struct MyStruct: ProtocolOne, ProtocolTwo, ProtocolThree {
    // implementation here
}
```

Protocol composition allows us to break our requirements into many smaller components rather than inheriting all requirements from a single superclass or class hierarchy. This allows our type families to grow in width rather than height, which means we avoid creating bloated types that contain requirements they do not need. Protocol composition may seem like a very simple concept, but it is essential to protocol-oriented programming. Let's look at an example of protocol composition so we can see the advantage it gives us. Let's say that we have the class hierarchy shown in the following diagram:

In this class hierarchy, we have a base class named Athlete. The Athlete base class has two subclasses, named Amateur and Pro, which are used depending on whether the athlete is an amateur or a professional. An amateur athlete may be a collegiate athlete, so we would need to store information such as which school they go to and their GPA. A pro athlete is one who gets paid for playing the game; for pro athletes, we would need to store information such as what team they play for and their salary.

In this example, things get a little messy under the Amateur and Pro classes. As we can see, we have a separate football player class under both the Amateur and Pro classes (the AmFootballPlayer and ProFootballPlayer classes). We also have separate baseball classes under both the Amateur and Pro classes (the AmBaseballPlayer and ProBaseballPlayer classes). This forces us to duplicate a lot of code between these classes.

With protocol composition, instead of having a class hierarchy where our subclasses inherit all functionality from a single superclass, we have a collection of protocols that we can mix and match in our types. We then use one or more of these protocols as needed for our types.
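Here is a minimal hedged sketch of that composition approach (the protocol requirements shown are illustrative stand-ins, not the book's final design):

```swift
protocol Athlete {
    var name: String {get set}
}

protocol Amateur {
    var school: String {get set}
    var gpa: Double {get set}
}

protocol FootballPlayer {
    var position: String {get set}
}

// The amateur football player mixes in exactly the pieces it needs,
// instead of inheriting everything from one deep class hierarchy.
struct AmFootballPlayer: Athlete, Amateur, FootballPlayer {
    var name = ""
    var school = ""
    var gpa = 0.0
    var position = ""
}
```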
From a pure protocol point of view, this last example may not make a lot of sense right now, because protocols only define the requirements. One word of warning: if you find yourself creating numerous protocols that only contain one or two requirements, then you are probably making your protocols too granular. This will lead to a design that is hard to maintain and manage. Now let's look at how a protocol is a full-fledged type in Swift.

Using protocols as a type

Even though no functionality is implemented in a protocol, protocols are still considered full-fledged types in the Swift programming language and can mostly be used like any other type. This means that we can use protocols as parameter or return types for a function. We can also use them as the type for variables, constants, and collections. Let's take a look at some examples. For these next few examples, we will use the following PersonProtocol protocol:

protocol PersonProtocol {
    var firstName: String {get set}
    var lastName: String {get set}
    var birthDate: Date {get set}
    var profession: String {get}
    init(firstName: String, lastName: String, birthDate: Date)
}

In this PersonProtocol, we define four properties and one initializer. For this first example, we will show how to use a protocol as a parameter and return type for a function, method, or initializer. Within the function itself, we also use the PersonProtocol as the type for a variable (note that the variable must be initialized before it is returned; starting from a copy of the incoming person is one way to do this):

func updatePerson(person: PersonProtocol) -> PersonProtocol {
    var newPerson = person
    // Code to update person goes here
    return newPerson
}

We can also use protocols as the type to store in a collection, as shown in the next example:

var personArray = [PersonProtocol]()
var personDict = [String: PersonProtocol]()

The one thing we cannot do with protocols is create an instance of one. This is because no functionality is implemented within a protocol. As an example, if we tried to create an instance of the PersonProtocol protocol, as shown in the following example, we would receive the error protocol type 'PersonProtocol' cannot be instantiated:

var test = PersonProtocol(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)

We can use an instance of any type that conforms to our protocol anywhere that the protocol type is required. As an example, if we define a variable to be of the PersonProtocol protocol type, we can then populate that variable with an instance of any type that conforms to the PersonProtocol protocol. Let's assume that we have two types named SwiftProgrammer and FootballPlayer that conform to the PersonProtocol protocol. We can then use them as shown in this next example:

var myPerson: PersonProtocol
myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

In this example, the myPerson variable is defined to be of the PersonProtocol protocol type. We can then set this variable to instances of either the SwiftProgrammer or FootballPlayer type. One thing to note is that Swift does not care if the instance is a class, structure, or enumeration. It only matters that the type conforms to the PersonProtocol protocol type.
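The SwiftProgrammer and FootballPlayer types are assumed to exist in these examples; a minimal conforming implementation, written here only as a sketch, could look like this:

import Foundation // provides the Date type

struct SwiftProgrammer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: Date
    // Satisfies the read-only profession requirement
    var profession: String { return "Swift Programmer" }

    init(firstName: String, lastName: String, birthDate: Date) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}

struct FootballPlayer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: Date
    var profession: String { return "Football Player" }

    init(firstName: String, lastName: String, birthDate: Date) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}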
As we saw earlier, we can use our PersonProtocol protocol as the type for an array, which means that we can populate the array with instances of any type that conforms to the PersonProtocol protocol. The following is an example of this (note that the bDateProgrammer and bDatePlayer variables are instances of the Date type that would represent the birthdate of each individual):

var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

var people: [PersonProtocol] = []
people.append(programmer)
people.append(player)

What we are seeing in these last couple of examples is a form of polymorphism. To use protocols to their fullest potential, we need to understand what polymorphism is.

Polymorphism with protocols

The word polymorphism comes from the Greek roots poly (meaning many) and morphe (meaning form). In programming languages, polymorphism is a single interface to multiple types (many forms). There are two reasons to learn the meaning of the word polymorphism. The first reason is that using such a fancy word can make you sound very intelligent in casual conversation. The second reason is that polymorphism provides one of the most useful programming techniques not only in object-oriented programming, but also in protocol-oriented programming. Polymorphism lets us interact with multiple types through a single uniform interface. In the object-oriented programming world, the single uniform interface usually comes from a superclass, while in the protocol-oriented programming world, that single interface usually comes from a protocol. In the last section, we saw two examples of polymorphism with Swift. The first example was the following code:

var myPerson: PersonProtocol
myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

In this example, we had a single variable of the PersonProtocol type. Polymorphism allowed us to set the variable to instances of any type that conforms to the PersonProtocol protocol, such as the SwiftProgrammer or FootballPlayer types. The other example of polymorphism was in the following code:

var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer)
var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer)

var people: [PersonProtocol] = []
people.append(programmer)
people.append(player)

In this example, we created an array of PersonProtocol types. Polymorphism allowed us to add instances of any types that conform to PersonProtocol to this array. When we access an instance of a type through a single uniform interface, as we just showed, we are unable to access type-specific functionality. As an example, if we had a property in the FootballPlayer type that records the age of the player, we would be unable to access that property because it is not defined in the PersonProtocol protocol. If we do need to access type-specific functionality, we can use type casting.

Type casting with protocols

Type casting is a way to check the type of an instance and/or to treat the instance as a specified type. In Swift, we use the is keyword to check whether an instance is of a specific type and the as keyword to treat an instance as a specific type.
The following example shows how we would use the is keyword:

if person is SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

In this example, the conditional statement returns true if the person instance is of the SwiftProgrammer type or false if it isn't. We can also use the switch statement (as shown in the next example) if we want to check for multiple types:

for person in people {
    switch person {
    case is SwiftProgrammer:
        print("\(person.firstName) is a Swift Programmer")
    case is FootballPlayer:
        print("\(person.firstName) is a Football Player")
    default:
        print("\(person.firstName) is an unknown type")
    }
}

We can use the where statement in combination with the is keyword to filter an array so that it only returns instances of a specific type. In the next example, we filter an array that contains instances of the PersonProtocol to return only those elements of the array that are instances of the SwiftProgrammer type:

for person in people where person is SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

Now let's look at how we would cast an instance to a specific type. To do this, we can use the as keyword. Since the cast can fail if the instance is not of the specified type, the as keyword comes in two forms: as? and as!. With the as? form, if the casting fails, it returns nil; with the as! form, if the casting fails, we get a runtime error. Therefore, it is recommended to use the as? form unless we are absolutely sure of the instance type or we perform a check of the instance type prior to doing the cast. The following example shows how we would use the as? keyword to attempt to cast an instance of a variable to the SwiftProgrammer type:

if let p = person as? SwiftProgrammer {
    print("\(p.firstName) is a Swift Programmer")
}

Since the as? keyword returns an optional, in the last example we could use optional binding to perform the cast. If we are sure of the instance type, we can use the as! keyword, as shown in the next example:

for person in people where person is SwiftProgrammer {
    let p = person as! SwiftProgrammer
}

Summary

While protocol-oriented programming is about so much more than just the protocol, it would be impossible to have the protocol-oriented programming paradigm without the protocol. We can think of the protocol as the backbone of protocol-oriented programming. Therefore, it is important to fully understand the protocol in order to properly implement protocol-oriented programming.

Resources for Article: Further resources on this subject: Using Protocols and Protocol Extensions [Article] What we can learn from attacks on the WEP Protocol [Article] Hosting the service in IIS using the TCP protocol [Article]
Intro to Canvas Animations

Dylan Frankland
21 Nov 2016
11 min read
I think that most developers who have not had much experience with Canvas will generally believe that it is unnecessary, non-performant, or difficult to work with. Canvas's flexibility lends itself to experimentation and to comparison against other solutions in the browser; you can find articles all over the web comparing animations, filter effects, GPU rendering, you name it, with Canvas. The way that I look at it is that Canvas is best suited to creating custom shapes and effects that hook into events or actions in the DOM. Anything else should be left up to CSS, considering it can handle most animations, transitions, and normal static shapes. Mastering Canvas opens a world of possibilities to create anything, such as video games, smartwatch interfaces, and even more complex custom 3D graphics. The first step is understanding Canvas in 2D and animating some elements before getting into more complex logic. A simple idea that might demonstrate the abilities of Canvas is to create a Cortana- or HAL-like interface that changes shape according to audio. First we will create the interface, a simple glowing circle, then we will practice animating it. Before getting started, I assume you will be using a newer version of Chrome. If you are using a non-WebKit browser, some of the APIs used may be slightly different. To get started, let us create a simple index.html page with the following markup:

<!doctype html>
<html>
<head>
    <title>Demo</title>
    <style>
        body {
            padding: 0;
            margin: 0;
            background-color: #000;
        }
    </style>
</head>
<body>
    <canvas></canvas>
    <script></script>
</body>
</html>

Obviously, this is nothing special yet; it just holds the simple HTML for us to begin coding our JavaScript. Next we will start to fill in the <script> block below the <canvas> element inside of the body. The first part of your script will have to get the reference to the <canvas> element, then set its size; for this experiment, let us just set the size to the entire window:

(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;
})();

Again, refreshing the page will not show much. This time let's render a small circle to begin with:

(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const context = canvas.getContext('2d'); // Set the context to create a 2D graphic

    context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
    context.beginPath(); // Start to create a shape
    context.arc(0, 0, 50, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of 50, and make it 360 degrees (aka 2PI)
    context.fillStyle = '#8ED6FF'; // Set the color of the circle, make it a cool AI blue
    context.fill(); // Actually fill the shape in and close it
})();

Alright! Now we have something that we can really start to work with. If you refresh that HTML page, you should now be staring at a blue dot in the center of your screen. I do not know about you, but blue dots are sort of boring, nothing that interesting. Let's switch it up and create a more movie-esque AI-looking circle.
(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
    const color = '#8ED6FF';

    context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
    context.beginPath(); // Start to create a shape
    context.arc(0, 0, 50, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of 50, and make it 360 degrees (aka 2PI)
    context.strokeStyle = color; // Set the stroke color
    context.shadowColor = color; // Set the "glow" color
    context.lineWidth = 10; // Set the circle/ring width
    context.shadowBlur = 60; // Set the amount of "glow"
    context.stroke(); // Draw the shape and close it
})();

That is looking way better now. Next, let's start to animate this without hooking it up to anything. The standard way to do this is to create a function that alters the canvas each time the DOM is able to render itself. We do that by creating a function which passes a reference to itself to window.requestAnimationFrame. requestAnimationFrame takes any function and executes it once the window is focused and has the processing power to render another frame. This is a little confusing, but it is the best way to create smooth animations. Check out the example below:

(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
    const color = '#8ED6FF';

    const drawCircle = () => {
        context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
        context.beginPath(); // Start to create a shape
        context.arc(0, 0, 50, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of 50, and make it 360 degrees (aka 2PI)
        context.strokeStyle = color; // Set the stroke color
        context.shadowColor = color; // Set the "glow" color
        context.lineWidth = 10; // Set the circle/ring width
        context.shadowBlur = 60; // Set the amount of "glow"
        context.stroke(); // Draw the shape and close it

        window.requestAnimationFrame(drawCircle); // Continue drawing circle
    };

    window.requestAnimationFrame(drawCircle); // Start animation
})();

This will not create any type of animation yet, but if you put a console.log inside the drawCircle function, you would see a bunch of logs in the console, probably close to 60 times per second. The next step is to add some state to this function and make it change size. We can do that by creating a size variable with an integer that we change up and down, plus another boolean variable called grow to keep track of the direction in which we will change the size.
(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
    const color = '#8ED6FF';
    let size = 50; // Default size is the minimum size
    let grow = true;

    const drawCircle = () => {
        context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
        context.beginPath(); // Start to create a shape
        context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of size, and make it 360 degrees (aka 2PI)
        context.strokeStyle = color; // Set the stroke color
        context.shadowColor = color; // Set the "glow" color
        context.lineWidth = 10; // Set the circle/ring width
        context.shadowBlur = size + 10; // Set the amount of "glow"
        context.stroke(); // Draw the shape and close it

        // Check if the size needs to grow or shrink
        if (size <= 50) { // Minimum size
            grow = true;
        } else if (size >= 75) { // Maximum size
            grow = false;
        }

        // Grow or shrink the size
        size = size + (grow ? 1 : -1);

        window.requestAnimationFrame(drawCircle); // Continue drawing circle
    };

    window.requestAnimationFrame(drawCircle); // Start animation
})();

Refreshing the page and seeing nothing happen might be a little disheartening, but do not worry, you are not the only one! Canvases require being cleared before being drawn upon again. Now we have created a way to change and update the canvas, but we need to introduce a way to clear it.

(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
    const color = '#8ED6FF';
    let size = 50; // Default size is the minimum size
    let grow = true;

    const drawCircle = () => {
        context.clearRect(0, 0, canvas.width, canvas.height); // Clear the contents of the canvas starting from [0, 0] all the way to [totalWidth, totalHeight]
        context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
        context.beginPath(); // Start to create a shape
        context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of size, and make it 360 degrees (aka 2PI)
        context.strokeStyle = color; // Set the stroke color
        context.shadowColor = color; // Set the "glow" color
        context.lineWidth = 10; // Set the circle/ring width
        context.shadowBlur = size + 10; // Set the amount of "glow"
        context.stroke(); // Draw the shape and close it

        // Check if the size needs to grow or shrink
        if (size <= 50) { // Minimum size
            grow = true;
        } else if (size >= 75) { // Maximum size
            grow = false;
        }

        // Grow or shrink the size
        size = size + (grow ? 1 : -1);

        window.requestAnimationFrame(drawCircle); // Continue drawing circle
    };

    window.requestAnimationFrame(drawCircle); // Start animation
})();

Now things may seem even more broken. Remember that context.translate function? It is setting the origin of all the commands to the middle of the canvas, and it accumulates on every frame. To prevent that from messing up our canvas when we try to clear it, we will need to save the state of the canvas and restore it beforehand.
(() => {
    const canvas = document.querySelector('canvas');
    canvas.width = window.innerWidth;
    canvas.height = window.innerHeight;

    const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
    const color = '#8ED6FF';
    let size = 50; // Default size is the minimum size
    let grow = true;

    const drawCircle = () => {
        context.restore(); // Restore previous canvas state, does nothing if it wasn't saved before
        context.save(); // Saves it until the next time `drawCircle` is called

        context.clearRect(0, 0, canvas.width, canvas.height); // Clear the contents of the canvas starting from [0, 0] all the way to [totalWidth, totalHeight]
        context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
        context.beginPath(); // Start to create a shape
        context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of size, and make it 360 degrees (aka 2PI)
        context.strokeStyle = color; // Set the stroke color
        context.shadowColor = color; // Set the "glow" color
        context.lineWidth = 10; // Set the circle/ring width
        context.shadowBlur = size + 10; // Set the amount of "glow"
        context.stroke(); // Draw the shape and close it

        // Check if the size needs to grow or shrink
        if (size <= 50) { // Minimum size
            grow = true;
        } else if (size >= 75) { // Maximum size
            grow = false;
        }

        // Grow or shrink the size
        size = size + (grow ? 1 : -1);

        window.requestAnimationFrame(drawCircle); // Continue drawing circle
    };

    window.requestAnimationFrame(drawCircle); // Start animation
})();

Oh snap! Now you have got some seriously cool canvas animation going on. The final code should look like this:

<!doctype html>
<html>
<head>
    <title>Demo</title>
    <style>
        body {
            padding: 0;
            margin: 0;
            background-color: #000;
        }
    </style>
</head>
<body>
    <canvas></canvas>
    <script>
        (() => {
            const canvas = document.querySelector('canvas');
            canvas.width = window.innerWidth;
            canvas.height = window.innerHeight;

            const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
            const color = '#8ED6FF';
            let size = 50; // Default size is the minimum size
            let grow = true;

            const drawCircle = () => {
                context.restore();
                context.save();

                context.clearRect(0, 0, canvas.width, canvas.height); // Clear the contents of the canvas starting from [0, 0] all the way to [totalWidth, totalHeight]
                context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
                context.beginPath(); // Start to create a shape
                context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle at point [0, 0] with a radius of size, and make it 360 degrees (aka 2PI)
                context.strokeStyle = color; // Set the stroke color
                context.shadowColor = color; // Set the "glow" color
                context.lineWidth = 10; // Set the circle/ring width
                context.shadowBlur = size + 10; // Set the amount of "glow"
                context.stroke(); // Draw the shape and close it

                // Check if the size needs to grow or shrink
                if (size <= 50) { // Minimum size
                    grow = true;
                } else if (size >= 75) { // Maximum size
                    grow = false;
                }

                // Grow or shrink the size
                size = size + (grow ? 1 : -1);

                window.requestAnimationFrame(drawCircle); // Continue drawing circle
            };

            window.requestAnimationFrame(drawCircle); // Start animation
        })();
    </script>
</body>
</html>

Hopefully, this gives you a good idea of how to get started and create some really cool animations. The next logical step for this example is to hook it up to audio so that it can react to voice, like your own personal AI.
Author Dylan Frankland is a frontend engineer at Narvar. He is an agile web developer with over 4 years of experience in developing and designing for start-ups and medium-sized businesses to create functional, fast, and beautiful web experiences.
Create Dynamic Tilemaps in Duality (C#) – Part II

Lőrinc Serfőző
21 Nov 2016
7 min read
Introduction

In the first part of this tutorial, we learned how static tilemaps are created in the Duality game engine. Let's now take it to a higher level. In this article, the process of implementing a custom component is described. The new component modifies the tilemap at runtime, controlled by mouse clicks. Although it is a simple example, the method enables more advanced uses, such as procedural generation of game levels or destructible terrain. The repository containing the finished project is available on GitHub.

Creating Custom Components

It has already been mentioned that all functionality in Duality is implemented via plugins. To create a custom component, you have to compile a new Core Plugin. Although it might seem convoluted at first, Duality does most of the work and setup, so let's dive in! First, open the Visual Studio Solution by clicking on the 'Open Sourcecode' icon button, which is the second one on the toolbar. Alternatively, just open the ProjectPlugins.sln file located in '{Duality Project Dir}'. Once the IDE is open, inspect the sole loaded project, called 'CorePlugin'. It has two source files (not counting AssemblyInfo.cs) and references to the Duality assembly. CorePlugin.cs contains a class inherited from CorePlugin. It is necessary to have this class in the solution to identify the assembly as a plugin, but it usually does not need to be modified, because game logic is implemented via custom components most of the time. Let's have a look at the other class, located in 'YourCustomComponentType.cs':

using System;
using System.Collections.Generic;
using System.Linq;
using Duality;

namespace Duality_
{
    public class YourCustomComponentType : Component
    {
    }
}

There are a few important things to notice here. The custom component must be a subclass of Component. It has to be declared public; otherwise, the new component wouldn't appear in the editor. Don't modify the code for now, but hit F7 to compile the assembly. Behind the scenes, the output assembly named 'GamePlugin.core.dll' (along with the debug symbol file and the XML documentation) is copied to '{Duality Project Dir}', and Dualitor loads them. The new component is now available to be added to the game, like any other component:

Adding the custom component

At this point, the new component could be added to GameObjects, but it would not do anything yet. The next sections are about how to implement the game logic in the component.

Laying out the structure of ChangeTilemapCmp

Adding a reference to the Tilemap Plugin in the VS project

To access tilemap-related classes and information in the component, the Visual Studio project must have a reference to the assembly containing the Tilemap Plugin. Bring up the context menu on the 'CorePlugin' project's 'References' item in the Solution Explorer, and select the 'Add Reference' item. A dialog should appear; browse 'Tilemaps.core.dll' from the '{Duality Project Dir}'.

Adding the Tilemap Plugin as a reference

Defining the internal structure of ChangeTilemapCmp

Rename the YourCustomComponentType class to ChangeTilemapCmp, and do the same with the containing .cs file to stay consistent.
First, describe the function signatures used in the component:

using Duality;
using Duality.Components;
using Duality.Editor;
using Duality.Input;
using Duality.Plugins.Tilemaps;

namespace Duality_
{
    [EditorHintCategory ("Tilemaps Tutorial")] // [1]
    public class ChangeTilemapCmp : Component, ICmpUpdatable
    {
        // [2]
        private Tilemap TilemapInScene { get; }
        private TilemapRenderer TilemapRendererInScene { get; }
        private Camera MainCamera { get; }

        void ICmpUpdatable.OnUpdate () // [3]
        {
        }

        private Vector2 GetWorldCoordOfMouse () // [4]
        {
        }

        private void ChangeTilemap (Vector2 worldPos) // [5]
        {
        }
    }
}

1. The EditorHintCategory attribute describes in which folder the component should appear when adding it.
2. Several get-only properties are used to access specific components in the scene. See their implementation below.
3. The OnUpdate function implements the ICmpUpdatable interface. It is called upon every update of the game loop.
4. GetWorldCoordOfMouse returns the current position of the mouse transformed into game world coordinates.
5. ChangeTilemap, as its name suggests, changes the tilemap at a specified world location. It should not do anything if the location is not on the tilemap.

After designing our little class, let's implement the details!

Implementing the game logic in ChangeTilemapCmp

Implementing the get-only properties

The Tilemap and TilemapRenderer components are obviously needed by our logic. Since there is only one instance of each in the scene, it is easy to find them by type:

private Tilemap TilemapInScene =>
    this.GameObj.ParentScene.FindComponent<Tilemap> ();
private TilemapRenderer TilemapRendererInScene =>
    this.GameObj.ParentScene.FindComponent<TilemapRenderer> ();
private Camera MainCamera =>
    this.GameObj.ParentScene.FindComponent<Camera> ();

Implementing the OnUpdate method

This method is called for every frame, usually 60 times a second. We check whether the left mouse button was pressed in that frame, and if so, act accordingly:

void ICmpUpdatable.OnUpdate ()
{
    if (DualityApp.Mouse.ButtonHit (MouseButton.Left))
        ChangeTilemap (GetWorldCoordOfMouse ());
}

Implementing the GetWorldCoordOfMouse method

Here a simple transformation is needed. The DualityApp.Mouse.Pos property returns the mouse position on the screen. After a null check, get the mouse position on the screen; then convert it to a world position using the Camera's GetSpaceCoord method.

private Vector2 GetWorldCoordOfMouse ()
{
    if (MainCamera == null)
        return Vector2.Zero;

    Vector2 mouseScreenPos = DualityApp.Mouse.Pos;
    return MainCamera.GetSpaceCoord (mouseScreenPos).Xy;
}

Implementing the ChangeTilemap method

The main logic of ChangeTilemapCmp is implemented in this method:

private void ChangeTilemap (Vector2 worldPos)
{
    // [1]
    Tilemap tilemap = TilemapInScene;
    TilemapRenderer tilemapRenderer = TilemapRendererInScene;
    if (tilemap == null || tilemapRenderer == null)
    {
        Log.Game.WriteError ("There are no tilemaps in the current scene!");
        return;
    }

    // [2]
    Vector2 localPos = worldPos - tilemapRenderer.GameObj.Transform.Pos.Xy;

    // [3]
    Point2 tilePos = tilemapRenderer.GetTileAtLocalPos (localPos, TilePickMode.Reject);
    if (tilePos.X < 0 || tilePos.Y < 0)
        return;

    // [4]
    Tile clickedTile = tilemap.Tiles[tilePos.X, tilePos.Y];
    int newTileIndex = clickedTile.BaseIndex == 0 ? 1 : 0;
    clickedTile.BaseIndex = newTileIndex;
    tilemap.SetTile (tilePos.X, tilePos.Y, clickedTile);
}

1. First, check whether there are any Tilemap and TilemapRenderer components in the Scene.
2. Transform the world position to the tilemap's local coordinate system by subtracting the tilemap's position from it.
3. Acquire the indexed location of the clicked tile with the GetTileAtLocalPos method of the renderer. It returns a Point2, an integer-based vector, which represents the row and column of the tile in the tilemap. Because TilePickMode.Reject is passed as the second argument, it returns {-1, -1} when the 'player' clicks next to the tilemap. We should check that too, and return if that is the case.
4. Get the Tile struct at the clicked position, and flip its tile index. Index 0 means the green tile, while index 1 refers to the red one. Afterwards, the tile has to be assigned back to the Tilemap, because in C#, structs are copied by value, not by reference.

After finishing this, do not forget to compile the source again.

Adding and testing ChangeTilemapCmp

Create a new GameObject with ChangeTilemapCmp on it in the Scene View:

Adding ChangeTilemapCmp

Then save the scene with the floppy icon of the Scene View, and start the game by clicking the 'Run Game' icon button located on the toolbar. Test the game by clicking on the tilemap; it should change the color of the tile you click.

Running the game

Summary

Thank you for following along with this guide! Hopefully it helped you with the Duality game engine and its Tilemap Plugin. Remember that the above was a very simple example, but more sophisticated logic is achievable using these tools. If you want some inspiration, have a look at the entries of the Duality Tilemaps Jam, which took place in September 2016.

Author

Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics, an interdisciplinary field between the more traditional mechanical engineering, electrical engineering, and informatics, and has quickly grown a passion toward software development. He is a supporter of open source and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for users.
Introducing Algorithm Design Paradigms

Packt
18 Nov 2016
10 min read
In this article by David Julian and Benjamin Baka, authors of the book Python Data Structures and Algorithms, we will discern three broad approaches to algorithm design. They are as follows:

- Divide and conquer
- Greedy algorithms
- Dynamic programming

As the name suggests, the divide and conquer paradigm involves breaking a problem into smaller subproblems and then in some way combining the results to obtain a global solution. This is a very common and natural problem-solving technique and is, arguably, the most used approach to algorithm design. Greedy algorithms often involve optimization and combinatorial problems; the classic example is applying a greedy approach to the traveling salesperson problem, where it always chooses the closest destination first. This shortest-path strategy involves finding the best solution to a local problem in the hope that this will lead to a global solution. The dynamic programming approach is useful when our subproblems overlap. This is different from divide and conquer: rather than breaking our problem into independent subproblems, with dynamic programming intermediate results are cached and can be used in subsequent operations. Like divide and conquer, it uses recursion. However, dynamic programming allows us to compare results at different stages. This can have a performance advantage over divide and conquer for some problems, because it is often quicker to retrieve a previously calculated result from memory rather than having to recalculate it.

Recursion and backtracking

Recursion is particularly useful for divide and conquer problems; however, it can be difficult to understand exactly what is happening, since each recursive call is itself spinning off other recursive calls. At the core of a recursive function are two types of cases: base cases, which tell the recursion when to terminate, and recursive cases, which call the function they are in. A simple problem that naturally lends itself to a recursive solution is calculating factorials. The recursive factorial algorithm defines two cases: the base case, when n is zero, and the recursive case, when n is greater than zero. A typical implementation is shown in the following code:

def factorial(n):
    # test for a base case
    if n == 0:
        return 1
    # make a calculation and a recursive call
    f = n * factorial(n - 1)
    print(f)
    return f

factorial(4)

This code prints out the digits 1, 2, 6, 24. To calculate 4!, we require four recursive calls plus the initial parent call. On each recursion, a copy of the method's variables is stored in memory. Once the method returns, it is removed from memory. One way to visualize this process is as a chain of calls, factorial(4) → factorial(3) → factorial(2) → factorial(1) → factorial(0), which then unwinds as each call returns its result to its caller. It may not necessarily be clear whether recursion or iteration is a better solution to a particular problem; after all, they both repeat a series of operations, and both are very well suited to divide and conquer approaches to algorithm design. An iteration churns away until the problem is done. Recursion breaks the problem down into smaller chunks and then combines the results. Iteration is often easier for programmers because the control stays local to a loop, whereas recursion can more closely represent mathematical concepts such as factorials. Recursive calls are stored in memory, whereas iterations are not. This creates a tradeoff between processor cycles and memory usage, so choosing which one to use may depend on whether the task is processor or memory intensive. The following outlines the key differences between recursion and iteration.
Recursion:
- Terminates when a base case is reached
- Each recursive call requires space in memory
- An infinite recursion results in a stack overflow error
- Some problems are naturally better suited to recursive solutions

Iteration:
- Terminates when a defined condition is met
- Each iteration is not stored in memory
- An infinite iteration will run while the hardware is powered
- Iterative solutions may not always be obvious

Backtracking

Backtracking is a form of recursion that is particularly useful for problems such as traversing tree structures, where we are presented with a number of options at each node, from which we must choose one. Subsequently, we are presented with a different set of options, and depending on the series of choices made, either a goal state or a dead end is reached. If it is the latter, we must backtrack to a previous node and traverse a different branch. Backtracking is a divide and conquer method for exhaustive search. Importantly, backtracking prunes branches that cannot give a result. An example of backtracking is given in the following code. Here, we have used a recursive approach to generate all the possible strings of a given length n built from the characters of a given string s:

def bitStr(n, s):
    if n == 1:
        return s
    return [digit + bits for digit in bitStr(1, s) for bits in bitStr(n - 1, s)]

print(bitStr(3, 'abc'))

This generates a list of all 27 three-letter combinations:

['aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', ..., 'ccb', 'ccc']

Note the double list comprehension and the two recursive calls within this comprehension. This recursively concatenates each element of the initial sequence, returned when n = 1, with each element of the string generated in the previous recursive call. In this sense, it is backtracking to uncover previously ungenerated combinations. The final list that is returned contains all n-letter combinations of the initial string.

Divide and conquer – long multiplication

For recursion to be more than just a clever trick, we need to understand how to compare it to other approaches, such as iteration, and to understand when its use will lead to a faster algorithm. An iterative algorithm that we are all familiar with is the procedure we learned in primary math classes, which was used to multiply two large numbers, that is, long multiplication. If you remember, long multiplication involved iterative multiplying and carry operations followed by a shifting and addition operation. Our aim here is to examine ways to measure how efficient this procedure is and attempt to answer the question: is this the most efficient procedure we can use for multiplying two large numbers together? Multiplying two 4-digit numbers together requires 16 multiplication operations, and we can generalize to say that multiplying two n-digit numbers requires, approximately, n² multiplication operations. This method of analyzing algorithms, in terms of the number of computational primitives such as multiplication and addition, is important because it gives us a way to understand the relationship between the time it takes to complete a certain computation and the size of the input to that computation. In particular, we want to know what happens when the input, the number of digits, n, is very large. Can we do better?

A recursive approach

It turns out that in the case of long multiplication, the answer is yes; there are in fact several algorithms for multiplying large numbers that require fewer operations. One of the most well-known alternatives to long multiplication is the Karatsuba algorithm, published in 1962.
This takes a fundamentally different approach: rather than iteratively multiplying single-digit numbers, it recursively carries out multiplication operations on progressively smaller inputs. Recursive programs call themselves on smaller subsets of the input. The first step in building a recursive algorithm is to decompose a large number into several smaller numbers. The most natural way to do this is to simply split the number into halves, the first half comprising the most significant digits and the second half comprising the least significant digits. For example, our four-digit number, 2345, becomes a pair of two-digit numbers, 23 and 45. We can write a more general decomposition of any two n-digit numbers x and y using the following, where m is any positive integer less than n. For the number x:

x = 10^m * a + b

For the number y:

y = 10^m * c + d

So, we can now rewrite our multiplication problem x times y as follows:

x * y = (10^m * a + b) * (10^m * c + d)

When we expand and gather like terms, we get the following:

x * y = 10^(2m) * ac + 10^m * (ad + bc) + bd

More conveniently, we can write it like this (equation 1):

x * y = 10^(2m) * z2 + 10^m * z1 + z0

Here,

z2 = ac
z1 = ad + bc
z0 = bd

It should be pointed out that this suggests a recursive approach to multiplying two numbers, since this procedure itself involves multiplication. Specifically, the products ac, ad, bc, and bd all involve numbers smaller than the input number, and so it is conceivable that we could apply the same operation as a partial solution to the overall problem. This algorithm, so far, consists of four recursive multiplication steps, and it is not immediately clear if it will be faster than the classic long multiplication approach. What we have discussed so far in regards to the recursive approach to multiplication had been well known to mathematicians since the late 19th century. The Karatsuba algorithm improves on this by making the following observation. We really only need to know three quantities, z2 = ac, z1 = ad + bc, and z0 = bd, to solve equation 1. We need to know the values of a, b, c, and d only insofar as they contribute to the overall sums and products involved in calculating the quantities z2, z1, and z0. This suggests the possibility that perhaps we can reduce the number of recursive steps. It turns out that this is indeed the situation. Since the products ac and bd are already in their simplest form, it seems unlikely that we can eliminate these calculations. We can, however, make the following observation:

(a + b) * (c + d) = ac + ad + bc + bd

When we subtract the quantities ac and bd, which we have calculated in the previous recursive step, we get the quantity we need, namely ad + bc:

(a + b) * (c + d) - ac - bd = ad + bc

This shows that we can indeed compute the sum of ad and bc without separately computing each of the individual quantities. In summary, we can improve on equation 1 by reducing the number of recursive multiplication steps from four to three. These three steps are as follows:

1. Recursively calculate ac.
2. Recursively calculate bd.
3. Recursively calculate (a + b)(c + d) and subtract ac and bd.
The following code shows a Python implementation of the Karatsuba algorithm:

from math import log10, ceil

def karatsuba(x, y):
    # The base case for recursion
    if x < 10 or y < 10:
        return x * y
    # sets n, the number of digits in the highest input number
    n = max(int(log10(x) + 1), int(log10(y) + 1))
    # rounds up n/2
    n_2 = int(ceil(n / 2.0))
    # adds 1 if n is uneven
    n = n if n % 2 == 0 else n + 1
    # splits the input numbers
    a, b = divmod(x, 10**n_2)
    c, d = divmod(y, 10**n_2)
    # applies the three recursive steps
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba((a + b), (c + d)) - ac - bd
    # performs the multiplication
    return ((10**n) * ac) + bd + ((10**n_2) * ad_bc)

To satisfy ourselves that this does indeed work, we can run the following test function:

import random

def test():
    for i in range(1000):
        x = random.randint(1, 10**5)
        y = random.randint(1, 10**5)
        expected = x * y
        result = karatsuba(x, y)
        if result != expected:
            return "failed"
    return "ok"

Summary

In this article, we looked at a way to recursively multiply large numbers using the Karatsuba algorithm. We also saw how to use recursion and backtracking for exhaustive search and generating strings.

Resources for Article: Further resources on this subject: Python Data Structures [article] How is Python code organized [article] Algorithm Analysis [article]
Suggesters for Improving User Search Experience

Packt
18 Nov 2016
11 min read
In this article by Bharvi Dixit, the author of the book Mastering ElasticSearch 5.0 - Third Edition, we will focus on improving the user search experience using suggesters, which allow you to correct user query spelling mistakes and build efficient autocomplete mechanisms. First, let's look at the query possibilities and the responses returned by Elasticsearch. We will try to show you the general principles, and then we will get into more details about each of the available suggesters.

Using the suggester under search

Before Elasticsearch 5.0, it was possible to get suggestions for a given text by using a dedicated _suggest REST endpoint. In Elasticsearch 5.0, this dedicated _suggest endpoint has been deprecated in favor of the suggest API. In this release, suggest-only search requests have been optimized for performance, and we can execute suggestions under the _search endpoint. Similar to the query object, we can use a suggest object, and what we need to provide inside the suggest object is the text to analyze and the type of suggester to be used (term or phrase). So if we would like to get suggestions for the words chrimes in wordl (note that we've misspelled the words on purpose), we would run the following query:

curl -XPOST "http://localhost:9200/wikinews/_search?pretty" -d'
{
  "suggest": {
    "first_suggestion": {
      "text": "chrimes in wordl",
      "term": {
        "field": "title"
      }
    }
  }
}'

The dedicated _suggest endpoint has been deprecated in Elasticsearch version 5.0 and might be removed in future releases, so be advised to use suggestion requests under the _search endpoint. All the examples covered in this article use the same _search endpoint for suggest requests.

As you can see, the suggestion request is wrapped inside the suggest object and is sent to Elasticsearch in its own object with the name we chose (in the preceding case, it is first_suggestion). Next, we specify the text for which we want the suggestion to be returned, using the text parameter. Finally, we add the suggester object, which is either term or phrase. The suggester object contains its configuration, which for the term suggester used in the preceding command is the field we want to use for suggestions (the field property). We can also send more than one suggestion at a time by adding multiple suggestion names. For example, if in addition to the preceding suggestion we would also like to include a suggestion for the word arest, we would use the following command:

curl -XPOST "http://localhost:9200/wikinews/_search?pretty" -d'
{
  "suggest": {
    "first_suggestion": {
      "text": "chrimes in wordl",
      "term": {
        "field": "title"
      }
    },
    "second_suggestion": {
      "text": "arest",
      "term": {
        "field": "text"
      }
    }
  }
}'

Understanding the suggester response

Let's now look at the example response for the suggestion query we have executed.
Although the response will differ for each suggester type, let's look at the response returned by Elasticsearch for the first command we sent in the preceding code, which used the term suggester:

{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "suggest" : {
    "first_suggestion" : [
      {
        "text" : "chrimes",
        "offset" : 0,
        "length" : 7,
        "options" : [
          { "text" : "crimes", "score" : 0.8333333, "freq" : 36 },
          { "text" : "choices", "score" : 0.71428573, "freq" : 2 },
          { "text" : "chrome", "score" : 0.6666666, "freq" : 2 },
          { "text" : "chimps", "score" : 0.6666666, "freq" : 1 },
          { "text" : "crimea", "score" : 0.6666666, "freq" : 1 }
        ]
      },
      {
        "text" : "in",
        "offset" : 8,
        "length" : 2,
        "options" : [ ]
      },
      {
        "text" : "wordl",
        "offset" : 11,
        "length" : 5,
        "options" : [
          { "text" : "world", "score" : 0.8, "freq" : 436 },
          { "text" : "words", "score" : 0.8, "freq" : 6 },
          { "text" : "word", "score" : 0.75, "freq" : 9 },
          { "text" : "worth", "score" : 0.6, "freq" : 21 },
          { "text" : "worst", "score" : 0.6, "freq" : 16 }
        ]
      }
    ]
  }
}

As you can see in the preceding response, the term suggester returns a list of possible suggestions for each term that was present in the text parameter of our first_suggestion section. For each term, the term suggester will return an array of possible suggestions with additional information. Looking at the data returned for the wordl term, we can see the original word (the text parameter), its offset in the original text parameter (the offset parameter), and its length (the length parameter). The options array contains suggestions for the given word and will be empty if Elasticsearch doesn't find any suggestions. Each entry in this array is a suggestion and is characterized by the following properties:

- text: This is the text of the suggestion.
- score: This is the suggestion score; the higher the score, the better the suggestion will be.
- freq: This is the frequency of the suggestion. The frequency represents how many times the word appears in documents in the index we are running the suggestion query against. The higher the frequency, the more documents will have the suggested word in their fields and the higher the chance that the suggestion is the one we are looking for.

Please remember that the phrase suggester response will differ from the one returned by the term suggester.

The term suggester

The term suggester works on the basis of edit distance, which means that the best suggestion is the one that requires the fewest character changes or removals to look like the original word. For example, let's take the words worl and work. In order to change the worl term to work, we need to change the l letter to k, so it means a distance of one. Of course, the text provided to the suggester is analyzed, and then terms are chosen to be suggested.

The phrase suggester

The term suggester provides a great way to correct user spelling mistakes on a per-term basis. However, if we would like to get back whole phrases, it is not possible to do that when using this suggester. This is why the phrase suggester was introduced. It is built on top of the term suggester and adds additional phrase calculation logic to it so that whole phrases can be returned instead of individual terms. It uses N-gram based language models to calculate how good the suggestion is, and it will probably be a better choice than the term suggester when we want whole phrases suggested.
The N-gram approach divides terms in the index into grams: word fragments built of one or more letters. For example, if we would like to divide the word mastering into bi-grams (two-letter N-grams), it would look like this: ma as st te er ri in ng.

The completion suggester

So far we have read about the term and phrase suggesters, which are used for providing suggestions, but the completion suggester is completely different. It is a prefix-based suggester that allows us to create autocomplete (search-as-you-type) functionality in a very efficient way, because it stores complicated structures in the index instead of calculating them during query time. This suggester is not about correcting user spelling mistakes. In Elasticsearch 5.0, the completion suggester has gone through a complete rewrite. Both the syntax and the data structures of the completion type field have been changed, and so has the response structure. Many exciting new features and speed optimizations have been introduced in the completion suggester. One of these features is making the completion suggester near real time, which allows deleted suggestions to be omitted from suggestion results as soon as they are deleted.

The logic behind the completion suggester

The prefix suggester is based on the data structure called Finite State Transducer (FST) (for more information, refer to http://en.wikipedia.org/wiki/Finite_state_transducer). Although it is highly efficient, it may require significant resources to build on systems with large amounts of data in them: systems that Elasticsearch is perfectly suitable for. If we were to build such a structure on the nodes after each restart or cluster state change, we could lose performance. Because of this, the Elasticsearch creators decided to use an FST-like structure during index time and store it in the index so that it can be loaded into memory when needed.

Using the completion suggester

To use a prefix-based suggester, we need to properly index our data with a dedicated field type called completion, which stores the FST-like structure in the index. In order to illustrate how to use this suggester, let's assume that we want to create an autocomplete feature to show book authors, which we store in an additional index. In addition to the authors' names, we want to return the identifiers of the books they wrote, in order to search for them with an additional query. We start by creating the authors index with the following command:

curl -XPUT "http://localhost:9200/authors" -d'
{
  "mappings": {
    "author": {
      "properties": {
        "name": {
          "type": "keyword"
        },
        "suggest": {
          "type": "completion"
        }
      }
    }
  }
}'

Our index will contain a single type called author. Each document will have two fields: the name field, which is the name of the author, and the suggest field, which is the field we will use for autocomplete. The suggest field is the one we are interested in; we've defined it using the completion type, which will result in storing the FST-like structure in the index.

Implementing your own autocompletion

The completion suggester has been designed to be a powerful and easily implemented solution for autocomplete, but it supports only prefix queries. Most of the time, autocomplete need only work as a prefix query; for example, if I type elastic, then I expect elasticsearch as a suggestion, not nonelastic. There are some use cases, however, where one wants to implement more general partial word completion, and the completion suggester fails to fulfill this requirement.
The second limitation of the completion suggester is that it does not allow advanced queries and filters. To get rid of both these limitations, we are going to implement a custom autocomplete feature based on N-grams, which works in almost all scenarios.

Creating the index

Let's create an index named location-suggestion with the following settings and mappings:

curl -XPUT "http://localhost:9200/location-suggestion" -d'
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "nGram_filter": {
            "token_chars": [
              "letter",
              "digit",
              "punctuation",
              "symbol",
              "whitespace"
            ],
            "min_gram": "2",
            "type": "nGram",
            "max_gram": "20"
          }
        },
        "analyzer": {
          "nGram_analyzer": {
            "filter": [
              "lowercase",
              "asciifolding",
              "nGram_filter"
            ],
            "type": "custom",
            "tokenizer": "whitespace"
          },
          "whitespace_analyzer": {
            "filter": [
              "lowercase",
              "asciifolding"
            ],
            "type": "custom",
            "tokenizer": "whitespace"
          }
        }
      }
    }
  },
  "mappings": {
    "locations": {
      "properties": {
        "name": {
          "type": "text",
          "analyzer": "nGram_analyzer",
          "search_analyzer": "whitespace_analyzer"
        },
        "country": {
          "type": "keyword"
        }
      }
    }
  }
}'

Understanding the parameters

If you look carefully at the preceding curl request for creating the index, you will see that it contains both settings and mappings. We will now go through them in detail, one by one.

Configuring settings

Our settings contain two custom analyzers: nGram_analyzer and whitespace_analyzer. We have built the custom whitespace_analyzer using the whitespace tokenizer just to make sure that all the tokens are indexed in lowercase and asciifolded form. Our main interest is nGram_analyzer, which contains a custom filter, nGram_filter, consisting of the following parameters:

- type: Specifies the type of token filter, which is nGram in our case.
- token_chars: Specifies what kinds of characters are allowed in the generated tokens. Punctuation and special characters are generally removed from the token streams, but in our example we have intended to keep them. We have kept whitespace as well, so that if a text contains United States and a user searches for u s, United States still appears in the suggestions.
- min_gram and max_gram: These two attributes set the minimum and maximum length of substrings that will be generated and added to the lookup table. For example, according to our settings for the index, the token India will generate the following tokens: [ "di", "dia", "ia", "in", "ind", "indi", "india", "nd", "ndi", "ndia" ]

Configuring mappings

The document type of our index is locations, and it has two fields, name and country. The most important thing to see is the way analyzers have been defined for the name field, which will be used for autosuggestion. For this field, we have set the index analyzer to our custom nGram_analyzer, whereas the search analyzer is set to whitespace_analyzer. The index_analyzer parameter is no longer supported from Elasticsearch version 5.0 onward. Also, if you want to configure the search_analyzer property for a field, then you must configure the analyzer property too, the way we have shown in the preceding example.

Summary

In this article, we focused on improving the user search experience. We started with the term and phrase suggesters and then covered search-as-you-type, that is, the autocomplete feature, which is implemented using the completion suggester. We also saw the limitations of the completion suggester in handling advanced queries and partial matching, which we then solved by implementing our own custom completion using N-grams.
Summary

In this article, we focused on improving the user search experience. We started with the term and phrase suggesters, and then covered search-as-you-type, that is, the autocompletion feature implemented with the completion suggester. We also saw the limitations of the completion suggester in handling advanced queries and partial matching, which we then solved by implementing our own custom completion based on N-grams.

Resources for Article: Further resources on this subject: Searching Your Data [article] Understanding Mesos Internals [article] Big Data Analysis (R and Hadoop) [article]


Create Dynamic Tilemaps in Duality (C#) – Part I

Lőrinc Serfőző
18 Nov 2016
6 min read
Introduction

This article guides you through the process of creating tilemaps and changing them at runtime in the Duality game engine.

A Few Words about the Duality Game Engine

Duality is an open source, Windows-only 2D game engine based on OpenGL and written entirely in C#. Despite being a non-commercial project, it is a rather mature product, having been around since 2011, and it has proved capable of delivering professional games. Obviously, its feature set is somewhat limited compared to the major proprietary engines. On the other hand, this fact makes Duality an excellent learning tool, because the user often needs to implement logic at a lower level and also has the opportunity to inspect the source code of the engine. I would therefore recommend this engine to game developers of intermediate skill level who would like to have a take on open source and are not afraid of learning by doing. This article does not describe the inner workings of Duality in detail, because these are well explained on the GitHub wiki of the project. If you are not familiar with the engine, please consider reviewing some of the key concepts described there. The repository containing the finished project is available on GitHub.

Required tools

First things first: the Duality editor only supports Windows at the moment. You will need a C# compiler and a text editor. Visual Studio 2013 or higher is recommended, but Xamarin Studio/MonoDevelop also works. Duality itself is written in C# 4, but this tutorial uses language features introduced in C# 6.

Creating a new project and initializing the Tilemap plugin

Downloading the Duality binaries

The latest binary package of the engine is available on its main site. Unlike most game engines, it does not need a setup process. In fact, every single project is a self-contained application, including the editor. This might seem strange at first, but this architecture makes version control, migration, and having different engine versions for different projects very easy. On top of this, it is also a self-building application. When you unzip the downloaded file, it contains only one executable: DualityEditor.exe. At its first run, it downloads the required packages from the main repository. The package system is based on the industry-standard NuGet format, and the packages are actually stored on NuGet.org. After accepting the license agreement and proceeding through the automated package install, the following screen should show up:

Dualitor

The usage of the editor application, named Dualitor, is intuitive; however, if you are not familiar with it yet, the quick start guide might be worth checking out.

Adding the Tilemaps plugin to the project

Duality has a modular architecture: every piece of distinct functionality is organized in modules or, using the official terminology, plugins. Practically speaking, every plugin is a .NET assembly that Duality loads and uses. There are two types of plugins: core plugins provide in-game functionality and services and are distributed with the game, while editor plugins are used by Dualitor and usually provide some sort of editing or content management functionality to aid the developer in creating the game. Now let's install the Tilemaps plugin. It is done via the Package Management window, which is available from the File menu. The following steps describe the installation process:

1. Select the 'Online Repository' item from the 'View' dropdown.
2. Select the Tilemaps (Editor) entry from the list.
Since the editor plugin depends on the Tilemaps (Core) package, the latter will be installed as well.

3. Click on 'Install'. This downloads the two packages from the repository.
4. Click on 'Apply'. Dualitor restarts and loads the plugins.

Installing the Tilemaps plugin

Tileset resource and Tilemap GameObject

Creating the Tileset resource

Resources are the asset-holder objects in Duality. They can be simple imported types such as bitmap information, but some, like the Tileset resource we are going to use now, are generated by the editor. A Tileset contains information about rendering, collision, and autotiling properties, as well as a reference to the actual pixel data to be rendered. This pixel data is contained in another resource: the Pixmap. Let's start with a very simple tileset that consists of two solid-color tiles, sized 32 x 32.

Tileset

Download this .png file and drop it in the Project View. A new Pixmap is created. Next, create a new Tileset from the context menu of the Project View:

Add Tileset

At this point, the Tileset resource does not know which Pixmap it should use. First, open the Tileset Editor window from the View menu on the menu bar; the same window is invoked when the Tileset is double-clicked in the Project View. The following steps describe how to establish this link:

1. Select the newly created Tileset in the Tileset Editor.
2. Add a new Visual Layer by clicking on the little green plus button.
3. Select the 'Main Texture' layer, if it is not already selected.
4. Drag and drop the Pixmap named 'tileset' into the 'SourceData' area. Since the default tile size is already 32 x 32, there is no need to modify it.
5. Click on 'Apply' to recompile the resource.

A quicker way to do this is to simply select the 'Create Tileset' option from the Pixmap's context menu, but it is always nice to know what happens in the background.

Linking the Pixmap and the Tileset resource

Instantiating a Tilemap

Duality, like many engines, uses a Scene-GameObject-Component model of object hierarchy. To display a tilemap in the scene, a GameObject is needed, with two components attached to it: a Tilemap and a TilemapRenderer. The former contains the actual tilemap information, that is, which tile is located at a specific position. Creating this GameObject is very easy; just drag and drop the Tileset resource from the Project View to the Scene View. Switch back to the Camera #0 window from the Tileset Editor, and you should see a large green rectangle, which is the tilemap itself. Notice that a new GameObject called 'TilemapRenderer' appears in the Scene View, with three components attached: a Transform, a Tilemap, and a TilemapRenderer. To edit the Tilemap, the Camera #0 window mode has to be set to 'Tilemap Editor'. After doing that, a new window named 'Tile Palette' appears. The tilemap can be edited with multiple tools; feel free to experiment with them!

Editing the Tilemap

To be continued

In the second part of this tutorial, some C# coding is involved. We will create a custom component to change the tilemap dynamically; a rough preview of its shape follows.
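As a small taste of Part II, the skeleton of such a component could look like the sketch below. It assumes Duality's standard ICmpUpdatable component interface for per-frame updates; the class name and namespace are placeholders, and the actual tile-manipulation calls are deferred to Part II:

using Duality;

namespace MyGame
{
	// A minimal, updatable component skeleton; the real tile-changing
	// logic is the subject of Part II.
	public class DynamicTilemapComponent : Component, ICmpUpdatable
	{
		void ICmpUpdatable.OnUpdate()
		{
			// Query the sibling Tilemap component and modify tiles here.
		}
	}
}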
Author

Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics, an interdisciplinary field combining the more traditional mechanical engineering, electrical engineering, and informatics, and has quickly grown a passion for software development. He is a supporter of open source and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for its users.

How to Create 2D Navigation with the Godot Engine

George Marques
18 Nov 2016
6 min read
The Godot Engine has built-in functionality that makes it easy to create navigation in the game world. This post will cover how to make an object follow a fixed path and how to travel between two points while avoiding the obstacles in the way.

Following a fixed path

Godot has a couple of nodes that help you create a path that can be followed by another node. One use of this is to make an NPC follow a fixed path in the map. Assuming you have a new project, create a Path2D node. You can then use the controls on the toolbar to draw a curve representing the path you need to follow.

Curve buttons

After adding the points and adjusting the curves, you will have something like the following:

Path curve

Now you need to add a PathFollow2D node as a child of Path2D. This node will do the actual movement, based on its Offset property. Then add an AnimationPlayer node as a child of the PathFollow2D. Create a new animation in the player and set its length to five seconds. At the start of the animation, add a keyframe with the value 0 for the Unit Offset property of the PathFollow2D; you can do that by clicking on the key icon next to the property in the Inspector dock. Then go to the end of the animation and add a keyframe with the value of 1. This will make Unit Offset go from 0 to 1 over the duration of the animation (five seconds in this case). Set the animation to loop and autoplay. To see the effect in practice, add a Sprite node as a child of PathFollow2D; you can use the default Godot icon as its texture. Enable Visible Navigation under the Debug Options menu (the last button in the top center bar) to make the result easier to see. Save the scene and play it to see the Godot robot run around the screen:

Sprite following path

That's it! Making an object follow a fixed path is quite easy with the built-in resources of Godot. Not even scripting is needed for this example.
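That said, if you prefer to drive the movement from code instead of an AnimationPlayer, a short script attached to the PathFollow2D node can advance the offset every frame. This is a minimal sketch using the same Godot 2.x API style as the script later in this post; the speed of 100 pixels per second is just an illustrative value:

extends PathFollow2D

var speed = 100 # pixels per second along the curve; illustrative value

func _ready():
    # Enable the per-frame process callback
    set_process(true)

func _process(delta):
    # Advance along the Path2D curve at a constant rate
    set_offset(get_offset() + speed * delta)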
Navigation and Avoiding Obstacles

Sometimes you don't have a fixed path to follow: the path might change dynamically, or your AI must determine it while avoiding the obstacles and walls that might be in the way. Don't worry, because Godot will also help you in this regard. Create a new scene and add a Node2D as the root, then add a Navigation2D node as its child. This node will be responsible for creating the paths for you. You now need to add a NavigationPolygonInstance node as a child of the Navigation2D. This will hold the polygon used for navigation, which determines the passable areas. To create the polygon itself, click on the pencil button on the toolbar (it appears only while the NavigationPolygonInstance node is selected). The first time you try to add a point, the editor will warn you that there is no NavigationPolygon resource and will offer to create one. Click on the Create button and all will be set.

Navigation resource warning

First you need to create the outer boundaries of the navigable area. The polygon can have as many points as you need, but it does need to be a closed polygon. Note that you can right-click on points to remove them and hold Ctrl while clicking on lines to add points. Once you finish the boundaries, click the pencil button again and create polygons inside the boundary to mark the impassable areas, such as the walls. You will end up with something like the following:

Navigation polygon

Add a Sprite node as a child of the root Node2D and set its texture (you can use the default Godot icon). This will be the object navigating through the space. Now add the following script to the root node. The most important detail here is the get_simple_path function, which returns the list of points to travel from start to end without passing through the walls:

extends Node2D

# Global variables
var start = Vector2()
var end = Vector2()
var path = []
var speed = 1
var transition = 0
var path_points = 0

func _ready():
    # Enable the processing of user input
    set_process_input(true)
    # Enable the general process callback
    set_process(true)

func _input(event):
    # If the user presses a mouse button
    if event.type == InputEvent.MOUSE_BUTTON and event.pressed:
        if event.button_index == BUTTON_LEFT:
            # If it's the left button, set the starting point
            start = event.global_pos
        elif event.button_index == BUTTON_RIGHT:
            # If it's the right button, set the ending point
            end = event.global_pos
        # Reset the sprite position
        get_node("Sprite").set_global_pos(start)
        transition = 0

func _process(delta):
    # Get the list of points that compose the path
    path = get_node("Navigation2D").get_simple_path(start, end)
    # If the path has fewer points than it did before, reset the transition point
    if path.size() < path_points:
        transition = 0
    # Update the current amount of points
    path_points = path.size()
    # If there are fewer than 2 points, nothing can be done
    if path_points < 2:
        return
    var sprite = get_node("Sprite")
    # This uses the linear interpolation function from Vector2 to move the sprite at a constant
    # rate through the points of the path. Transition is a value from 0 to 1 used as a ratio.
    sprite.set_global_pos(sprite.get_global_pos().linear_interpolate(path[1], transition))
    start = sprite.get_global_pos()
    transition += speed * delta
    # Reset the transition when it gets to the point
    if transition > 1:
        transition = 0
    # Update the node so the _draw() function is called
    update()

func _draw():
    # Draw a white circle with a radius of 10px for each point in the path
    for p in path:
        draw_circle(p, 10, Color(1, 1, 1))

Enable Visible Navigation via the Debug Options button to help you visualize the effect. Save and run the scene. You can then left-click somewhere to define a starting point and right-click to define the ending point. The points will be marked as white circles, and the Sprite will follow the path, clearing the intermediate points as it travels along.

Navigating Godot bot

Conclusion

The Godot Engine has many features to ease the development of all kinds of games. The navigation functions have many uses in top-down games, be it an RPG or an RTS. Tilesets also embed navigation polygons that can be used in a similar fashion.

About the Author: George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open-source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping to solve bugs and add new features to the software, while also providing solutions to the community for the questions they have.


The R Statistical Package Interfacing with Python

Janu Verma
17 Nov 2016
8 min read
One of my coding hobbies is to explore different Python packages and libraries. In this post, I'll talk about the package rpy2, which is used to call R inside Python. Being an avid user of R and a huge supporter of its graphical packages, I had always wanted to call R inside my Python code to produce beautiful visualizations. The R framework offers machinery for a variety of statistical and data mining tasks. Let's review the basics of R before we delve into R-Python interfacing.

R is a statistical language which is free and open source, with comprehensive support for various statistical, data mining, and visualization tasks. Quick-R describes it as: "R is an elegant and comprehensive statistical and graphical programming language." R is one of the fastest growing languages, mainly due to the surge of interest in statistical learning and data science. The Data Science Specialization on Coursera has all of its courses taught in R. There are R packages for machine learning, graphics, text mining, bioinformatics, topic modeling, interactive visualizations, markdown, and many others. In this post, I'll give a quick introduction to R. The motivation is to acquire some knowledge of R in order to be able to follow the discussion on R-Python interfacing.

Installing R

R can be downloaded from one of the Comprehensive R Archive Network (CRAN) mirror sites.

Running R

To run R interactively on the command line, type R. Alternatively, launch the standard GUI (which should have been included in the download) and type R code into it. RStudio is the most popular IDE for R; it is recommended, though not required, to install RStudio and run R in it. To write a file with R code, create a file with the .r extension (for example, myFirstCode.r) and run the code by typing the following on the terminal:

Rscript myFirstCode.r

Basics of R

The most fundamental data structure in R is a vector; in fact, everything in R is a vector (even numbers are 1-dimensional vectors). This is one of the strangest things about R. Vectors contain elements of the same type. A vector is created by using the c() function:

a = c(1,2,5,9,11)
a
[1]  1  2  5  9 11

strings = c("aa", "apple", "beta", "down")
strings
[1] "aa"    "apple" "beta"  "down"

The elements in a vector are indexed, but the indexing starts at 1 instead of 0 as in most major languages (for example, Python):

strings[1]
[1] "aa"

The fact that everything in R is a vector and that the indexing starts at 1 are the main reasons for people's initial frustration with R (I forget this all the time).

Data Frames

A lot of R packages expect data as a data frame. Data frames are essentially matrices, but the columns can be accessed by names and can be of different types. Data frames are useful outside of R as well; the Python package pandas was written primarily to implement data frames and to do analysis on them. In R, data frames are created (from vectors) as follows:

students = c("Anne", "Bret", "Carl", "Daron", "Emily")
scores = c(7,3,4,9,8)
grades = c('B', 'D', 'C', 'A', 'A')
results = data.frame(students, scores, grades)
results
  students scores grades
1     Anne      7      B
2     Bret      3      D
3     Carl      4      C
4    Daron      9      A
5    Emily      8      A

The columns of a data frame can be accessed as:

results$students
[1] Anne  Bret  Carl  Daron Emily
Levels: Anne Bret Carl Daron Emily

This gives a vector, the elements of which can be accessed by indexing:

results$students[1]
[1] Anne
Levels: Anne Bret Carl Daron Emily

Reading Files

Most of the time, the data is given as a comma-separated values (csv) file or a tab-separated values (tsv) file.
We will see how to read a csv/tsv file in R and create a data frame from it. (Aside: the datasets in most Kaggle competitions are given as csv files, and we are required to do machine learning on them. In Python, one creates a pandas data frame or a numpy array from such a csv file.) In R, we use the read.csv or read.table command to load a csv file into memory, for example, for the Titanic competition on Kaggle:

train_all <- read.csv("train.csv", header=TRUE)
train <- data.frame(survived=train_all$Survived, age=train_all$Age, fare=train_all$Fare, pclass=train_all$Pclass)

Similarly, a tsv file can be loaded as:

data <- read.csv("file.tsv", header=TRUE, sep="\t")

Thus, given a csv/tsv file with or without headers, we can read it using the read.csv function and create a data frame using data.frame(vector_1, vector_2, ..., vector_n). This should be enough to start exploring R packages. Another command that is very useful in R is head(), which, like the head command on Unix, shows the first few rows of a data frame.

rpy2

First things first, we need to have both Python and R installed. Then install rpy2 from the Python Package Index (PyPI). To do this, simply type the following on the command line:

pip install rpy2

We will use the high-level interface to R, the robjects subpackage of rpy2:

import rpy2.robjects as ro

We can pass commands to the R session by putting the R commands in the ro.r() method as strings. Recall that everything in R is a vector. Let's create a vector using robjects:

ro.r('x=c(2,4,6,8)')
print(ro.r('x'))
[1] 2 4 6 8

Keep in mind that though x is an R object (a vector), ro.r('x') is a Python object (an rpy2 object). This can be checked as follows:

type(ro.r('x'))
<class 'rpy2.robjects.vectors.FloatVector'>

The most important data types in R are data frames, which are essentially matrices. We can create a data frame using rpy2:

ro.r('x=c(2,4,6,8)')
ro.r('y=c(4,8,12,16)')
ro.r('rdf=data.frame(x,y)')

This creates an R data frame, rdf. If we want to manipulate this data frame using Python, we need to convert it to a Python object. We will convert the R data frame to a pandas data frame; the Python package pandas contains efficient implementations of data frame objects in Python:

import pandas.rpy.common as com
df = com.load_data('rdf')
print type(df)
<class 'pandas.core.frame.DataFrame'>

df.x = 2*df.x

Here we have doubled each of the elements of the x vector in the data frame df. But df is a Python object, which we can convert back to an R data frame using pandas as:

rdf = com.convert_to_r_dataframe(df)
print type(rdf)
<class 'rpy2.robjects.vectors.DataFrame'>

Let's use the plotting machinery of R, which is the main purpose of studying rpy2:

ro.r('plot(x,y)')

Not only R data types, rpy2 also lets us import R packages (given that these packages are installed in R) and use them for analysis. Here we will build a linear model on x and y using the R package stats:

from rpy2.robjects.packages import importr
stats = importr('stats')
base = importr('base')
fit = stats.lm('y ~ x', data=rdf)
print(base.summary(fit))

We get the following results:

Residuals:
1 2 3 4
0 0 0 0

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)        0          0      NA       NA
x                  2          0     Inf   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0 on 2 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: Inf on 1 and 2 DF, p-value: < 2.2e-16

R programmers will immediately recognize the output as coming from applying the linear model function lm() on the data.
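Data can also flow in the opposite direction, from Python into R, without round-tripping through strings. As a minimal sketch (the variable names and values are illustrative), rpy2's vector classes let us build R vectors directly from Python lists and bind them in the R global environment:

import rpy2.robjects as ro

# Build an R numeric vector straight from a Python list
py_values = [1.5, 2.5, 3.5]
r_vector = ro.FloatVector(py_values)

# Bind it as the R variable 'v' and use it from R code
ro.globalenv['v'] = r_vector
print(ro.r('mean(v)'))  # [1] 2.5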
I'll end this discussion with an example using my favorite R package, ggplot2. I have written a lot of posts on data visualization using ggplot2. The following example is borrowed from the official documentation of rpy2:

import math, datetime
import rpy2.robjects.lib.ggplot2 as ggplot2
import rpy2.robjects as ro
from rpy2.robjects.packages import importr

base = importr('base')
datasets = importr('datasets')
mtcars = datasets.data.fetch('mtcars')['mtcars']

pp = (ggplot2.ggplot(mtcars) +
      ggplot2.aes_string(x='wt', y='mpg', col='factor(cyl)') +
      ggplot2.geom_point() +
      ggplot2.geom_smooth(ggplot2.aes_string(group='cyl'), method='lm'))
pp.plot()

Author: Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE Vis, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journals, and so on. His current focus is on the development of visual analytics systems for prediction and understanding. He advises start-ups and other companies on data science and machine learning in the Delhi-NCR area.


How to build a Facebook Messenger Bot

Amit Kothari
17 Nov 2016
9 min read
One of the trending topics in the tech world this year is conversational interfaces. Everyone from big tech companies to start-ups is betting on them: there is an explosion of virtual assistants and chat bots, and their number is increasing every week. What is a conversational interface, and why are companies investing in it? In this post, we will learn about it and build a simple bot using the Facebook Messenger Platform.

Conversational Interface

A conversational interface is an interface that allows users to interact with computers using natural language, instead of commands or clicks and taps. It can be voice-based, like Siri, OK Google, or Microsoft's Cortana, or it can be text-based, like many Slack or Facebook Messenger bots. Instead of building apps for different platforms, device types, and sizes, companies can now provide a way to interact with their services using voice or text, without the need to download an app or learn how to use it.

Facebook Messenger Bot

Many apps like Slack, Telegram, and Skype now offer platforms to build virtual assistants or bots. But with more than 1 billion users, Facebook Messenger is definitely the most popular choice. A Facebook Messenger bot works in the following way:

1. The user sends a message to the bot using their Facebook Messenger account.
2. The Facebook Messenger Platform receives the message and posts it to the bot's server using a webhook.
3. The bot server receives the message, processes it, and sends a response back to the Facebook Messenger Platform using an HTTP POST request.
4. Facebook forwards the response to the user.

Let's build a simple Facebook Messenger bot.

Creating a Facebook App and Page

First we need to create a Facebook Page and a Facebook App:

1. Create a Facebook account, if you do not have one.
2. Create a Facebook Page. This page will act as the identity of your bot.
3. Create a Facebook App. Enter a display name and contact e-mail, and select a category for your bot.
4. After creating the app, click on 'Add product' and then select the 'Messenger' option.
5. Click on the 'Get started' button, and under the 'Token Generation' section, select the page you just created. This generates a Page Access Token, which the bot server needs in order to communicate with the Facebook Messenger Platform. Copy this token for later use.

You also need to set up a webhook so that Facebook can forward messages to the bot server, but before that, let's build the server.

Building the Bot Server

You can use any language or framework to build the bot server. For this example, I am using Node.js. Let's start by creating an Express.js hello world app. Before we start, we need Node.js and npm installed. Follow the instructions on the Node.js website if you do not have these installed already:

1. Create a new directory for your application, and inside the app directory, create the package.json file by using the npm init command.
2. Install Express and add it as a dependency using the npm install --save express command.
3. Create a file called index.js with the following code:

const app = require('express')();

app.set('port', (process.env.PORT || 5000));

app.get('/', function (req, res) {
  res.send('Hello World');
});

app.listen(app.get('port'), function () {
  console.log('App running on port', app.get('port'));
});

Our hello world app is done. Run the node index.js command and open http://localhost:5000 in your browser. If all is good, you will see the 'Hello World' text on the page.
Now that we have the basic app set up, let's add the code to be able to communicate with the Facebook Messenger Platform:

1. Install the body-parser and request npm modules using the npm install --save body-parser request command. body-parser is a middleware that parses incoming request bodies, and request is an HTTP client.
2. Update index.js with the following code. Replace <fb-page-access-token> with the Page Access Token generated before. You will need the validation token while setting up the webhook; feel free to change it to anything you like:

'use strict'

const app = require('express')();
const bodyParser = require('body-parser');
const request = require('request');

const VALIDATION_TOKEN = 'super_secret_validation_token';
const PAGE_ACCESS_TOKEN = '<fb-page-access-token>'; // replace <fb-page-access-token> with your page access token

app.set('port', (process.env.PORT || 5000));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

app.get('/', function (req, res) {
  res.send('Hello World');
});

app.get('/webhook', function (req, res) {
  if (req.query['hub.mode'] === 'subscribe' &&
      req.query['hub.verify_token'] === VALIDATION_TOKEN) {
    res.status(200).send(req.query['hub.challenge']);
  } else {
    res.sendStatus(403);
  }
});

app.post('/webhook/', function (req, res) {
  // Iterate over each entry
  req.body.entry.forEach(function (entry) {
    // Iterate over each messaging event
    entry.messaging.forEach(function (event) {
      let sender = event.sender.id;
      if (event.message && event.message.text) {
        callSendAPI(sender, `Message received ${event.message.text}`);
      }
    });
  });
  res.sendStatus(200); // Respond with 200 to notify that the message was received by the webhook successfully
});

function callSendAPI(sender, text) {
  request({
    url: 'https://graph.facebook.com/v2.6/me/messages',
    qs: { access_token: PAGE_ACCESS_TOKEN },
    method: 'POST',
    json: {
      recipient: { id: sender },
      message: { text: text },
    }
  }, function (error, response, body) {
    if (error) {
      console.error('Unable to send message', error);
    }
  });
}

app.listen(app.get('port'), function () {
  console.log('App running on port', app.get('port'));
});

Here we have defined two new service endpoints. The first one responds to GET requests and will be used for verification when we set up the webhook. The second endpoint responds to POST requests; it processes the incoming message and sends a response using the callSendAPI function. For now, the server simply sends back the message received from the user. You can deploy the server anywhere you like; because of its simplicity, I am going to use Heroku for this tutorial.

Deploying the App to Heroku

1. If you do not have an account, sign up on heroku.com and then install the Heroku CLI.
2. Initialize a git repository and commit the code to easily push it to Heroku. Use git init to initialize, and git add . followed by git commit -m "Initial commit" to commit the code.
3. Create a Procfile file with the following content. The Procfile declares what command to run on Heroku:

web: node index.js

4. Run the heroku create command to create and set up a new app on Heroku.
5. You can now run the app locally using the heroku local command. To deploy the app on Heroku, use git push heroku master, and use heroku open to open the deployed app in the browser.

Setting up the Messenger Webhook

Now that the server is ready, we can set up the webhook. Go back to the app page and, under the Messenger settings page, click on the 'Setup Webhooks' button. Enter the bot server webhook URL (https://<your-server>/webhook) and the validation token, and select all the options under the subscription fields.
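Before pointing Facebook at the server, it can be reassuring to simulate Facebook's verification request by hand. This is just a sanity check against the GET endpoint defined above; the server URL is a placeholder and the challenge value is arbitrary:

curl "https://<your-server>/webhook?hub.mode=subscribe&hub.verify_token=super_secret_validation_token&hub.challenge=42"

If the validation token matches, the server echoes back 42, which is exactly what Facebook expects during webhook verification.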
We are ready to test our bot. Go to the Facebook page you created, click on message, and send a message to the bot. If everything is good, you will see a response from the bot.

The bot is working, but we can update it to send a different response based on the user's message. Let's create a simple bot for Game of Thrones fans: if a user types in one of the great houses' names, the bot will respond with that house's words. Update index.js with the following code:

'use strict'

const app = require('express')();
const bodyParser = require('body-parser');
const request = require('request');

const VALIDATION_TOKEN = 'super_secret_validation_token';
const PAGE_ACCESS_TOKEN = '<fb-page-access-token>'; // replace <fb-page-access-token> with your page access token

const HOUSE_WORDS = {
  STARK: 'Winter is Coming',
  LANNISTER: 'Hear Me Roar!',
  BARATHEON: 'Ours is the Fury',
  TULLY: 'Family, Duty, Honor',
  GREYJOY: 'We Do Not Sow',
  MARTELL: 'Unbowed, Unbent, Unbroken',
  TYRELL: 'Growing Strong',
  FREY: 'We Stand Together',
}

app.set('port', (process.env.PORT || 5000));
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

app.get('/', function (req, res) {
  res.send('Hello World');
});

app.get('/webhook', function (req, res) {
  if (req.query['hub.mode'] === 'subscribe' &&
      req.query['hub.verify_token'] === VALIDATION_TOKEN) {
    res.status(200).send(req.query['hub.challenge']);
  } else {
    res.sendStatus(403);
  }
});

app.post('/webhook/', function (req, res) {
  // Iterate over each entry
  req.body.entry.forEach(function (entry) {
    // Iterate over each messaging event
    entry.messaging.forEach(function (event) {
      let sender = event.sender.id;
      if (event.message && event.message.text) {
        callSendAPI(sender, getHouseWord(event.message.text));
      }
    });
  });
  res.sendStatus(200); // Respond with 200 to notify that the message was received by the webhook successfully
});

function callSendAPI(sender, text) {
  request({
    url: 'https://graph.facebook.com/v2.6/me/messages',
    qs: { access_token: PAGE_ACCESS_TOKEN },
    method: 'POST',
    json: {
      recipient: { id: sender },
      message: { text: text },
    }
  }, function (error, response, body) {
    if (error) {
      console.error('Unable to send message', error);
    }
  });
}

function getHouseWord(text) {
  const house = Object.keys(HOUSE_WORDS)
    .find(function (houseName) {
      return text.toUpperCase().indexOf(houseName) !== -1;
    });
  return house ? HOUSE_WORDS[house] : 'No house word found :(';
}

app.listen(app.get('port'), function () {
  console.log('App running on port', app.get('port'));
});

Now if you type one of the family names defined in our code, the bot will respond with the family's words. So if you type 'stark', the bot will respond 'Winter is Coming'.

App Review

While you and other page admins can chat with the bot, to make it publicly available you have to submit the bot for review. This can be done from the app page's 'App Review' option. Fill out the form, and once it is approved, your bot will be accessible to all Facebook Messenger users.

Although our bot is working, it is not truly conversational: it only responds to a fixed set of commands. To build a smarter bot, we need natural language processing. Natural language processors convert natural language text or audio into a format that computers can understand. There are many services that allow us to write smarter bot engines, and one of them is Wit. Wit was acquired by Facebook in 2015.
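As a rough sketch of what that integration could look like, the function below sends the user's text to Wit's /message HTTP endpoint using the request module we already depend on. The token constant is a placeholder for your Wit server access token, and the exact fields in the response body depend on how you train your Wit app:

const WIT_SERVER_TOKEN = '<wit-server-access-token>'; // placeholder

// Send the user's text to Wit for interpretation
function interpretMessage(text, callback) {
  request({
    url: 'https://api.wit.ai/message',
    qs: { q: text },
    headers: { 'Authorization': `Bearer ${WIT_SERVER_TOKEN}` },
    json: true
  }, function (error, response, body) {
    if (error) {
      return callback(error);
    }
    // body contains the intents/entities Wit extracted from the text
    callback(null, body);
  });
}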
You can read more about Wit and how to integrate it with your bot on the Wit.ai site. I hope you enjoyed this post. If you have built a bot on Facebook or any other platform, please share it with us in the comments section below.

Author: Amit Kothari is a full-stack software developer based in Melbourne, Australia. He has 10+ years of experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks like React and AngularJS, and backend microservices / REST APIs in Java. He is passionate about lean software development and continuous delivery.

Kotlin Basics

Packt
16 Nov 2016
7 min read
In this article by Stephen Samuel and Stefan Bocutiu, the authors of the book Programming Kotlin, it's time to discover the fundamental building blocks of Kotlin. This article will cover the basic constructs of the language, such as defining variables, control flow syntax, type inference, and smart casting, as well as its basic types and their hierarchy. (For more resources related to this topic, see here.)

For those coming from a Java background, this article will also highlight some of the key differences between Kotlin and Java and how Kotlin's language features are able to exist on the JVM. For those who are not existing Java programmers, those differences can be safely skipped.

vals and vars

Kotlin has two keywords for declaring variables: val and var. A var is a mutable variable, one that can be changed to another value by reassigning it. This is equivalent to declaring a variable in Java:

var name = "kotlin"

Alternatively, a var can be declared first and initialized later:

var name: String
name = "kotlin"

Variables defined with var can be reassigned, since they are mutable:

var name = "kotlin"
name = "more kotlin"

The val keyword is used to declare a read-only variable. This is equivalent to declaring a final variable in Java. A val must be initialized when created, since it cannot be changed later:

val name = "kotlin"

A read-only variable does not mean the instance itself is automatically immutable. The instance may still allow its member variables to be changed via functions or properties, but the variable itself cannot change its value or be reassigned to another value.

Type inference

Did you notice in the previous section that the type of the variable was not included when it was initialized? This is different from Java, where the type of a variable must always accompany its declaration. Even though Kotlin is a strongly typed language, we don't always need to declare types explicitly. The compiler can attempt to figure out the type of an expression from the information included in it. A simple val is an easy case for the compiler, because the type is clear from the right-hand side. This mechanism is called type inference. It reduces boilerplate while keeping the type safety we expect of a modern language.

Values and variables are not the only places where type inference can be used. It can also be used in closures, where the type of the parameter(s) can be inferred from the function signature, and in single-line functions, where the return type can be inferred from the expression in the function body, as this example demonstrates:

fun plusOne(x: Int) = x + 1

Sometimes it is helpful to declare the type explicitly, if the type inferred by the compiler is not exactly what you want:

val explicitType: Number = 12.3
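Note that inference does not weaken the type system: once a type has been inferred for a var, it is fixed. A quick sketch of what the compiler will and will not accept:

var count = 10        // inferred as Int
count = 11            // fine: still an Int
// count = "eleven"   // compile error: type mismatch

val price = 9.99      // inferred as Double
val label: String = "total"  // explicit type, matching the value on the right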
Basic types

One of the big differences between Kotlin and Java is that in Kotlin, everything is an object. If you come from a Java background, then you will already be aware that Java has special primitive types, which are treated differently from objects: they cannot be used as generic types, do not support method/function calls, and cannot be assigned null. An example is the boolean primitive type. Java introduced wrapper objects to offer a workaround, in which primitive types are wrapped in objects, so that java.lang.Boolean wraps a boolean in order to smooth over the distinction. Kotlin removes this necessity entirely from the language by promoting the primitives to full objects.

Whenever possible, the Kotlin compiler will map the basic types back to JVM primitives for performance reasons. However, the values must sometimes be boxed, such as when the type is nullable or when it is used in generics. Two different values that are boxed might not use the same instance, so referential equality is not guaranteed on boxed values.

Numbers

The built-in number types are as follows:

Type    Width (bits)
long    64
int     32
short   16
byte    8
double  64
float   32

To create a number literal, use one of the following forms:

val int = 123
val long = 123456L
val double = 12.34
val float = 12.34F
val hexadecimal = 0xAB
val binary = 0b01010101

You will notice that a long value requires the suffix L and a float the suffix F. The double type is used as the default for floating point numbers, and int for integral numbers. Hexadecimal and binary literals use the prefixes 0x and 0b respectively.

Kotlin does not support the automatic widening of numbers, so conversions must be invoked explicitly. Each number type has functions that convert the value to one of the other number types:

val int = 123
val long = int.toLong()

val float = 12.34F
val double = float.toDouble()

The full set of methods for conversions between types is as follows: toByte(), toShort(), toInt(), toLong(), toFloat(), toDouble(), toChar().

Unlike Java, there are no built-in bitwise operators; there are named functions instead. These can be invoked like operators (except inv, which is called as a function):

val leftShift = 1 shl 2
val rightShift = 1 shr 2
val unsignedRightShift = 1 ushr 2
val and = 1 and 0x00001111
val or = 1 or 0x00001111
val xor = 1 xor 0x00001111
val inv = 1.inv()

Booleans

Booleans are rather standard and support the usual negation, conjunction, and disjunction operations. Conjunction and disjunction are lazily evaluated: if the result is already determined by the left-hand side, the right-hand side will not be evaluated:

val x = 1
val y = 2
val z = 2
val isTrue = x < y && x < z
val alsoTrue = x == y || y == z

Chars

Chars represent a single character. Character literals use single quotes, such as 'a' or 'Z'. Chars also support escaping for the following characters: \t, \b, \n, \r, \', \", \\, and \$. All Unicode characters can be represented using the respective Unicode number, like so: '\u1234'. Note that the char type is not treated as a number, unlike in Java.

Strings

Just as in Java, strings are immutable. String literals can be created using double or triple quotes. Double quotes create an escaped string; in an escaped string, special characters such as newline must be escaped:

val string = "string with \n new line"

Triple quotes create a raw string. In a raw string, no escaping is necessary, and all characters can be included:

val rawString = """
raw string is super useful for
strings that span many lines
"""

Strings also provide an iterator function, so they can be used in a for loop, as the next snippet shows.
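For example, printing each character of a string is a plain for loop over the string itself:

val word = "kotlin"
for (ch in word) {
    println(ch)
}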
Arrays

In Kotlin, we can create an array using the arrayOf() library function:

val array = arrayOf(1, 2, 3)

Alternatively, we can create an array from an initial size and a function that is used to generate each element:

val perfectSquares = Array(10, { k -> k * k })

Unlike Java, arrays are not treated specially by the language; they are regular collection classes. Instances of Array provide an iterator function and a size function, as well as get and set functions. The get and set functions are also available through bracket syntax, as in many C-style languages:

val element1 = array[0]
val element2 = array[1]
array[2] = 5

To avoid boxing types that will ultimately be represented as primitives in the JVM, Kotlin provides alternative array classes that are specialized for each of the primitive types. This allows performance-critical code to use arrays as efficiently as it would in plain Java. The provided classes are ByteArray, CharArray, ShortArray, IntArray, LongArray, BooleanArray, FloatArray, and DoubleArray.

Comments

Comments in Kotlin will come as no surprise to most programmers, as they are the same as in Java, JavaScript, and C, among other languages. Both block comments and line comments are supported:

// line comment

/*
A block comment
can span many lines
*/

Packages

Packages allow us to split code into namespaces. Any file may begin with a package declaration:

package com.packt.myproject

class Foo

fun bar(): String = "bar"

The package name is used to give us the fully qualified name (FQN) of a class, object, interface, or function. In the previous example, the Foo class has the FQN com.packt.myproject.Foo, and the top-level function bar has the FQN com.packt.myproject.bar.

Summary

In Kotlin, everything is an object in the sense that we can call member functions and properties on any variable. Some types are built in, because their implementation is optimized, but to the user they look like ordinary classes. In this article, we described most of these types: numbers, characters, booleans, and arrays.

Resources for Article: Further resources on this subject: Responsive Applications with Asynchronous Programming [Article] Asynchronous Programming in F# [Article] Go Programming Control Flow [Article]


Setting up Development Environment for Android Wear Applications

Packt
16 Nov 2016
8 min read
"Give me six hours to chop down a tree and I will spend the first four sharpening the axe." -Abraham Lincoln In this article by Siddique Hameed, author of the book, Mastering Android Wear Application Development, they have discussed the steps, topics and process involved in setting up the development environment using Android Studio. If you have been doing Android application development using Android Studio, some of the items discussed here might already be familiar to you. However, there are some Android Wear platform specific items that may be of interest to you. (For more resources related to this topic, see here.) Android Studio Android Studio Integrated Development Environment (IDE) is based on IntelliJ IDEA platform. If you had done Java development using IntelliJ IDEA platform, you'll be feeling at home working with Android Studio IDE. Android Studio platform comes bundled with all the necessary tools and libraries needed for Android application development. If this is the first time you are setting up Android Studio on your development system, please make sure that you have satisfied all the requirements before installation. Please refer developer site (http://developer.android.com/sdk/index.html#Requirements) for checking on the items needed for the operating system of your choice. Please note that you need at least JDK version 7 installed on your machine for Android Studio to work. You can verify your JDK's version that by typing following commands shown in the following terminal window: If your system does not meet that requirement, please upgrade it using the method that is specific to your operating system. Installation Android Studio platform includes Android Studio IDE, SDK Tools, Google API Libraries and systems images needed for Android application development. Visit the http://developer.android.com/sdk/index.html URL for downloading Android Studio for your corresponding operating system and following the installation instruction. Git and GitHub Git is a distributed version control system that is used widely for open-source projects. We'll be using Git for sample code and sample projects as we go along the way. Please make sure that you have Git installed on your system by typing the command as shown in the following a terminal window. If you don't have it installed, please download and install it using this link for your corresponding operating system by visting https://git-scm.com/downloads. If you are working on Apple's Macintosh OS like El Capitan or Yosemite or Linux distributions like Ubuntu, Kubuntu or Mint, chances are you already have Git installed on it. GitHub (http://github.com) is a free and popular hosting service for Git based open-source projects. They make checking out and contributing to open-source projects easier than ever. Sign up with GitHub for a free account if you don't have an account already. We don't need to be an expert on Git for doing Android application development. But, we do need to be familiar with basic usages of Git commands for working with the project. Android Studio comes by default with Git and GitHub integration. It helps importing sample code from Google's GitHub repository and helps you learn by checking out various application code samples. Gradle Android application development uses Gradle (http://gradle.org/)as the build system. It is used to build, test, run and package the apps for running and testing Android applications. Gradle is declarative and uses convention over configuration for build settings and configurations. 
Android SDK packages

When you install Android Studio, it doesn't include all the Android SDK packages that are needed for development. The Android SDK separates tools, platforms, and other components and libraries into packages that can be downloaded as needed using the Android SDK Manager. Before we start creating an application, we need to add some required packages to the Android SDK. Launch the SDK Manager from Android Studio via Tools | Android | SDK Manager.

Let's quickly go over a few items from what's displayed in the preceding screenshot. As you can see, the Android SDK's location is /opt/android-sdk on my machine; it may very well be different on your machine, depending on what you selected during the Android Studio installation setup. The important point to note is that the Android SDK is installed in a different location from Android Studio's path (/Applications/Android Studio.app/). This is considered good practice, because the Android SDK installation remains unaffected by a new installation or upgrade of Android Studio, and vice versa.

On the SDK Platforms tab, select some recent Android SDK versions, such as Android 6.0, 5.1.1, and 5.0.1. Depending on the Android versions you are planning to support in your wearable apps, you can select other, older Android versions. If you check the Show Package Details option at the bottom right, the SDK Manager will show all the packages that will be installed for a given Android SDK version. To be on the safe side, select all the packages. As you may have noticed already, the Android Wear ARM and Intel system images are included in the package selection.

Now, when you click on the SDK Tools tab, please make sure the following items are selected:

Android SDK Build Tools
Android SDK Tools 24.4.1 (latest version)
Android SDK Platform-Tools
Android Support Repository, rev 25 (latest version)
Android Support Library, rev 23.1.1 (latest version)
Google Play services, rev 29 (latest version)
Google Repository, rev 24 (latest version)
Intel x86 Emulator Accelerator (HAXM installer), rev 6.0.1 (latest version)
Documentation for Android SDK (optional)

Please do not change anything on the SDK Update Sites tab; keep the update sites as they were configured by default. Clicking on OK will take some time, downloading and installing all the selected components and packages.

Android Virtual Device

Android Virtual Devices enable us to test our code using Android emulators. They let us pick and choose the Android system target versions and form factors needed for testing. Launch the Android Virtual Device Manager via Tools | Android | AVD Manager. In the Android Virtual Device Manager window, click on the Create New Virtual Device button at the bottom left, proceed to the next screen, and select the Wear category. Select Marshmallow API Level 23 on x86 and leave everything else as default, as shown in the following screenshot. Note that the latest Android version is Marshmallow, API level 23, at the time of this writing; it may or may not be the latest version while you are reading this article. Feel free to select the latest version available at that time.
Also, if you'd like to support or test earlier Android versions, feel free to select them in that screen. After the virtual device is created successfully, you should see it listed in the Android Virtual Devices list, as shown in the following screenshot. Although it is not required to use a real Android Wear device during development, it can sometimes be more convenient and faster to develop on a real physical device.

Let's build a skeleton App

Since we have all the components and configurations needed for building a wearable app, let's build a skeleton app and test out what we have so far. From Android Studio's Quick Start menu, click on the Import an Android code sample tab, and select the Skeleton Wearable App from the Wearable category, as shown in the following screenshot. Click Next and select your preferred project location. As you can see, the skeleton project is cloned from Google's sample code repository on GitHub. Clicking on the Finish button will pull the source code, and Android Studio will compile and build the code and get it ready for execution. The following screenshot indicates that the Gradle build finished successfully without any errors. Click on the Run configuration to run the app. When the app starts running, Android Studio will prompt us to select the deployment target; we can select the emulator we created earlier and click OK. After the code compiles and is uploaded to the emulator, the main activity of the Skeleton App is launched, as shown below. Clicking on SHOW NOTIFICATION shows the notification. Clicking on START TIMER starts the timer, which runs for five seconds, and clicking on Finish Activity closes the activity and takes the emulator to the home screen.

Summary

We discussed the process involved in setting up the Android Studio development environment, covering the installation instructions, requirements, and the SDK tools, packages, and other components needed for Android Wear development. We also checked out the source code for the Skeleton Wearable App from Google's sample code repository and successfully ran and tested it on an Android device emulator.

Resources for Article: Further resources on this subject: Building your first Android Wear Application [Article] Getting started with Android Development [Article] The Art of Android Development Using Android Studio [Article]