
How-To Tutorials


Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article, written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

Choosing the best place for an implementation
Automation of routine tasks
Creating the application structure by using code generation tools

Introduction

In this article you will learn that, when starting new projects, developers often face questions such as which components to create, where to place them in the application structure, what each component should implement, and what naming convention to follow. Creating custom components isn't actually a difficult matter; we will sort it out in this article by creating our own component, which will display different menus on your site depending on where you are in the application.

From one project to another, a developer's work tends to repeat itself. This holds true for tasks such as creating the project structure, configuration, data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton with a single command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code.

Developers often come up against situations where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful to do everything manually, but all of us would like to reduce repetitive tasks. Phalcon tries to help by providing an easy and at the same time flexible code generation tool named Phalcon Developer Tools. These tools simplify the creation of CRUD components for a regular application, so you can produce working code in a matter of seconds without writing it yourself.

Often, when creating an application using a framework, we need to extend or add functionality to the framework's components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use an additional layer between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code that is called in turn on each request. We will look into the main principles of creating and using middleware in your application. We will not get into each solution in depth; instead, we will work with tasks that are common to most projects, and with implementations extending Phalcon.

Choosing the best place for an implementation

Let's say you want to add a custom component that changes your site's navigation menu. For example, when your navigation menu shows a Sign In link and you are logged in, that link needs to change to Sign Out. You then ask yourself where the best place in the project is to put the code, where to place the files, how to name the classes, and how to make the autoloader find them.

Getting ready…

For successful implementation of this recipe you must have your application deployed.
By this we mean that you need a web server installed and configured to handle requests to your application; the application must be able to receive requests and have the necessary components implemented, such as controllers, views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article.

How to do it…

Follow these steps to complete this recipe:

Create the app/library/ directory, if you haven't got one, where user components will be stored. Next, create the Elements (app/library/Elements.php) component. This class extends Phalcon\Mvc\User\Component. Generally, this is not necessary, but it helps you get access to application services quickly. The contents of Elements should be:

<?php

namespace Library;

use Phalcon\Mvc\User\Component;
use Phalcon\Mvc\View\Simple as View;

class Elements extends Component
{
    public function __construct()
    {
        // ...
    }

    public function getMenu()
    {
        // ...
    }
}

Now we register this class in the Dependency Injection container. We use a shared instance in order to prevent a new instance being created on each service resolution:

$di->setShared('elements', function () {
    return new \Library\Elements();
});

If your session service is not initialized yet, it's time to do it in your bootstrap file. We use a shared instance here as well:

$di->setShared('session', function () {
    $session = new \Phalcon\Session\Adapter\Files();
    $session->start();
    return $session;
});

Create the templates directory within your views directory (views/templates). Then you need to tell the class autoloader about the new namespace we have just introduced. Let's do it in the following way:

$loader->registerNamespaces([
    // The APP_PATH constant should point
    // to the project's root
    'Library' => APP_PATH . '/apps/library/',
    // ...
]);

Add the following code right after the body tag in the main layout of your application:

<div class="container">
  <div class="navbar navbar-inverse">
    <div class="container-fluid">
      <div class="navbar-header">
        <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#blog-top-menu" aria-expanded="false">
          <span class="sr-only">Toggle navigation</span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
          <span class="icon-bar"></span>
        </button>
        <a class="navbar-brand" href="#">Blog 24</a>
      </div>
      <?php echo $this->elements->getMenu(); ?>
    </div>
  </div>
</div>

Next, we need to create a template for displaying your top menu. Let's create it in views/templates/topMenu.phtml:

<div class="collapse navbar-collapse" id="blog-top-menu">
  <ul class="nav navbar-nav">
    <li class="active">
      <a href="#">Home</a>
    </li>
  </ul>
  <ul class="nav navbar-nav navbar-right">
    <li>
      <?php if ($this->session->get('identity')): ?>
        <a href="#">Sign Out</a>
      <?php else: ?>
        <a href="#">Sign In</a>
      <?php endif; ?>
    </li>
  </ul>
</div>

Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the constructor:

public function __construct()
{
    $this->simpleView = new View();
    $this->simpleView->setDI($this->getDI());
}

And finally, implement the getMenu method as follows:

public function getMenu()
{
    $this->simpleView->setViewsDir($this->view->getViewsDir());
    return $this->simpleView->render('templates/topMenu');
}

Open the main page of your site to ensure that your top menu is rendered.
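As a quick illustration of using the registered service, the component can now be reached anywhere the DI container is available. The controller below is a hypothetical sketch, not part of the recipe; IndexController and indexAction are placeholder names:

<?php

use Phalcon\Mvc\Controller;

class IndexController extends Controller
{
    public function indexAction()
    {
        // Resolve the shared 'elements' service registered in the bootstrap;
        // inside controllers and components the shorthand $this->elements
        // resolves the same shared instance
        $menu = $this->getDI()->getShared('elements')->getMenu();
        // ... the rendered menu HTML can then be passed to a view
    }
}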
How it works…

The main idea of our component is to generate a top menu and display the correct menu option depending on the situation, that is, whether the user is authorized or not.

We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the library directory and placing a new class there, we should tell the autoloader about the new namespace. This is exactly what we have done. However, we should take note of one important peculiarity: if you want quick access to your components, even in HTML templates, such as $this->elements, then you should put the components in the DI container. Therefore, we put our component, Library\Elements, in the container under the name elements. Since our component inherits Phalcon\Mvc\User\Component, we are able to access all registered application services just by their names. For example, the instruction $this->view could be written in the long form $this->getDI()->getShared('view'), but the first one is obviously more concise.

Although not strictly necessary, for application structure purposes it is better to use a separate directory for views that are not tied to specific controllers and actions. In our case, the views/templates directory serves this purpose. We create an HTML template for menu rendering and place it in views/templates/topMenu.phtml. When the getMenu method is called, our component renders the topMenu.phtml view and returns its HTML. In getMenu, we take the current path of our views and set it on the Phalcon\Mvc\View\Simple component created earlier in the constructor. In the topMenu view we access the session component, which we placed in the DI container earlier. When generating the menu, we check whether the user is authorized. If they are, we use the Sign Out menu item; otherwise, we display the menu item with an invitation to Sign In.

Automation of routine tasks

The Phalcon project provides you with a great tool named Developer Tools. It helps automate repetitive tasks by generating code for components as well as a project skeleton. Most of your application's components can be created with just one command. In this recipe, we will consider the Developer Tools installation and configuration in depth.

Getting Ready…

Before you begin work on this recipe, you should have a DBMS configured and a web server installed and configured to handle requests from your application. You may optionally configure a virtual host for your application to receive and handle requests. You should be able to open your newly created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where {your-host-here} is the name of your host. You should have Git installed, too.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows are similar; you can find a link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary: the choice of a tool for creating tables in your database, as well as the particular DBMS, is yours. Note that the syntax for creating a table in DBMSs other than MySQL may vary.
How to do it…

Follow these steps to complete this recipe:

Clone Developer Tools into your home directory:

git clone git@github.com:phalcon/phalcon-devtools.git devtools

Go to the newly created devtools directory, run the ./phalcon.sh command, and wait for a message about successful installation:

$ ./phalcon.sh
Phalcon Developer Tools Installer
Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root.
Installing Devtools...
Working dir is: /home/user/devtools
Generating symlink...
Done. Devtools installed!

Run the phalcon command without arguments to see the available command list and your current Phalcon version:

$ phalcon
Phalcon DevTools (3.0.0)
Available commands:
  commands   (alias of: list, enumerate)
  controller (alias of: create-controller)
  model      (alias of: create-model)
  module     (alias of: create-module)
  all-models (alias of: create-all-models)
  project    (alias of: create-project)
  scaffold   (alias of: create-scaffold)
  migration  (alias of: create-migration)
  webtools   (alias of: create-webtools)

Now, let's create our project. Go to the folder where you plan to create the project and run the following command:

$ phalcon project myapp simple

Open the website you have just created with the previous command in your browser. You should see a message about the successful installation.

Create a database for your project:

mysql -e 'CREATE DATABASE myapp' -u root -p

You will need to configure the application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Pay attention to the baseUri parameter if you have not configured a virtual host for your project: the value of this parameter must be / or /myapp/. As a result, your configuration file should look like this:

<?php

use Phalcon\Config;

defined('APP_PATH') || define('APP_PATH', realpath('.'));

return new Config([
    'database' => [
        'adapter'  => 'Mysql',
        'host'     => 'localhost',
        'username' => 'root',
        'password' => '',
        'dbname'   => 'myapp',
        'charset'  => 'utf8',
    ],
    'application' => [
        'controllersDir' => APP_PATH . '/app/controllers/',
        'modelsDir'      => APP_PATH . '/app/models/',
        'migrationsDir'  => APP_PATH . '/app/migrations/',
        'viewsDir'       => APP_PATH . '/app/views/',
        'pluginsDir'     => APP_PATH . '/app/plugins/',
        'libraryDir'     => APP_PATH . '/app/library/',
        'cacheDir'       => APP_PATH . '/app/cache/',
        'baseUri'        => '/myapp/',
    ]
]);

Now that you have configured database access, let's create a users table in your database and fill it with primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
  ('john@doe.com', 'John', 'Doe'),
  ('janie@doe.com', 'Janie', 'Doe');

After that, we need to create a new controller, UsersController. This controller must provide the main CRUD actions on the Users model and, where necessary, display data with the appropriate views.
Let's do it with just one command:

$ phalcon scaffold users

In your web browser, open the URL associated with your newly created resource and try to find one of the users from our database table at http://{your-host-here}/appname/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling).

Finally, open your project in your file manager to see the whole project structure created by Developer Tools:

+-- app
|   +-- cache
|   +-- config
|   +-- controllers
|   +-- library
|   +-- migrations
|   +-- models
|   +-- plugins
|   +-- schemas
|   +-- views
|       +-- index
|       +-- layouts
|       +-- users
+-- public
    +-- css
    +-- files
    +-- img
    +-- js
    +-- temp

How it works…

We installed Developer Tools with only two commands, git clone and ./phalcon.sh. This is all we need to start using this powerful code generation tool.

Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't represent anything outstanding in terms of features, but we have saved the time of creating the application structure manually. Developer Tools did that for us! If you examine your newly created project after this command completes, you will notice that the primary application configuration has also been generated, including the bootstrap file. The phalcon project command actually has additional options that we have not demonstrated in this recipe; we are focusing on the main commands. Enter the following to see all available project creation options:

$ phalcon project help

In the modern world, you can hardly find a web application that works without access to a database, and our application is no exception. We created a database for our application, then created a users table and filled it with primary data. Of course, we also need to supply our application, in the app/config/config.php file, with the database access parameters as well as the database name.

After successfully creating the database and table, we used the scaffold command to generate predefined code templates, particularly the Users controller with all the main CRUD actions, all the necessary views, and the Users model. As before, we used only one command to generate all those files.

Phalcon Developer Tools is equipped with a good number of useful tools. To see all the available options, you can use the command help. We took only a few minutes to create the first version of our application. Instead of spending time on repetitive tasks (such as creating the application skeleton), we can now use that time for more exciting ones. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving, and it becomes more featureful day by day. If you have any problems, you can always visit the project on GitHub at https://github.com/phalcon/phalcon-devtools and search for a solution.

There's more…

You can find more information on Phalcon Developer Tools installation for Windows and OS X at https://docs.phalconphp.com/en/latest/reference/tools.html. More detailed information on web server configuration can be found at https://docs.phalconphp.com/en/latest/reference/install.html.

Creating the application structure by using code generation tools

In the following recipe, we will discuss the available code generation tools that can be used to create a multi-module application. With them, we don't need to create the application structure and main components manually.
Getting Ready…

Before you begin, you need to have Git installed, as well as a DBMS (for example, MySQL, PostgreSQL, SQLite, and the like), the Phalcon PHP extension (usually named php5-phalcon), and a PHP extension that offers database connectivity support using PDO (for example, php5-mysql, php5-pgsql, or php5-sqlite, and the like). You also need to be able to create tables in your database.

To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go straight to the fourth step.

In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows are similar; you can find a link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary: the choice of a tool for creating tables in your database, as well as the particular DBMS, is yours. Note that the syntax for creating a table in DBMSs other than MySQL may vary.

How to do it…

Follow these steps to complete this recipe:

First, you need to decide where you will install Developer Tools. Suppose you are going to place Developer Tools in your home directory. Then go to your home directory and run the following command:

git clone git@github.com:phalcon/phalcon-devtools.git

Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems:

./phalcon.sh

Now that you have Developer Tools installed, browse to the directory where you intend to create your project and run the following command:

phalcon project blog modules

If there were no errors during the previous step, create a Help controller by running the following command:

phalcon controller Help --base-class=ControllerBase --namespace=Blog\Frontend\Controllers

Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction.

Open the database configuration of the Frontend module, blog/apps/frontend/config/config.php, and edit the database configuration according to your current environment. Enter the name of an existing database user, a password that has access to that database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now.

Now that you have configured database access, let's create a users table in your database.
Create the users table and fill it with primary data:

CREATE TABLE `users` (
  `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
  `email` VARCHAR(128) NOT NULL,
  `first_name` VARCHAR(64) DEFAULT NULL,
  `last_name` VARCHAR(64) DEFAULT NULL,
  `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `users_email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
  ('john@doe.com', 'John', 'Doe'),
  ('janie@doe.com', 'Janie', 'Doe');

Next, let's create the controller, views, layout, and model by using the scaffold command:

phalcon scaffold users --ns-controllers=Blog\Frontend\Controllers --ns-models=Blog\Frontend\Models

Open the newly generated UsersController, located in the apps/frontend/controllers/UsersController.php file, to ensure you have generated all the actions needed for user search, editing, creation, display, and deletion. To check that all actions work as designed (if you have a web server installed and configured for this recipe), you can go to http://{your-server}/users/index. While you are at it, you can make sure that the required Users model has been created in the apps/frontend/models/Users.php file, all the required views have been created in the apps/frontend/views/users folder, and the users layout has been created in the apps/frontend/views/layouts folder. If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown.

How it works…

In the world of programming, code generation is designed to lessen the burden of manually writing repeated code by using predefined code templates, and the Phalcon framework provides excellent code generation tools that come with Phalcon Developer Tools.

We started with the installation of Phalcon Developer Tools. Note that if you already have Developer Tools installed, you should skip the installation steps. Next, we generated a fully functional MVC application that implements the multi-module principle. One command is enough to get a working application at once: we save ourselves the trouble of creating the application directory structure, the bootstrap file, and all the other required files, and of setting up the initial application structure. To that end, we used only one command. It's really great, isn't it?

Our next step was creating a controller. In our example, we used HelpController, which simply demonstrates this approach to creating controllers. Next, we created the users table in our database and filled it with data. With that done, we used a powerful tool for generating predefined code templates, called Scaffold. Using only one command in the Terminal, we generated the UsersController controller with all the necessary actions and appropriate views. Besides this, we got the Users model and the required layout.

If you have a web server configured, you can check out the work of Developer Tools at http://{your-server}/users/index. When we use the scaffold command, the generator detects the presence and names of our table's fields. Based on this data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely: you can search for users, edit and delete them, create new users, and view them.
And all of this was made possible with one command.

We have discussed only some of the features of code generation; Phalcon Developer Tools actually has many more. For help on the available commands, you can run the phalcon command without arguments.

There's more…

For more detailed information on the installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on scaffolding, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary

In this article, you learned about the automation of routine tasks and creating the application structure.


Client-Side Validation with the jQuery Validation Plugin

Jabran Rafique
23 Nov 2016
9 min read
Form validation is a critical part of data collection: correct validation makes sure that forms collect the correct and expected data. Validation must be done on the server side and, optionally, on the client side. Server-side validation is robust and secure because a user cannot access and modify its behaviour, whereas client-side validation can be tampered with easily. Applications that rely entirely on client-side validation and bypass it on the server side are more open to security threats and data exploits.

Client-side validation is about giving users a better experience, so that they don't have to go through everything on the page or submit a whole form only to find out that one or even a few entries are incorrect; instead, users are alerted instantly so they can correct mistakes in place. Modern browsers enforce client-side validation by default for form fields in an HTML5 document with certain attributes (that is, required). However, there are cross-browser limitations on how these validation messages can be styled, positioned, and labeled.

JavaScript plays a vital role in client-side form validation, and there are many different ways to validate a form using it. JavaScript libraries such as jQuery also provide a number of ways to validate a form. If a project is already using any such library, it is easier to utilize the library's form validation methods. jQuery Validation is a jQuery plugin that makes it convenient to validate an HTML form. We will take a quick look at it below.

Installation

There are a number of ways to install this library:

Download directly from GitHub
Use CDN hosted files
Use bower: $ bower install jquery-validation --save

For the simplicity of this tutorial's demo, we will use CDN hosted files.

Usage

Now that we have installed jQuery Validation, we start by adding it to our HTML document. Here is the HTML for our demo app:

<!DOCTYPE html>
<html>
<head>
  <title>Learn jQuery Validation</title>
</head>
<body>
  <div class="container">
    <h1>Learn jQuery Validation</h1>
    <form method="post">
      <div>
        <label for="first-name">First Name:</label>
        <input type="text" id="first-name" name="first_name">
      </div>
      <div>
        <label for="last-name">Last Name:</label>
        <input type="text" id="last-name" name="last_name">
      </div>
      <div>
        <label for="email-address">Email:</label>
        <input type="text" id="email-address" name="email_address">
      </div>
      <div>
        <button type="submit" id="submit-cta">Submit</button>
      </div>
    </form>
  </div>
  <script src="https://code.jquery.com/jquery-2.2.4.min.js" integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" crossorigin="anonymous"></script>
  <script src="//cdnjs.cloudflare.com/ajax/libs/jquery-validate/1.15.0/jquery.validate.min.js"></script>
</body>
</html>

All jQuery Validation plugin methods are available as soon as the page is loaded. In the following example, we add the required attribute to our form fields, which will be used by the jQuery Validation plugin for basic validation:

...
<form method="post">
  <div>
    <label for="first-name">First Name:</label>
    <input type="text" id="first-name" name="first_name" required>
  </div>
  <div>
    <label for="last-name">Last Name:</label>
    <input type="text" id="last-name" name="last_name" required>
  </div>
  <div>
    <label for="email-address">Email:</label>
    <input type="text" id="email-address" name="email_address" required>
  </div>
  <div>
    <button type="submit" id="submit-cta">Submit</button>
  </div>
</form>
...

On its own, this simply enables HTML5 browser validation in most modern browsers.
Adding the following JavaScript line will activate the jQuery Validation plugin:

$('form').validate();

validate() is a special method exposed by the jQuery Validation plugin, and it can help customise our validation further; we will discuss this method in more detail later in this article. Here it is in action: see the Pen "jQuery Validation – Part I" by Jabran Rafique (@jabranr) on CodePen.

Submitting the form will validate it, and empty fields will return an error message. The generic error messages you see ("This field is required.") are set as default messages by the plugin. These can easily be changed by adding a data-msg-required attribute to the form field element, as shown in the following example:

...
<form method="post">
  <div>
    <label for="first-name">First Name:</label>
    <input type="text" id="first-name" name="first_name" required data-msg-required="First name is required.">
  </div>
  <div>
    <label for="last-name">Last Name:</label>
    <input type="text" id="last-name" name="last_name" required data-msg-required="Last name is required.">
  </div>
  <div>
    <label for="email-address">Email:</label>
    <input type="text" id="email-address" name="email_address" required data-msg-required="Email is required.">
  </div>
  <div>
    <button type="submit" id="submit-cta">Submit</button>
  </div>
</form>
...

Here it is in action: see the Pen "jQuery Validation – Part II" by Jabran Rafique (@jabranr) on CodePen.

Similarly, we can add other types of validation to our form fields using attributes such as minlength, maxlength, and so on, as shown in the following examples:

...
<div>
  <label for="phone">Phone:</label>
  <!-- With default error messages -->
  <input type="text" id="phone" name="phone" minlength="11" maxlength="15">
  <!-- With custom error messages -->
  <input type="text" id="phone" name="phone" minlength="11" maxlength="15" data-msg-min="Enter minimum of 11 digits." data-msg-max="Enter maximum of 15 digits.">
</div>
...

The alternative way of setting rules and messages is to pass these settings as arguments to the validate() method. Here are the above examples rewritten using the validate() method. First, the HTML for the form, with no custom validation messages and, optionally, no validation attributes:

...
<form method="post">
  <div>
    <label for="first-name">First Name:</label>
    <input type="text" id="first-name" name="first_name">
  </div>
  <div>
    <label for="last-name">Last Name:</label>
    <input type="text" id="last-name" name="last_name">
  </div>
  <div>
    <label for="email-address">Email:</label>
    <input type="text" id="email-address" name="email_address">
  </div>
  <div>
    <button type="submit" id="submit-cta">Submit</button>
  </div>
</form>
...

And here is the JavaScript that enables validation and sets custom validation messages for this form:

$('form').validate({
    rules: {
        'first_name': 'required',
        'last_name': 'required',
        'email_address': {
            required: true,
            email: true
        }
    },
    messages: {
        'first_name': 'First name is required.',
        'last_name': 'Last name is required.',
        'email_address': {
            required: 'Email is required.',
            email: 'A valid email is required.'
        }
    }
});

You may have noticed a new validation constraint in the above code: email: true. This enables checks for a valid email address. There are many more built-in constraints in the jQuery Validation plugin; you can find them in the official documentation. validate() also has other properties that can be updated to customise the behaviour of the jQuery Validation plugin completely; some of them are listed after the next example.
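It is also worth knowing that you can register entirely custom rules with jQuery.validator.addMethod. The following is a small illustrative sketch, not part of the article's demo; the rule name ukPostcode and the regular expression are assumptions made for the example:

// Define a custom rule; the third argument is the default error message
jQuery.validator.addMethod('ukPostcode', function (value, element) {
    // this.optional(element) keeps the rule from failing on empty optional fields
    return this.optional(element) || /^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$/i.test(value);
}, 'Please enter a valid UK postcode.');

// The custom rule can then be used like any built-in one
$('form').validate({
    rules: {
        'postcode': { ukPostcode: true }
    }
});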
The customisable settings include the following:

errorClass: Set a custom CSS class for the error message element
validClass: Set a custom CSS class for a validated element
errorPlacement(): Define where to put error messages
highlight(): Define a method to highlight error/validated states
unhighlight(): Define a method to unhighlight error/validated states

The default plugin methods can also be overridden for custom implementations using the Validator object.

Additional Methods

Sooner or later in every other project, a requirement for some advanced validation arises. Think of date-of-birth fields: we may want to validate all three fields as one, and it is not possible to do so using the default methods of this plugin. There are optional additional methods to include if such a requirement appears. The additional methods can either be included as a script via a CDN link, or each of them can be included individually when installing with bower.

Contribute

Contributions to any open source project make it more robust and more useful, with bug fixes and new features. Just like any other open source project, the jQuery Validation plugin welcomes contributions. To contribute, head over to the GitHub project and fork the repository to work on an existing issue, or start a new one. Don't forget to read the contribution guidelines before starting.

All of the demos in this tutorial can be found at CodePen as a Collection. We hope this quick getting-started guide makes it easier to use the jQuery Validation plugin for better form validation in your next project.

Author

Jabran Rafique is a London-based web engineer. He currently works as a front-end web developer at Rated People. He has a master's in computer science from Staffordshire University and more than 6 years of professional experience in web systems. He has also served as a regional lead and advocate at Google Map Maker since 2008, where he contributed to building digital maps in order to make them available to millions of people worldwide, as well as organizing and speaking at international events. He writes on his website/blog about different things, and shares code on GitHub and thoughts on Twitter.


How to create a Breakout game with Godot Engine – Part 2

George Marques
23 Nov 2016
8 min read
In part one of this article you learned how to set up a project and create a basic scene with input and scripting. By now you should grasp the basic concepts of the Godot Engine, such as nodes and scenes. Here we're going to complete the game up to a playable demo.

Game scene

Let's create a new scene to hold the game itself. Click on the menu Scene > New Scene and add a Node2D as its root. You may feel tempted to resize this node to occupy the scene, but you shouldn't: if you resize it, you'll be changing the scale and position, which will be reflected in the child nodes. We want the position and scale both to be (0, 0). Rename the root node to Game and save the scene as game.tscn. Go to the Project Settings and, in the Application section, set this as the main_scene option. This will make the Game scene run when the game starts.

Drag the paddle.tscn file from the FileSystem dock and drop it over the root Game node. This will create a new instance of the paddle scene. (It's also possible to click on the chain icon in the Scene dock and select a scene to instance.) You can then move the instanced paddle to the bottom of the screen, where it should stay during the game (use the guides in the editor as reference). Play the project and you can move the paddle with your keyboard.

If you find the movement too slow or too fast, you can select the Paddle node and adjust the Speed value in the Inspector, because it's an exported script variable. This is a great way to tweak the gameplay without touching the code. It also allows you to put multiple paddles in the game, each with its own speed, if you wish. To make this better, you can click the Debug Options button (the last one on the top center bar) and activate Sync Scene Changes. This will reflect the changes made in the editor in the running game, so you can set the speed without having to stop and play again.

The ball

Let's create a moving object to interact with. Make a new scene and add a RigidBody2D as the root. Rename it to Ball and save the scene as ball.tscn. The rigid body can be moved by the physics engine and interact with other physical objects, like the static body of the paddle. Add a Sprite as a child node and set the following image as its texture:

Ball

Now add a CollisionShape2D as a child of the Ball. Set its shape to a new CircleShape2D and adjust the radius to cover the image.

We need to adjust some of the Ball's properties so it behaves appropriately for this game. Set the Mode property to Character to avoid rotation. Set the Gravity Scale to 0 so it doesn't fall. Set the Friction to 0 and Damp Override > Linear to 0 to avoid the loss of momentum. Finally, set the Bounce property to 1, as we want the ball to bounce fully when touching the paddle. With that done, add the following script to the ball so that it starts moving when the scene plays:

extends RigidBody2D

# Export the ball speed to be changed in the editor
export var ball_speed = 150.0

func _ready():
    # Apply the initial impulse to the ball so it starts moving
    # It uses a bit of vector math to make the speed consistent
    apply_impulse(Vector2(), Vector2(1, 1).normalized() * ball_speed)

Walls

Going back to the Game scene, instance the ball as a child of the root node. We're going to add walls so the ball doesn't get lost in the world. Add a Node2D as a child of the root and rename it to Walls. This will be the root for the wall nodes, to keep things organized.
As a child of that, add four StaticBody2D nodes, each with its own rectangular collision shape covering one border of the screen. You'll end up with something like the following:

Walls

By now you can play the game a little bit and use the paddle to deflect the ball, or leave it to bounce off the bottom wall.

Bricks

The last piece of this puzzle is the bricks. Create a new scene, add a StaticBody2D as the root, and rename it to Brick. Save the scene as brick.tscn. Add a Sprite as its child and set the texture to the following image:

Brick

Add a CollisionShape2D and set its shape to a rectangle covering the whole image. Now add the following script to the root to make a little bit of magic happen:

# The tool keyword makes the script run in the editor
# In this case you can see the color change in the editor itself
tool
extends StaticBody2D

# Export the color variable and a setter function to pass it to the sprite
export (Color) var brick_color = Color(1, 1, 1) setget set_color

func _ready():
    # Set the color when first entering the tree
    set_color(brick_color)

# This is a setter function and will be called whenever the brick_color variable is set
func set_color(color):
    brick_color = color
    # We make sure the node is inside the tree, otherwise it cannot access its children
    if is_inside_tree():
        # Change the modulate property of the sprite to change its color
        get_node("Sprite").set_modulate(color)

This will allow you to set the color of the brick using the Inspector, removing the need to make a scene for each brick color. To make things easier to see, you can click the eye icon beside the CollisionShape2D to hide it.

Hide CollisionShape2D

The last thing to be done is to make the brick disappear when touched by the ball. Using the Node dock, add the group brick to the root Brick node. Then go back to the Ball scene and, again using the Node dock, but this time in the Signals section, double-click the body_enter signal. Click the Connect button with the default values. This will open the script editor with a new function. Replace it with this:

func _on_Ball_body_enter(body):
    # If the body just touched is a member of the "brick" group
    if body.is_in_group("brick"):
        # Mark it for deletion in the next idle frame
        body.queue_free()

Using the Inspector, change the Ball node to enable the Contact Monitor property and increase Contacts Reported to 1. This will make sure the signal is sent when the ball touches something.

Level

Make a new scene for the level. Add a Node2D as the root, rename it to Level 1, and save the scene as level1.tscn. Now instance a brick in the scene. Position it anywhere, set a color using the Inspector, and then duplicate it. You can repeat this process to make the level look the way you want. Using the Edit menu, you can set a grid with snapping to make it easier to position the bricks. Then go back to the Game scene and instance the level there as a child of the root. Play the game and you will finally see the ball bouncing around and destroying the bricks it touches.

Breakout Game

Going further

This is just a basic tutorial showing some of the fundamental aspects of the Godot Engine. The Node and Scene system, physics bodies, scripting, signals, and groups are very useful concepts, but not all that Godot has to offer. Once you get acquainted with them, it's easy to learn the other functions of the engine.

The finished game in this tutorial is just bare bones. There are many things you can do, such as adding a start menu, progressing through the levels as they are finished, and detecting when the player loses; one possible approach to the last of these is sketched below.
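As a rough illustration (this is not part of the tutorial, and the node name Ball is an assumption), you could replace the bottom wall with an Area2D that notices when the ball falls past the paddle, using the same body_enter signal as the brick handling above:

extends Area2D

func _ready():
    # Listen for any physics body entering this area
    connect("body_enter", self, "_on_body_enter")

func _on_body_enter(body):
    # Only react when the ball reaches the bottom of the screen
    if body.get_name() == "Ball":
        # Remove the ball and signal the loss; a real game would
        # decrement a life counter or show a game-over screen here
        body.queue_free()
        print("Game over!")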
Thankfully, Godot makes all those things very easy, and it should not take much effort to turn this into a complete game.

Author

George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping to solve bugs and add new features to the software, while also providing solutions to the community for the questions they have.


Designing a User Interface

Packt
23 Nov 2016
7 min read
In this article by Marcin Jamro, the author of the book Windows Application Development Cookbook, we will see how to add a button to your application.

Introduction

You already know how to start your adventure of developing universal applications for smartphones, tablets, and desktops running the Windows 10 operating system. As the next step, it is crucial to learn how to design particular pages within the application to provide the user with a convenient interface that works smoothly on screens with various resolutions. Fortunately, designing the user interface is really simple using the XAML language and Microsoft Visual Studio Community 2015.

A designer can use a set of predefined controls, such as textboxes, checkboxes, images, or buttons. What's more, one can easily arrange controls in various ways: vertically, horizontally, or in a grid. That is not all; developers can prepare their own controls as well. Such controls can be configured and placed on many pages within the application. It is also possible to prepare dedicated versions of particular pages for various types of devices, such as smartphones and desktops.

You have already learned how to place a new control on a page by dragging it from the Toolbox window. In this article, you will see how to add a control in XAML as well as how to handle controls programmatically, so that controls can change their appearance or new controls can be added to the page when specific conditions are met.

Another important question is how to provide the user with a consistent user interface across the whole application. When developing solutions for the Windows 10 operating system, this task can easily be accomplished by applying styles. In this article, you will learn how to specify both page-limited and application-limited styles that can be applied either to particular controls or to all controls of a given type.

At the end, you could ask yourself a simple question: "Why should I restrict access to my new awesome application to people who know the particular language the user interface was prepared in?" You should not! And in this article, you will also learn how to localize content and present it in various languages. Of course, the localization uses additional resource files, so translations can be prepared not by a developer, but by a specialist who knows the given language well.

Adding a button

When developing applications, you can use a set of predefined controls, among which is a button. It allows you to handle the event of the user pressing the button. Of course, the appearance of the button can easily be adjusted, for instance by choosing a proper background or border, as you will see in this recipe.

The button can present textual content, which can be adjusted to the user's needs, for instance by choosing a proper color or font size. The content shown on the button does not have to be textual, either: you can prepare a button that presents an image instead of a text, a text over an image, or a text located next to a small icon that visually describes the operation. Such modifications are presented in the following part of this recipe as well.

Getting ready

To step through this recipe, you only need the automatically generated project.
How to do it…

Add a button to the page by modifying the content of the MainPage.xaml file, as follows:

<Page (...)>
  <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Button Content="Click me!"
            Foreground="#0a0a0a"
            FontWeight="SemiBold"
            FontSize="20"
            FontStyle="Italic"
            Background="LightBlue"
            BorderBrush="RoyalBlue"
            BorderThickness="5"
            Padding="20 10"
            VerticalAlignment="Center"
            HorizontalAlignment="Center" />
  </Grid>
</Page>

Generate a method for handling the event of clicking the button by selecting the button (either in the graphical designer or in the XAML code) and double-clicking on the Click field in the Properties window, with the Event handlers for the selected element option (the lightning icon) selected. The automatically generated method is as follows:

private void Button_Click(object sender, RoutedEventArgs e)
{
    // The logic to run when the user presses the button goes here
}

How it works…

In the preceding example, the Button control is placed within a grid. It is centered both vertically and horizontally, as specified by the VerticalAlignment and HorizontalAlignment properties, which are set to Center. The background color (Background) is set to LightBlue. The border is specified by two properties, namely BorderBrush and BorderThickness; the first property chooses its color (RoyalBlue), while the other represents its thickness (5 pixels). What's more, the padding (Padding) is set to 20 pixels on the left- and right-hand sides and 10 pixels at the top and bottom.

The button presents the Click me! text defined as the value of the Content property. The text is shown in the color #0a0a0a in a semi-bold italic font of size 20, as specified by the Foreground, FontWeight, FontStyle, and FontSize properties, respectively. If you run the application on a local machine, you should see the styled button in the center of the window.

It is worth mentioning that the IDE supports a live preview of the designed page, so you can modify the values of particular properties and get real-time feedback on the target appearance directly in the graphical designer. It is a really great feature that does not require you to run the application to see the impact of each introduced change.

There's more…

As already mentioned, even the Button control has many advanced features. For example, you can place an image instead of a text, present a text over an image, or show an icon next to the text. Such scenarios are presented and explained now.

First, let's focus on replacing the textual content with an image by modifying the XAML code that represents the Button control, as follows:

<Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center">
  <Image Source="/Assets/Image.jpg" />
</Button>

Of course, you should also add the Image.jpg file to the Assets directory. To do so, navigate to Add | Existing Item… from the context menu of the Assets node in the Solution Explorer window. In the Add Existing Item window, choose the Image.jpg file and click on the Add button.

As you can see, the previous example uses the Image control. No more information about this control is presented in this recipe, because it is the topic of one of the next recipes, namely Adding an image. If you run the application now, you should see a button presenting only the image.

The second additional example presents a button with a text over an image. To do so, let's modify the XAML code as follows:

<Button MaxWidth="300" VerticalAlignment="Center" HorizontalAlignment="Center">
  <Grid>
    <Image Source="/Assets/Image.jpg" />
    <TextBlock Text="Click me!"
               Foreground="White"
               FontWeight="Bold"
               FontSize="28"
               VerticalAlignment="Bottom"
               HorizontalAlignment="Center"
               Margin="10" />
  </Grid>
</Button>

You'll find more information about the Grid, Image, and TextBlock controls in the next recipes, namely Arranging controls in a grid, Adding an image, and Adding a label. For this reason, the usage of these controls is not explained in the current recipe.

As the last example, you will see a button that contains both a textual label and an icon. Such a solution can be accomplished using the StackPanel, TextBlock, and Image controls, as shown in the following code snippet:

<Button Background="#353535" VerticalAlignment="Center" HorizontalAlignment="Center" Padding="20">
  <StackPanel Orientation="Horizontal">
    <Image Source="/Assets/Icon.png" MaxHeight="32" />
    <TextBlock Text="Accept" Foreground="White" FontSize="28" Margin="20 0 0 0" />
  </StackPanel>
</Button>

Of course, you should not forget to add the Icon.png file to the Assets directory, as already explained in this recipe.


Android Game Development with Unity3D

Packt
23 Nov 2016
8 min read
In this article by Wajahat Karim, author of the book Mastering Android Game Development with Unity, we will be creating addictive fun games using a very famous game engine called Unity3D. We will cover the following topics:

Game engines and Unity3D
Features of Unity3D
Basics of Unity game development

Game engines and Unity3D

A game engine is a software framework designed for the creation and development of video games. Many tools and frameworks are available for game designers and developers to code a game quickly and easily without building everything from the ground up. As time passed, game engines became more mature and easier for developers to use, offering feature-rich environments. Starting from native code frameworks for Android such as AndEngine, Cocos2d-x, LibGDX, and so on, game engines began providing clean user interfaces and drag-and-drop functionality to make game development easier. These engines differ in user interface, features, porting support, and many other things, but they all have one thing in common: in the end, they create video games.

Unity (http://unity3d.com) is a cross-platform game engine developed by Unity Technologies. It made its first public announcement at the Apple Worldwide Developers Conference in 2005, supporting game development only for Mac OS, but it has since been extended to target more than 15 platforms for desktop, mobile, and consoles. It is notable for its one-click ability to port games to multiple platforms, including BlackBerry 10, Windows Phone 8, Windows, OS X, Linux, Android, iOS, Unity Web Player (including Facebook), Adobe Flash, PlayStation 3, PlayStation 4, PlayStation Vita, Xbox 360, Xbox One, Wii U, and Wii.

Unity has a fantastic interface that lets developers manage a project really efficiently from the word go. It has nice drag-and-drop functionality, with behavior scripts written in C#, JavaScript (or UnityScript), or Boo connected to visual objects to define custom logic and functionality quite easily. Unity has proven quite easy to learn for new developers who are just starting out with game development, and now larger studios have also started using it, for good reasons. Unity is one of those engines that support both 2D and 3D games without putting developers in trouble or confusing them. Due to its popularity across the game development industry, it has a vast collection of online tutorials, great documentation, and a very helpful community of developers.

Features of Unity3D

Unity is a game development ecosystem comprising a powerful rendering engine, intuitive tools, rapid workflows for 2D and 3D games, all-in-one deployment support, and thousands of free and paid ready-made assets, together with a helpful developers' community.
The feature list includes the following:

An easy workflow allowing developers to rapidly assemble scenes in an intuitive editor workspace
Quality game creation, with AAA visuals, high-definition audio, and full-throttle action without any glitches on screen
Dedicated tools for both 2D and 3D game creation, with shared conventions to make things easy for developers
A very unique and flexible animation system to create natural animations with very little effort
A smooth frame rate with reliable performance on all the platforms where developers publish their games
One-click deployment to all platforms, from desktops, browsers, and mobiles to consoles, within minutes
Reduced development time, thanks to the reusable assets available in the huge asset store

Basics of Unity game development

Before delving into the details of Unity3D and game development concepts, let's have a look at some of the very basics of Unity 5.0: the Unity interface, menu items, using assets, creating scenes, and publishing builds.

Unity editor interface

When you launch Unity 5.0 for the first time, you will be presented with an editor with a few panels on the left, right, and bottom of the screen (Fig 1.7: Unity 5 editor interface at first launch).

First of all, take your time to look over the editor and become a little familiar with it. The Unity editor is divided into small panels and views, which can be dragged around to customize the workspace according to the developer's or designer's needs. Unity 5 comes with some prebuilt workspace layout templates, which can be selected from the Layout drop-down menu at the top-right corner of the screen (Fig 1.8: Unity 5 editor layouts).

The layout displayed in the editor shown in the first screenshot is the Default layout. You can select the other layouts and see how the editor's interface changes, and how the different panels are placed at different positions in each layout. This book uses the 2 by 3 workspace layout for the whole game (Fig 1.9: Unity 5 2 by 3 layout with the names of the views and panels highlighted).

As shown in the preceding figure, the Unity editor contains different views and panels. Every panel and view has a specific purpose, described as follows:

Scene view

The Scene view is the whole stage for game development, and it contains every asset in the game, from a tiny point to any heavy 3D model. The Scene view is used to select and position environments, characters, enemies, the player, the camera, and all other objects that can be placed on the stage for the game. All objects that can be placed and shown in the game are called game objects. The Scene view allows developers to manipulate game objects: selecting, scaling, rotating, deleting, moving, and so on. It also provides controls such as navigation and transformation. In simple words, the Scene view is the interactive sandbox for developers and designers.

Game view

The Game view is the final representation of how your game will look when published and deployed on the target devices, and it is rendered from the cameras of the scene. This view is connected to the play mode navigation bar at the top center of the whole Unity workspace.
Fig 1.14 Play mode bar

When the game is played in the editor, this control bar turns blue. A very interesting feature of Unity is that it allows developers to pause the game while it is running, and to see and change properties, transforms, and much more at runtime, without recompiling the whole game, which makes for a quick workflow.

Hierarchy view

The Hierarchy view is the first place to select or handle any game object available in the scene. It contains every game object in the current scene. It is a tree-type structure, which allows developers to utilize the parent-and-child concept on game objects easily. The following figure shows a simple Hierarchy view:

Fig 1.16 Hierarchy view

Project browser panel

This panel looks like a view, but it is called the Project browser panel. It is an embedded file directory in Unity and contains all the files and folders included in the game project. The following figure shows a simple Project browser panel:

Fig 1.17 Project browser panel

The left side of the panel shows a hierarchical directory, while the rest of the panel shows the files, or assets, as they are called in Unity. Unity represents these files with different icons to differentiate them according to their file types. These files can be sprite images, textures, model files, sounds, and so on. You can search for any specific file by typing in the search text box. To the right of the search box, there are button controls for further filters such as animation files, audio clip files, and so on. An interesting thing about the Project browser panel is that if a file is not available in the Assets, Unity starts looking for it on the Unity Asset Store and presents you with the available free and paid assets.

Inspector panel

This is the most important panel for development in Unity. Unity structures the whole game in the form of game objects and assets. These game objects in turn contain components such as transforms, colliders, scripts, meshes, and so on. Unity lets developers manage these components of each game object through the Inspector panel. The following figure shows a simple Inspector panel for a game object:

Fig 1.18 Inspector panel

These components vary in type, for example, Physics, Mesh, Effects, Audio, UI, and so on. These components can be added to any object by selecting them from the Component menu. The following figure shows the Component menu:

Fig 1.19 Components menu

Summary

In this article, you learned about game engines such as Unity3D, which is used to create games for Android devices. We also discussed the important features of Unity along with the basics of its development environment.

Resources for Article:

Further resources on this subject:
The Game World [article]
Customizing the Player Character [article]
Animation features in Unity 5 [article]
Debugging in Vulkan

Packt
23 Nov 2016
16 min read
In this article by Parminder Singh, author of Learning Vulkan, we learn Vulkan debugging in order to avoid unpleasant mistakes.

Vulkan allows you to perform debugging through validation layers. These validation layer checks are optional and can be injected into the system at runtime. Traditional graphics APIs perform validation right up front using some sort of error-checking mechanism that is a mandatory part of the pipeline. This is indeed useful in the development phase, but it is an overhead during the release stage, because the validation bugs should already have been fixed during development. Such compulsory checks cause the CPU to spend a significant amount of time on error checking. Vulkan, on the other hand, is designed to offer maximum performance, where the optional validation process and debugging model play a vital role. Vulkan assumes the application has done its homework using the validation and debugging capabilities available at the development stage, and that it can be trusted flawlessly at the release stage.

In this article, we will learn the validation and debugging process of a Vulkan application. We will cover the following topics:

Peeking into Vulkan debugging
Understanding LunarG validation layers and their features
Implementing debugging in Vulkan

(For more resources related to this topic, see here.)

Peeking into Vulkan debugging

Vulkan debugging validates the application implementation. It not only surfaces errors, but also performs other validations, such as checking for proper API usage. It does so by verifying each parameter passed to the API, warning about potentially incorrect and dangerous API practices in use, and reporting performance-related warnings when the API is not used optimally. By default, debugging is disabled, and it's the application's responsibility to enable it. Debugging works only for those layers that are explicitly enabled at the instance level at the time of instance creation (VkInstance). When debugging is enabled, it inserts itself into the call chain for the Vulkan commands the layer is interested in. For each command, the debugging visits all the enabled layers and validates them for any potential error, warning, debugging information, and so on.

Debugging in Vulkan is simple. The following is an overview of the steps required to enable it in an application:

Enable the debugging capabilities by adding the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level.

Define the set of validation layers that are intended for debugging. For example, we are interested in the following layers at the instance and device level. For more information about these layer functionalities, refer to the next section:

VK_LAYER_GOOGLE_unique_objects
VK_LAYER_LUNARG_api_dump
VK_LAYER_LUNARG_core_validation
VK_LAYER_LUNARG_image
VK_LAYER_LUNARG_object_tracker
VK_LAYER_LUNARG_parameter_validation
VK_LAYER_LUNARG_swapchain
VK_LAYER_GOOGLE_threading

The Vulkan debugging APIs are not part of the core commands, which can be statically loaded by the loader. They are available in the form of extension APIs that can be retrieved at runtime and dynamically linked to predefined function pointers. So, as the next step, the debug extension APIs vkCreateDebugReportCallbackEXT and vkDestroyDebugReportCallbackEXT are queried and linked dynamically. These are used for the creation and destruction of the debug report.
Once the function pointers for the debug report are retrieved successfully, the former API (vkCreateDebugReportCallbackEXT) creates the debug report object. Vulkan returns the debug reports in a user-defined callback, which has to be linked to this API.

Destroy the debug report object when debugging is no longer required.

Understanding LunarG validation layers and their features

The LunarG Vulkan SDK supports the following layers for debugging and validation purposes. The following points describe some of the layers, to help you understand the functionality they offer:

VK_LAYER_GOOGLE_unique_objects: Non-dispatchable handles are not required to be unique; a driver may return the same handle for multiple objects that it considers equivalent. This behavior makes object tracking difficult, because it is not clear which object to reference at the time of deletion. This layer packs Vulkan objects into a unique identifier at the time of creation and unpacks them when the application uses them. This ensures proper object lifetime tracking at validation time. As per LunarG's recommendation, this layer must be last in the chain of validation layers, making it closest to the display driver.

VK_LAYER_LUNARG_api_dump: This layer is helpful for knowing the parameter values passed to the Vulkan APIs. It prints all the data structure parameters along with their values.

VK_LAYER_LUNARG_core_validation: This is used for validating and printing important pieces of information from the descriptor set, pipeline state, dynamic state, and so on. This layer tracks and validates GPU memory, object binding, and command buffers. Also, it validates the graphics and compute pipelines.

VK_LAYER_LUNARG_image: This layer can be used for validating texture formats, render target formats, and so on. For example, it verifies whether the requested format is supported on the device. It validates whether the image view creation parameters are reasonable for the image that the view is being created for.

VK_LAYER_LUNARG_object_tracker: This keeps track of object creation along with its use and destruction, which is helpful in avoiding memory leaks. It also validates that referenced objects are properly created and presently valid.

VK_LAYER_LUNARG_parameter_validation: This validation layer ensures that all the parameters passed to the API are correct as per the specification and meet the required expectations. It checks whether the value of a parameter is consistent and within the valid usage criteria defined in the Vulkan specification. It also checks whether the type field of a Vulkan control structure contains the value that is expected for a structure of that type.

VK_LAYER_LUNARG_swapchain: This layer validates the use of the WSI swapchain extensions. For example, it checks whether the WSI extension is available before its functions are used. It also validates that an image index is within the number of images in a swapchain.

VK_LAYER_GOOGLE_threading: This is helpful in the context of thread safety. It checks the validity of multithreaded API usage and ensures objects are used safely when calls run under multiple threads. It reports threading rule violations and enforces a mutex for such calls. It also allows the application to continue running without actually crashing, despite the reported threading problem.

VK_LAYER_LUNARG_standard_validation: This enables all the standard layers in the correct order.
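Before enabling any of these layers, it can be useful to confirm which ones the installed SDK actually exposes. The following standalone sketch is not part of the book's VulkanLED code; it simply uses the core vkEnumerateInstanceLayerProperties API to list the instance layers available on the system:

#include <iostream>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    // First call retrieves the number of layers, second call fills the list
    uint32_t layerCount = 0;
    vkEnumerateInstanceLayerProperties(&layerCount, nullptr);

    std::vector<VkLayerProperties> layers(layerCount);
    vkEnumerateInstanceLayerProperties(&layerCount, layers.data());

    for (const VkLayerProperties &layer : layers) {
        std::cout << layer.layerName << ": " << layer.description << "\n";
    }
    return 0;
}

Any layer name printed here can be passed safely to vkCreateInstance() on this system; names that do not appear should be dropped, as the book's areLayersSupported() utility does later.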
For more information on validation layers, visit LunarG's official website. Check out https://vulkan.lunarg.com/doc/sdk, and specifically refer to the Validation layer details section for more details.

Implementing debugging in Vulkan

Since debugging is exposed by validation layers, most of the core implementation of the debugging will be done under the VulkanLayerAndExtension class (VulkanLED.h/.cpp). In this section, we will learn about the implementation that will help us enable the debugging process in Vulkan.

The Vulkan debug facility is not part of the default core functionalities. Therefore, in order to enable debugging and access the report callback, we need to add the necessary extensions and layers:

Extension: Add the VK_EXT_DEBUG_REPORT_EXTENSION_NAME extension at the instance level. This will help in exposing the Vulkan debug APIs to the application:

vector<const char *> instanceExtensionNames = {
    . . . . // other extensions
    VK_EXT_DEBUG_REPORT_EXTENSION_NAME,
};

Layer: Define the following layers at the instance level to allow debugging at these layers:

vector<const char *> layerNames = {
    "VK_LAYER_GOOGLE_threading",
    "VK_LAYER_LUNARG_parameter_validation",
    "VK_LAYER_LUNARG_device_limits",
    "VK_LAYER_LUNARG_object_tracker",
    "VK_LAYER_LUNARG_image",
    "VK_LAYER_LUNARG_core_validation",
    "VK_LAYER_LUNARG_swapchain",
    "VK_LAYER_GOOGLE_unique_objects"
};

In addition to the enabled validation layers, the LunarG SDK provides a special layer called VK_LAYER_LUNARG_standard_validation. This enables basic validation in the correct order, as listed here. Also, this built-in metadata layer loads a standard set of validation layers in the optimal order. It is a good choice if you are not very particular when it comes to layers:

a) VK_LAYER_GOOGLE_threading
b) VK_LAYER_LUNARG_parameter_validation
c) VK_LAYER_LUNARG_object_tracker
d) VK_LAYER_LUNARG_image
e) VK_LAYER_LUNARG_core_validation
f) VK_LAYER_LUNARG_swapchain
g) VK_LAYER_GOOGLE_unique_objects

These layers are then supplied to the vkCreateInstance() API to enable them:

VulkanApplication* appObj = VulkanApplication::GetInstance();
appObj->createVulkanInstance(layerNames, instanceExtensionNames, title);

// VulkanInstance::createInstance()
VkResult VulkanInstance::createInstance(vector<const char *>& layers,
        std::vector<const char *>& extensionNames,
        char const*const appName)
{
    . . .
    VkInstanceCreateInfo instInfo = {};

    // Specify the list of layer names to be enabled.
    instInfo.enabledLayerCount = layers.size();
    instInfo.ppEnabledLayerNames = layers.data();

    // Specify the list of extensions to
    // be used in the application.
    instInfo.enabledExtensionCount = extensionNames.size();
    instInfo.ppEnabledExtensionNames = extensionNames.data();
    . . .
    vkCreateInstance(&instInfo, NULL, &instance);
}

The validation layers are very specific to the vendor and SDK version. Therefore, it is advisable to first check whether the layers are supported by the underlying implementation before passing them to the vkCreateInstance() API. This way, the application remains portable when run against another driver implementation. areLayersSupported() is a user-defined utility function that inspects the incoming layer names against the system-supported layers.
The unsupported layers are reported to the application and removed from the layer names before feeding them into the system:

// VulkanLED.cpp
VkBool32 VulkanLayerAndExtension::areLayersSupported
                        (vector<const char *> &layerNames)
{
    uint32_t checkCount = layerNames.size();
    uint32_t layerCount = layerPropertyList.size();
    std::vector<const char*> unsupportLayerNames;

    for (uint32_t i = 0; i < checkCount; i++) {
        VkBool32 isSupported = 0;
        for (uint32_t j = 0; j < layerCount; j++) {
            if (!strcmp(layerNames[i],
                    layerPropertyList[j].properties.layerName)) {
                isSupported = 1;
            }
        }
        if (!isSupported) {
            std::cout << "No layer support found, removed"
                " from layer: " << layerNames[i] << endl;
            unsupportLayerNames.push_back(layerNames[i]);
        }
        else {
            cout << "Layer supported: " << layerNames[i] << endl;
        }
    }

    for (auto i : unsupportLayerNames) {
        auto it = std::find(layerNames.begin(), layerNames.end(), i);
        if (it != layerNames.end()) layerNames.erase(it);
    }

    return true;
}

The debug report is created using the vkCreateDebugReportCallbackEXT API. This API is not part of Vulkan's core commands; therefore, the loader is unable to link it statically. If you try to access it in the following manner, you will get an undefined symbol reference error:

vkCreateDebugReportCallbackEXT(instance, NULL, NULL, NULL);

All the debug-related APIs need to be queried using the vkGetInstanceProcAddr() API and linked dynamically. The retrieved API reference is stored in a corresponding function pointer called PFN_vkCreateDebugReportCallbackEXT. The VulkanLayerAndExtension::createDebugReportCallback() function retrieves the create and destroy debug APIs, as shown in the following implementation:

/********* VulkanLED.h *********/

// Declaration of the create and destroy function pointers
PFN_vkCreateDebugReportCallbackEXT dbgCreateDebugReportCallback;
PFN_vkDestroyDebugReportCallbackEXT dbgDestroyDebugReportCallback;

/********* VulkanLED.cpp *********/

VulkanLayerAndExtension::createDebugReportCallback(){
    . . .
    // Get vkCreateDebugReportCallbackEXT API
    dbgCreateDebugReportCallback = (PFN_vkCreateDebugReportCallbackEXT)
        vkGetInstanceProcAddr(*instance, "vkCreateDebugReportCallbackEXT");
    if (!dbgCreateDebugReportCallback) {
        std::cout << "Error: GetInstanceProcAddr unable to locate "
            "vkCreateDebugReportCallbackEXT function.\n";
        return VK_ERROR_INITIALIZATION_FAILED;
    }

    // Get vkDestroyDebugReportCallbackEXT API
    dbgDestroyDebugReportCallback =
        (PFN_vkDestroyDebugReportCallbackEXT)vkGetInstanceProcAddr
        (*instance, "vkDestroyDebugReportCallbackEXT");
    if (!dbgDestroyDebugReportCallback) {
        std::cout << "Error: GetInstanceProcAddr unable to locate "
            "vkDestroyDebugReportCallbackEXT function.\n";
        return VK_ERROR_INITIALIZATION_FAILED;
    }
    . . .
}

The vkGetInstanceProcAddr() API obtains instance-level extensions dynamically; these extensions are not exposed statically on a platform and need to be linked through this API dynamically. The following is the signature of this API:

PFN_vkVoidFunction vkGetInstanceProcAddr(
    VkInstance instance,
    const char* name);

The following table describes the API fields:

instance: A VkInstance variable. If this variable is NULL, then name must be one of vkEnumerateInstanceExtensionProperties, vkEnumerateInstanceLayerProperties, or vkCreateInstance.
name: The name of the API that needs to be queried for dynamic linking.

Using the dbgCreateDebugReportCallback() function pointer, create the debug report object and store its handle in debugReportCallback.
The second parameter of the API accepts a VkDebugReportCallbackCreateInfoEXT control structure. This data structure defines the behavior of the debugging, such as what the debug information should include: errors, general warnings, information, performance-related warnings, debug information, and so on. In addition, it also takes a reference to a user-defined function (debugFunction); this helps filter and print the debugging information once it is retrieved from the system. Here's the syntax for creating the debug report:

struct VkDebugReportCallbackCreateInfoEXT {
    VkStructureType                 sType;
    const void*                     pNext;
    VkDebugReportFlagsEXT           flags;
    PFN_vkDebugReportCallbackEXT    pfnCallback;
    void*                           pUserData;
};

The following table describes the purpose of the mentioned API fields:

sType: The type information of this control structure. It must be specified as VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT.
flags: Defines the kind of debugging information to be retrieved when debugging is on; the next table defines these flags.
pfnCallback: Refers to the function that filters and displays the debug messages.

The VkDebugReportFlagBitsEXT control structure can exhibit a bitwise combination of the following flag values:

VK_DEBUG_REPORT_INFORMATION_BIT_EXT: Informational messages that may be handy during development
VK_DEBUG_REPORT_WARNING_BIT_EXT: Usage that may indicate an application bug, though it is not necessarily fatal
VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT: Potentially non-optimal use of Vulkan
VK_DEBUG_REPORT_ERROR_BIT_EXT: Invalid API usage that may cause undefined results, including a crash
VK_DEBUG_REPORT_DEBUG_BIT_EXT: Diagnostic information from the implementation and layers

The createDebugReportCallback function implements the creation of the debug report. First, it creates the VkDebugReportCallbackCreateInfoEXT control structure object and fills it with relevant information. This primarily includes two things: first, assigning a user-defined function (pfnCallback) that will print the debug information received from the system (see the next point), and second, assigning the debugging flags (flags) in which the programmer is interested:

/********* VulkanLED.h *********/

// Handle of the debug report callback
VkDebugReportCallbackEXT debugReportCallback;

// Debug report callback create information control structure
VkDebugReportCallbackCreateInfoEXT dbgReportCreateInfo = {};

/********* VulkanLED.cpp *********/

VulkanLayerAndExtension::createDebugReportCallback(){
    . . .
    // Define the debug report control structure,
    // provide the reference of 'debugFunction';
    // this function prints the debug information on the console.
    dbgReportCreateInfo.sType =
        VK_STRUCTURE_TYPE_DEBUG_REPORT_CREATE_INFO_EXT;
    dbgReportCreateInfo.pfnCallback = debugFunction;
    dbgReportCreateInfo.pUserData = NULL;
    dbgReportCreateInfo.pNext = NULL;
    dbgReportCreateInfo.flags =
        VK_DEBUG_REPORT_WARNING_BIT_EXT |
        VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT |
        VK_DEBUG_REPORT_ERROR_BIT_EXT |
        VK_DEBUG_REPORT_DEBUG_BIT_EXT;

    // Create the debug report callback and store the handle
    // into 'debugReportCallback'
    result = dbgCreateDebugReportCallback
        (*instance, &dbgReportCreateInfo, NULL, &debugReportCallback);
    if (result == VK_SUCCESS) {
        cout << "Debug report callback object created successfully\n";
    }
    return result;
}

Define the debugFunction() function that prints the retrieved debug information in a user-friendly way.
It describes the type of debug information along with the reported message:

VKAPI_ATTR VkBool32 VKAPI_CALL VulkanLayerAndExtension::debugFunction(
        VkFlags msgFlags,
        VkDebugReportObjectTypeEXT objType,
        uint64_t srcObject,
        size_t location,
        int32_t msgCode,
        const char *pLayerPrefix,
        const char *pMsg,
        void *pUserData){

    if (msgFlags & VK_DEBUG_REPORT_ERROR_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] ERROR: [" << pLayerPrefix
            << "] Code" << msgCode << ":" << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_WARNING_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] WARNING: [" << pLayerPrefix
            << "] Code" << msgCode << ":" << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_INFORMATION_BIT_EXT) {
        std::cout << "[VK_DEBUG_REPORT] INFORMATION: [" << pLayerPrefix
            << "] Code" << msgCode << ":" << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_PERFORMANCE_WARNING_BIT_EXT) {
        cout << "[VK_DEBUG_REPORT] PERFORMANCE: [" << pLayerPrefix
            << "] Code" << msgCode << ":" << pMsg << std::endl;
    }
    else if (msgFlags & VK_DEBUG_REPORT_DEBUG_BIT_EXT) {
        cout << "[VK_DEBUG_REPORT] DEBUG: [" << pLayerPrefix
            << "] Code" << msgCode << ":" << pMsg << std::endl;
    }
    else {
        return VK_FALSE;
    }

    return VK_FALSE; // do not abort the triggering Vulkan call
}

The following table describes the various fields of the debugFunction() callback:

msgFlags: Specifies the type of debugging event that triggered the call, for example, an error, warning, performance warning, and so on.
objType: The type of object that is manipulated by the triggering call.
srcObject: The handle of the object that is being created or manipulated by the triggering call.
location: The place in the code describing the event.
msgCode: The message code.
pLayerPrefix: The layer responsible for triggering the debug event.
pMsg: The debug message text.
pUserData: Any application-specific user data specified for the callback using this field.

The debugFunction callback has a Boolean return value. Returning VK_FALSE indicates that the command chain should continue through the subsequent validation layers even after an error has occurred, whereas returning VK_TRUE asks the validation layer to abort the Vulkan call that triggered the report. It is advisable to stop the execution at the very first error. Having an error itself indicates that something has occurred unexpectedly; letting the system run under these circumstances may lead to undefined results or further errors, which can be quite meaningless sometimes. Aborting at the first error gives the developer a better chance to concentrate on and fix the reported problem. In contrast, letting the system throw a bunch of errors can be cumbersome and may leave the developer in a confused state.

In order to enable debugging at vkCreateInstance, provide dbgReportCreateInfo to the VkInstanceCreateInfo's pNext field:

VkInstanceCreateInfo instInfo = {};
. . .
instInfo.pNext = &layerExtension.dbgReportCreateInfo;
vkCreateInstance(&instInfo, NULL, &instance);

Finally, once the debug report is no longer in use, destroy the debug callback object:

void VulkanLayerAndExtension::destroyDebugReportCallback(){
    VulkanApplication* appObj = VulkanApplication::GetInstance();
    dbgDestroyDebugReportCallback(instance, debugReportCallback, NULL);
}

The following is the output from the implemented debug report. Your output may differ from this based on the GPU vendor and SDK provider.
Also, the explanation of the errors or warnings reported is very specific to the SDK itself. But at a higher level, the specification will hold; this means you can expect to see a debug report with warnings, information, debugging help, and so on, based on the debugging flags you have turned on.

Summary

This article was short, precise, and full of practical implementations. Working on Vulkan without debugging capabilities is like shooting in the dark. We know very well that Vulkan demands an appreciable amount of programming, and developers make mistakes for obvious reasons; they are humans after all. We learn from our mistakes, and debugging allows us to find and correct these errors. It also provides insightful information that helps us build quality products.

Let's do a quick recap. We learned the Vulkan debugging process. We looked at the various LunarG validation layers and understood the roles and responsibilities offered by each one of them. Next, we added a few selected validation layers that we were interested in debugging. We also added the debug extension that exposes the debugging capabilities; without this, the APIs' definitions could not be dynamically linked to the application. Then, we implemented the Vulkan create debug report callback and linked it to our debug reporting callback; this callback decorates the captured debug report in a user-friendly and presentable fashion. Finally, we implemented the API to destroy the debug report callback object.

Resources for Article:

Further resources on this subject:
Get your Apps Ready for Android N [article]
Multithreading with Qt [article]
Manage Security in Excel [article]
How to create a Breakout game with Godot Engine – Part 1

George Marques
22 Nov 2016
8 min read
The Godot Engine is a piece of open source software designed to help you make any kind of game. It possesses a great number of features to ease the workflow of game development. This two-part article will cover some basic features of the engine, such as physics and scripting, by showing you how to make a game with the mechanics of the classic Breakout.

To install Godot, you can download it from the website and extract it in your place of preference. You can also install it through Steam. While the latter is a larger download, it has all the demos and export templates already installed.

Setting up the project

When you first open Godot, you are presented with the Project Manager window. Here, you can create new projects or import existing ones. It's possible to run a project directly from here without opening the editor itself. The Templates tab shows the available projects in the online Asset Library, where you can find and download community-made content.

Note that there might also be a console window, which shows messages about what's happening in the engine, such as warnings and error messages. This window must remain open, and it's also helpful for debugging. It will not show up in your final exported version of the game.

Project Manager

To start creating our game, let's click on the New Project button. Using the dialog interface, navigate to where you want to place it, and then create a new folder just for the project. Select it and choose a name for the project ("Breakout" may be a good choice). Once you do that, the new project will be shown at the top of the list. You can double-click to open it in the editor.

Creating the paddle

You will first see the main screen of the Godot editor. I rearranged the docks to suit my preferences, but you can leave the default arrangement or change it to something you like. If you click on the Settings button at the top-right corner, you can save and load layouts.

Main screen

Godot uses a system in which every object is a Node. A Scene is just a tree of nodes, and it can also be used as a "prefab", as other engines call it. Every scene can be instanced as a part of another scene. This helps in dividing the project and reusing the work. This is all explained in the documentation, which you can consult if you have any doubt.

Now we are going to create a scene for the paddle, which can later be instanced in the game scene. I like to start with an object that can be controlled by the player, so that we can start to feel what the interactivity looks like.

On the Scene dock, click on the "+" (plus) button to add a new Node (pressing Ctrl + A will also work). You'll be presented with a large collection of Nodes to choose from, each with its own behavior. For the paddle, choose a StaticBody2D. The search field can help you find it easily. This will be the root of the scene. Remember that a scene is a tree of nodes, so it needs a "root" to be its main anchor point.

You may wonder why we chose a static body for the paddle, since it will move. The reason for using this kind of node is that we don't want it to be moved by physical interaction. When the ball hits the paddle, we want the paddle to stay in the same place. We will move it only through scripting.

Select the node you just added in the Scene dock and rename it to Paddle. Save the scene as paddle.tscn. Saving often is good to avoid losing your work.

Add a new node of the type Sprite (not Sprite3D, since this is a 2D game). This is now a child of the root node in the tree. The sprite will serve as the image of the paddle. You can use the following image for it:

Paddle

Save the image in the project folder and use the FileSystem dock in the Godot editor to drag and drop it into the Texture property on the Inspector dock. Any property that accepts a file can be set with drag and drop.

Now the static body needs a collision area so that it can tell where other physics objects (like the ball) will collide. To do this, select the Paddle root node in the Scene dock and add a child node of type CollisionShape2D to it. The warning icon is there because we didn't set a shape for it yet, so let's do that now. On the Inspector dock, set the Shape property to a New RectangleShape2D. You can set the shape extents visually using the editor, or you can click on the ">" button just in front of the Shape property on the Inspector to edit its properties, and set the extents to (100, 15) if you're using the provided image. This is the half-extent, so the rectangle will be doubled in each dimension based on what you set here.
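If you prefer, the same shape can also be built from code instead of the Inspector. The following is a hedged sketch in the same Godot 2.x GDScript style as the paddle script later in this article; the exact calls may differ in other engine versions:

# Hypothetical alternative: create the paddle's collision shape from a
# script attached to the StaticBody2D, instead of using the Inspector
func _ready():
    var shape = RectangleShape2D.new()
    shape.set_extents(Vector2(100, 15)) # half-extents, doubled on each axis
    add_shape(shape) # register the shape with this physics body

Either way, the end result is the same: the static body now knows the rectangular area where collisions should be detected.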
User input

While our paddle is mostly done, it still isn't controllable. Before we delve into the scripting world, let's set up the Input Map. This is a Godot feature that allows us to abstract user input into named actions. You can later modify the keys that move the player without changing the code. It also allows you to use multiple keys and joystick buttons for a single action, making the game work on keyboard and gamepad seamlessly.

Click on the Project Settings option under the Scene menu at the top left of the window. There you can see the Input Map tab. There are some predefined actions, which are needed by UI controls. Add the actions move_left and move_right, which we will use to move the paddle. Then map the left and right arrow keys of the keyboard to them. You can also add a mapping to the D-pad left and right buttons if you want to use a joystick. Close this window when done.

Now we're ready to do some coding. Right-click on the Paddle root node in the Scene dock and select the option Add Script from the context menu. The "Create Node Script" dialog will appear; you can use the default settings, which will create a new script file with the same name as the scene (with a different extension). Godot has its own scripting language called "GDScript", which has a syntax a bit like Python and is quite easy to learn if you are familiar with programming.
You can use the following code for the Paddle script:

extends StaticBody2D

# Paddle speed in pixels per second
export var speed = 150.0
# The "export" keyword allows you to edit this value from the Inspector

# Holds the limits of the screen
var left_limit = 0
var right_limit = 0

# This function is called when this node enters the game tree
# It is useful for initialization code
func _ready():
    # Enable the processing function for this node
    set_process(true)
    # Set the limits of the paddle based on the screen size
    left_limit = get_viewport_rect().pos.x + (get_node("Sprite").get_texture().get_width() / 2)
    right_limit = get_viewport_rect().pos.x + get_viewport_rect().size.x - (get_node("Sprite").get_texture().get_width() / 2)

# The processing function
func _process(delta):
    var direction = 0
    if Input.is_action_pressed("move_left"):
        # If the player is pressing the left arrow, move to the left,
        # which means going in the negative direction of the X axis
        direction = -1
    elif Input.is_action_pressed("move_right"):
        # Same as above, but this time we go in the positive direction
        direction = 1

    # Create a movement vector
    var movement = Vector2(direction * speed * delta, 0)
    # Move the paddle using vector arithmetic
    set_pos(get_pos() + movement)

    # Here we clamp the paddle position so it doesn't go off the screen
    if get_pos().x < left_limit:
        set_pos(Vector2(left_limit, get_pos().y))
    elif get_pos().x > right_limit:
        set_pos(Vector2(right_limit, get_pos().y))

If you play the scene (using the top center bar or pressing F6), you can see the paddle and move it with the keyboard. You may find it too slow, but this will be addressed in part two of this article when we set up the game scene.

Up next

You now have a project set up and a paddle on the screen that can be controlled by the player. You also have some understanding of how the Godot Engine operates with its nodes, scenes, and scripts. In part two, you will learn how to add the ball and the destroyable bricks.

About the Author:

George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping solve bugs and add new features to the software, while also providing solutions to questions from the community.
Installing vCenter Site Recovery Manager 6.1

Packt
22 Nov 2016
3 min read
In this article by Abhilash G B, the author of Disaster Recovery Using VMware vSphere Replication and vCenter Site Recovery Manager - Second Edition, we will learn about Site Recovery Manager and its architecture.

(For more resources related to this topic, see here.)

What is Site Recovery Manager?

vCenter Site Recovery Manager (SRM) is an orchestration software that is used to automate disaster recovery testing and failover. It can be configured to leverage either vSphere Replication or a supported array-based replication. With SRM, you can create protection groups and run recovery plans against them. These recovery plans can then be used to test the Disaster Recovery (DR) setup, perform a planned failover, or be initiated during a DR event.

SRM is not a product that performs an automatic failover: there is no intelligence built into SRM that would detect a disaster or outage and fail over the virtual machines (VMs). The DR process has to be initiated manually. Hence, it is not a high-availability solution either, but purely a tool that orchestrates a recovery plan.

The SRM architecture

vCenter SRM is not a tool that works on its own. It needs to talk to other components in your vSphere environment. The following components are involved in an SRM-protected environment:

SRM requires both the protected and the recovery sites to be managed by separate instances of vCenter Server. It also requires an SRM instance at each site. SRM now uses the PSC (Platform Services Controller) as an intermediary to fetch vCenter information. The following are the possible topologies:

SRM as a solution cannot work on its own. This is because it is only an orchestration tool and does not include a replication engine. However, it can leverage either a supported array-based replication or VMware's proprietary replication engine, vSphere Replication.

Array manager

Each SRM instance needs to be configured with an array manager for it to communicate with the storage array. The array manager will detect the storage array using the information you supply to connect to the array. Before you can even add an array manager, you need to install an array-specific Storage Replication Adapter (SRA). This is because the array manager uses the installed SRA to collect the replication information from the array.

Storage Replication Adapter (SRA)

The SRA is a storage vendor component that makes SRM aware of the replication configuration at the array. SRM leverages the SRA's ability to gather information regarding the replicated volumes and the direction of the replication from the array. SRM also uses the SRA for the following functions:

Test failover
Recovery
Reprotect

It is important to understand that SRM requires the SRA to be installed for all of its functions that leverage array-based replication. When all these components are put together, a site protected by SRM would look as depicted in the following figure:

SRM conceptually assumes that the protected and recovery sites are geographically separated, but such a separation is not mandatory. You can use SRM to protect a chassis of servers and have another chassis in the same data center as the recovery site.

Summary

In this article, we learned what VMware vCenter Site Recovery Manager is, and also about its architecture.

Resources for Article:

Further resources on this subject:
Virtualization [article]
VM, It Is Not What You Think! [article]
The importance of Hyper-V Security [article]
All About the Protocol

Packt
22 Nov 2016
19 min read
In this article by Jon Hoffman, the author of the book Swift 3 Protocol-Oriented Programming - Second Edition, we take a close look at the protocol itself. Coming from an object-oriented background, I am very familiar with protocols (or interfaces, as they are known in other object-oriented languages). However, prior to Apple introducing protocol-oriented programming, protocols, or interfaces, were rarely the focal point of my application designs, unless I was working with an Open Service Gateway Initiative (OSGi) based project. When I designed an application in an object-oriented way, I always began the design with the objects. The protocols or interfaces were then used where they were appropriate, mainly for polymorphism when a class hierarchy did not make sense. Now all that has changed, and with protocol-oriented programming, the protocol has been elevated to the focal point of our application design.

(For more resources related to this topic, see here.)

In this article you will learn the following:

How to define property and method requirements within a protocol
How to use protocol inheritance and composition
How to use a protocol as a type
What polymorphism is

When we design an application in an object-oriented way, we begin the design by focusing on the objects and how they interact. The object is a data structure that contains information about the attributes of the object in the form of properties, and the actions performed by or to the object in the form of methods. We cannot create an object without a blueprint that tells the application what attributes and actions to expect from the object. In most object-oriented languages, this blueprint comes in the form of a class. A class is a construct that allows us to encapsulate the properties and actions of an object into a single type.

Most object-oriented programming languages contain an interface type. An interface is a type that contains method and property signatures, but does not contain any implementation details. An interface can be considered a contract: any type that conforms to the interface must implement the required functionality defined within it. Interfaces in most object-oriented languages are primarily used as a way to achieve polymorphism. There are some frameworks, such as OSGi, that use interfaces extensively; however, in most object-oriented designs, the interface takes a back seat to the class and the class hierarchy.

Designing an application in a protocol-oriented way is significantly different from designing it in an object-oriented way. As we stated earlier, object-oriented design begins with the objects and the interactions between them, while protocol-oriented design begins with the protocol. While protocol-oriented design is about much more than just the protocol, we can think of the protocol as the backbone of protocol-oriented programming. After all, it would be pretty hard to have protocol-oriented programming without the protocol.

A protocol in Swift is similar to an interface in object-oriented languages: the protocol acts as a contract that defines the methods, properties, and other requirements our types need in order to perform their tasks. We say that the protocol acts as a contract because any type that adopts, or conforms to, the protocol promises to implement the requirements defined by the protocol. Any class, structure, or enumeration can conform to a protocol. A type cannot conform to a protocol unless it implements all required functionality defined within the protocol.
If a type adopts a protocol but does not implement all the functionality defined by the protocol, we will get a compile-time error and the project will not compile.

Most modern object-oriented programming languages implement their standard library with a class hierarchy; however, the basis of Swift's standard library is the protocol (https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/SwiftStandardLibraryReference/index.html). Therefore, not only does Apple recommend that we use the protocol-oriented programming paradigm in our applications, but they also use it in the Swift standard library. With the protocol being the basis of the Swift standard library and also the backbone of the protocol-oriented programming paradigm, it is very important that we fully understand what the protocol is and how we can use it.

In this article, we will go over the basic usage of the protocol, including the syntax for defining a protocol, how to define requirements in a protocol, and how to make our types conform to a given protocol.

Protocol syntax

In this section, we will look at how to define a protocol, define requirements within a protocol, and specify that a type conforms to a protocol.

Defining a protocol

The syntax we use to define a protocol is very similar to the syntax used to define a class, structure, or enumeration. The following example shows the syntax used to define a protocol:

protocol MyProtocol {
    //protocol definition here
}

To define a protocol, we use the protocol keyword followed by the name of the protocol. We then put the requirements that our protocol defines between curly brackets. Custom types can state that they conform to a particular protocol by placing the name of the protocol after the type's name, separated by a colon. The following example shows how we would state that the MyStruct structure conforms to the MyProtocol protocol:

struct MyStruct: MyProtocol {
    //structure implementation here
}

A type can also conform to multiple protocols. We list the multiple protocols that the type conforms to by separating them with commas. The following example shows how we would specify that the MyStruct structure type conforms to the MyProtocol, AnotherProtocol, and ThirdProtocol protocols:

struct MyStruct: MyProtocol, AnotherProtocol, ThirdProtocol {
    // Structure implementation here
}

Having a type conform to multiple protocols is a very important concept within protocol-oriented programming, as we will see later in the article. This concept is known as protocol composition.

Now let's see how we would add property requirements to our protocol.

Property requirements

A protocol can require that conforming types provide certain properties with specified names and types. The protocol does not say whether the property should be a stored or a computed property, because the implementation details are left up to the conforming types. When defining a property within a protocol, we must specify whether the property is read-only or read-write by using the get and set keywords. We also need to specify the property's type, since we cannot use type inference in a protocol.

Let's look at how we would define properties within a protocol by creating a protocol named FullName, as shown in the next example:

protocol FullName {
    var firstName: String {get set}
    var lastName: String {get set}
}

In the FullName protocol, we define two properties named firstName and lastName. Both of these properties are defined as read-write properties.
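Because the protocol does not dictate stored versus computed properties, two very different implementations can satisfy the same requirements. The following is a quick sketch; the Contact and Employee types are invented here for illustration:

struct Contact: FullName {
    var firstName = ""            // stored property
    var lastName = ""             // stored property
}

class Employee: FullName {
    private var record = ["first": "", "last": ""]
    var firstName: String {       // computed property
        get { return record["first"]! }
        set { record["first"] = newValue }
    }
    var lastName: String {
        get { return record["last"]! }
        set { record["last"] = newValue }
    }
}

Both types conform to FullName, and code written against the protocol cannot tell them apart.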
Any type that conforms to the FullName protocol must implement these properties. If we wanted to define a property as read-only, we would define it using only the get keyword, as shown in the following code:

var readOnly: String {get}

If the property is going to be a type property, then we must define it as such in the protocol. A type property is defined using the static keyword, as shown in the following example:

static var typeProperty: String {get}

Now let's see how we would add method requirements to our protocol.

Method requirements

A protocol can require that conforming types provide specific methods. These methods are defined within the protocol exactly as we define them within a class or structure, but without the curly brackets and method body. We can define these methods as instance or type methods using the static keyword. Adding default values to a method's parameters is not allowed when defining the method within a protocol.

Let's add a method named getFullName() to our FullName protocol:

protocol FullName {
    var firstName: String {get set}
    var lastName: String {get set}
    func getFullName() -> String
}

Our FullName protocol now requires one method named getFullName() and two read-write properties named firstName and lastName.

For value types, such as structures, if we intend for a method to modify the instance it belongs to, we must prefix the method definition with the mutating keyword. This keyword indicates that the method is allowed to modify the instance it belongs to. The following example shows how to use the mutating keyword with a method definition:

mutating func changeName()

If we mark a method requirement as mutating, we do not need to write the mutating keyword for that method when we adopt the protocol with a reference (class) type. The mutating keyword is only used with value (structure or enumeration) types.

Optional requirements

There are times when we want protocols to define optional requirements, that is, methods or properties that are not required to be implemented. To use optional requirements, we need to start off by marking the protocol with the @objc attribute. It is important to note that only classes can adopt protocols that use the @objc attribute; structures and enumerations cannot adopt them. To mark a property or method as optional, we use the optional keyword. Let's look at how we would use the optional keyword to define optional properties and methods:

@objc protocol Phone {
    var phoneNumber: String {get set}
    @objc optional var emailAddress: String {get set}
    func dialNumber()
    @objc optional func getEmail()
}

In the Phone protocol we just created, we define a required property named phoneNumber and an optional property named emailAddress. We also define a required function named dialNumber() and an optional function named getEmail().
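A conforming class implements the required members and is free to skip the optional ones; callers then use optional chaining to invoke members that might be missing. Here is a sketch; the Landline class and its values are invented for illustration:

class Landline: Phone {
    var phoneNumber = "555-0100"

    func dialNumber() {
        print("Dialing \(phoneNumber)")
    }
    // emailAddress and getEmail() are optional, so we simply omit them
}

let device: Phone = Landline()
device.dialNumber()
device.getEmail?()   // optional chaining; a no-op here because getEmail() is not implemented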
Now let's explore how protocol inheritance works.

Protocol inheritance

Protocols can inherit requirements from one or more other protocols and then add additional requirements. The following code shows the syntax for protocol inheritance:

protocol ProtocolThree: ProtocolOne, ProtocolTwo {
    // Add requirements here
}

The syntax for protocol inheritance is very similar to class inheritance in Swift, except that we are able to inherit from more than one protocol. Let's see how protocol inheritance works. We will use the FullName protocol that we defined earlier in this section and create a new protocol named Person:

protocol Person: FullName {
    var age: Int {get set}
}

Now, when we create a type that conforms to the Person protocol, we must implement the requirements defined in the Person protocol, as well as the requirements defined in the FullName protocol. As an example, we could define a Student structure that conforms to the Person protocol, as shown in the following code:

struct Student: Person {
    var firstName = ""
    var lastName = ""
    var age = 0

    func getFullName() -> String {
        return "\(firstName) \(lastName)"
    }
}

Note that in the Student structure we implemented the requirements defined in both the FullName and Person protocols; however, the only protocol specified when we defined the Student structure was the Person protocol. We only needed to list the Person protocol because it inherits all of the requirements from the FullName protocol.

Now let's look at a very important concept in the protocol-oriented programming paradigm: protocol composition.

Protocol composition

Protocol composition lets our types adopt multiple protocols. This is a major advantage that we get when we use protocols rather than a class hierarchy, because classes, in Swift and other single-inheritance languages, can only inherit from one superclass. The syntax for protocol composition is the same as the protocol inheritance we just saw. The following example shows how to do protocol composition:

struct MyStruct: ProtocolOne, ProtocolTwo, ProtocolThree {
    // implementation here
}

Protocol composition allows us to break our requirements into many smaller components rather than inheriting all requirements from a single superclass or class hierarchy. This allows our type families to grow in width rather than height, which means we avoid creating bloated types that contain requirements that are not needed. Protocol composition may seem like a very simple concept, but it is essential to protocol-oriented programming. Let's look at an example of protocol composition so we can see the advantage we get from using it.

Let's say that we have the class hierarchy shown in the following diagram:

In this class hierarchy, we have a base class named Athlete. The Athlete base class has two subclasses named Amateur and Pro. These classes are used depending on whether the athlete is an amateur or a pro athlete. An amateur athlete may be a collegiate athlete, and we would need to store information such as which school they go to and their GPA. A pro athlete is one who gets paid for playing the game. For pro athletes, we would need to store information such as what team they play for and their salary.

In this example, things get a little messy under the Amateur and Pro classes. As we can see, we have a separate football player class under both the Amateur and Pro classes (the AmFootballPlayer and ProFootballPlayer classes). We also have a separate baseball class under both the Amateur and Pro classes (the AmBaseballPlayer and ProBaseballPlayer classes). This requires a lot of duplicate code between these classes.

With protocol composition, instead of having a class hierarchy where our subclasses inherit all functionality from a single superclass, we have a collection of protocols that we can mix and match in our types. We then use one or more of these protocols as needed for our types. For example, we can create an AmFootballPlayer structure that conforms to the Athlete, Amateur, and FootballPlayer protocols. We could also create a ProFootballPlayer structure that conforms to the Athlete, Pro, and FootballPlayer protocols. This allows us to be very specific about the requirements for our types and only adopt the requirements that we need.
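As a concrete sketch of that idea (the book does not define these protocols here, so their requirements below are invented), the composition might look like this:

protocol Athlete {
    var name: String {get}
}

protocol Pro {
    var salary: Double {get set}
}

protocol FootballPlayer {
    func playFootball()
}

struct ProFootballPlayer: Athlete, Pro, FootballPlayer {
    var name: String
    var salary: Double

    func playFootball() {
        print("\(name) plays football for pay")
    }
}

Each protocol stays small, and ProFootballPlayer picks up exactly the three sets of requirements it needs, nothing more.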
From a pure protocol point of view, this last example may not make a lot of sense right now, because protocols only define the requirements.

One word of warning: if you find yourself creating numerous protocols that only contain one or two requirements each, then you are probably making your protocols too granular. This will lead to a design that is hard to maintain and manage.

Now let's look at how a protocol is a full-fledged type in Swift.

Using protocols as a type

Even though no functionality is implemented in a protocol, protocols are still considered a full-fledged type in the Swift programming language and can mostly be used like any other type. This means we can use protocols as parameter or return types for a function. We can also use them as the type for variables, constants, and collections. Let's take a look at some examples. For these next few examples, we will use the following PersonProtocol protocol:

protocol PersonProtocol {
    var firstName: String {get set}
    var lastName: String {get set}
    var birthDate: Date {get set}
    var profession: String {get}
    init(firstName: String, lastName: String, birthDate: Date)
}

In this PersonProtocol, we define four properties and one initializer.

For this first example, we will show how to use a protocol as a parameter and return type for a function, method, or initializer. Within the function itself, we also use PersonProtocol as the type for a variable:

func updatePerson(person: PersonProtocol) -> PersonProtocol {
    var newPerson: PersonProtocol = person // initialize from the incoming person
    // Code to update person goes here
    return newPerson
}
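The examples in the rest of this section use two concrete types, SwiftProgrammer and FootballPlayer, that conform to PersonProtocol. Their definitions are not shown in this excerpt, so here is one possible minimal sketch (note that this FootballPlayer structure is unrelated to the FootballPlayer protocol sketched earlier):

struct SwiftProgrammer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: Date
    var profession: String { return "Swift Programmer" }

    init(firstName: String, lastName: String, birthDate: Date) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}

struct FootballPlayer: PersonProtocol {
    var firstName: String
    var lastName: String
    var birthDate: Date
    var profession: String { return "Football Player" }

    init(firstName: String, lastName: String, birthDate: Date) {
        self.firstName = firstName
        self.lastName = lastName
        self.birthDate = birthDate
    }
}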
It only matters that the type conforms to the PersonProtocol protocol type. As we saw earlier, we can use our PersonProtocol protocol as the type for an array, which means that we can populate the array with instances of any type that conforms to the PersonProtocol protocol. The following is an example of this (note that the bDateProgrammer and bDatePlayer variables are instances of the Date type that would represent the birthdate of the individual): var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) var people: [PersonProtocol] = [] people.append(programmer) people.append(player) What we are seeing in these last couple of examples is a form of polymorphism. To use protocols to their fullest potential, we need to understand what polymorphism is. Polymorphism with protocols The word polymorphism comes from the Greek roots poly (meaning many) and morphe (meaning form). In programming languages, polymorphism is a single interface to multiple types (many forms). There are two reasons to learn the meaning of the word polymorphism. The first reason is that using such a fancy word can make you sound very intelligent in casual conversion. The second reason is that polymorphism provides one of the most useful programming techniques not only in object-oriented programming, but also protocol-oriented programming. Polymorphism lets us interact with multiple types though a single uniform interface. In the object-oriented programming world the single uniform interface usually comes from a superclass, while in the protocol-oriented programming world that single interface usually comes from a protocol. In the last section, we saw two examples of polymorphism with Swift. The first example was the following code: var myPerson: PersonProtocol myPerson = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) myPerson = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) In this example, we had a single variable of the PersonProtocol type. Polymorphism allowed us to set the variable to instances of any type that conforms to the PersonProtocol protocol, such as the SwiftProgrammer or FootballPlayer types. The other example of polymorphism was in the following code: var programmer = SwiftProgrammer(firstName: "Jon", lastName: "Hoffman", birthDate: bDateProgrammer) var player = FootballPlayer(firstName: "Dan", lastName: "Marino", birthDate: bDatePlayer) var people: [PersonProtocol] = [] people.append(programmer) people.append(player) In this example, we created an array of PersonProtocol types. Polymorphism allowed us to add instances of any types that conform to PersonProtocol to this array. When we access an instance of a type though a single uniform interface, as we just showed, we are unable to access type-specific functionality. As an example, if we had a property in the FootballPlayer type that records the age of the player, we would be unable to access that property because it is not defined in the PeopleProtocol protocol. If we do need to access type-specific functionality, we can use type casting. Type casting with protocols Type casting is a way to check the type of an instance and/or to treat the instance as a specified type. In Swift, we use the is keyword to check whether an instance is of a specific type and the as keyword to treat an instance as a specific type. 
The following example shows how we would use the is keyword:

if person is SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

In this example, the conditional statement returns true if the person instance is of the SwiftProgrammer type or false if it isn't. We can also use the switch statement (as shown in the next example) if we want to check for multiple types:

for person in people {
    switch (person) {
    case is SwiftProgrammer:
        print("\(person.firstName) is a Swift Programmer")
    case is FootballPlayer:
        print("\(person.firstName) is a Football Player")
    default:
        print("\(person.firstName) is an unknown type")
    }
}

We can use the where statement in combination with the is keyword to filter an array to only return instances of a specific type. In the next example, we filter an array that contains instances of the PersonProtocol to only return those elements of the array that are instances of the SwiftProgrammer type:

for person in people where person is SwiftProgrammer {
    print("\(person.firstName) is a Swift Programmer")
}

Now let's look at how we would cast an instance to a specific type. To do this, we can use the as keyword. Since the cast can fail if the instance is not of the specified type, the as keyword comes in two forms: as? and as!. With the as? form, if the casting fails it returns a nil; with the as! form, if the casting fails we get a runtime error. Therefore, it is recommended to use the as? form unless we are absolutely sure of the instance type or we perform a check of the instance type prior to doing the cast. The following example shows how we would use the as? keyword to attempt to cast an instance of a variable to the SwiftProgrammer type:

if let p = person as? SwiftProgrammer {
    print("\(p.firstName) is a Swift Programmer")
}

Since the as? keyword returns an optional, in the last example we could use optional binding to perform the cast. If we are sure of the instance type, we can use the as! keyword as shown in the next example:

for person in people where person is SwiftProgrammer {
    let p = person as! SwiftProgrammer
    print("\(p.firstName) is a Swift Programmer")
}

Summary

While protocol-oriented programming is about so much more than just the protocol, it would be impossible to have the protocol-oriented programming paradigm without the protocol. We can think of the protocol as the backbone of protocol-oriented programming. Therefore, it is important to fully understand the protocol in order to properly implement protocol-oriented programming.

Resources for Article:

Further resources on this subject:
Using Protocols and Protocol Extensions [Article]
What we can learn from attacks on the WEP Protocol [Article]
Hosting the service in IIS using the TCP protocol [Article]
Intro to Canvas Animations

Dylan Frankland
21 Nov 2016
11 min read
I think that most developers who have not had much experience with Canvas will generally believe that it is unnecessary, non-performant, or difficult to work with. Canvas's flexibility lends itself to experimentation and to comparison against other solutions in the browser. You can find articles all over the web comparing Canvas against other approaches for animations, filter effects, GPU rendering, you name it. The way that I look at it is that Canvas is best suited to creating custom shapes and effects that hook into events or actions in the DOM. Anything else should be left to CSS, considering it can handle most animations, transitions, and normal static shapes. Mastering Canvas opens a world of possibilities to create anything such as video games, smartwatch interfaces, and even more complex custom 3D graphics. The first step is understanding canvas in 2D and animating some elements before getting into more complex logic.

A simple idea that might demonstrate the abilities of Canvas is to create a Cortana- or HAL-like interface that changes shape according to audio. First we will create the interface, a simple glowing circle, then we will practice animating it. Before getting started, I assume you will be using a newer version of Chrome. If you are using a non-WebKit browser, some of the APIs used may be slightly different.

To get started, let us create a simple index.html page with the following markup:

<!doctype html>
<html>
<head>
  <title>Demo</title>
  <style>
    body {
      padding: 0;
      margin: 0;
      background-color: #000;
    }
  </style>
</head>
<body>
  <canvas></canvas>
  <script></script>
</body>
</html>

Obviously, this is nothing special yet; it just holds the simple HTML for us to begin coding our JavaScript. Next we will start to fill in the <script> block below the <canvas> element inside of the body. The first part of your script will have to get the reference to the <canvas> element, then set its size; for this experiment, let us just set the size to the entire window:

(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
})();

Again, refreshing the page will not show much. This time let's render a small circle to begin with:

(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
  context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
  context.beginPath(); // Start to create a shape
  context.arc(0, 0, 50, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of 50, and make it 360 degrees (aka 2PI)
  context.fillStyle = '#8ED6FF'; // Set the color of the circle, make it a cool AI blue
  context.fill(); // Actually fill the shape in and close it
})();

Alright! Now we have something that we can really start to work with. If you refresh that HTML page you should now be staring at a blue dot in the center of your screen. I do not know about you, but blue dots are sort of boring, nothing that interesting. Let's switch it up and create a more movie-esque AI-looking circle.
(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
  const color = '#8ED6FF';

  context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
  context.beginPath(); // Start to create a shape
  context.arc(0, 0, 50, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of 50, and make it 360 degrees (aka 2PI)
  context.strokeStyle = color; // Set the stroke color
  context.shadowColor = color; // Set the "glow" color
  context.lineWidth = 10; // Set the circle/ring width
  context.shadowBlur = 60; // Set the amount of "glow"
  context.stroke(); // Draw the shape and close it
})();

That is looking way better now. Next, let's start to animate this without hooking it up to anything. The standard way to do this is to create a function that alters the canvas each time the DOM is able to render itself. We do that by creating a function which passes a reference to itself to window.requestAnimationFrame. requestAnimationFrame takes any function and executes it once the window is focused and has processing power to render another frame. This is a little confusing, but it is the best way to create smooth animations. Check out the example below:

(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
  const color = '#8ED6FF';

  const drawCircle = () => {
    context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
    context.beginPath(); // Start to create a shape
    context.arc(0, 0, 50, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of 50, and make it 360 degrees (aka 2PI)
    context.strokeStyle = color; // Set the stroke color
    context.shadowColor = color; // Set the "glow" color
    context.lineWidth = 10; // Set the circle/ring width
    context.shadowBlur = 60; // Set the amount of "glow"
    context.stroke(); // Draw the shape and close it

    window.requestAnimationFrame(drawCircle); // Continue drawing circle
  };

  window.requestAnimationFrame(drawCircle); // Start animation
})();

This will not create any type of animation yet, but if you put a console.log inside the drawCircle function you would see a bunch of logs in the console, probably close to 60 times per second. The next step is to add some state to this function and make it change size. We can do that by creating a size variable with an integer that we change up and down, plus another boolean variable called grow to keep track of the direction in which we will change size.
(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
  const color = '#8ED6FF';

  let size = 50; // Default size is the minimum size
  let grow = true;

  const drawCircle = () => {
    context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
    context.beginPath(); // Start to create a shape
    context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of `size`, and make it 360 degrees (aka 2PI)
    context.strokeStyle = color; // Set the stroke color
    context.shadowColor = color; // Set the "glow" color
    context.lineWidth = 10; // Set the circle/ring width
    context.shadowBlur = size + 10; // Set the amount of "glow"
    context.stroke(); // Draw the shape and close it

    // Check if the size needs to grow or shrink
    if (size <= 50) { // Minimum size
      grow = true;
    } else if (size >= 75) { // Maximum size
      grow = false;
    }

    // Grow or shrink the size
    size = size + (grow ? 1 : -1);

    window.requestAnimationFrame(drawCircle); // Continue drawing circle
  };

  window.requestAnimationFrame(drawCircle); // Start animation
})();

Refreshing the page and seeing nothing happen might be a little disheartening, but do not worry, you are not the only one! Canvases require being cleared before being drawn upon again. Now we have created a way to change and update the canvas, but we need to introduce a way to clear it.

(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
  const color = '#8ED6FF';

  let size = 50; // Default size is the minimum size
  let grow = true;

  const drawCircle = () => {
    context.clearRect(0, 0, canvas.width, canvas.height); // Clear the contents of the canvas starting from [0, 0] all the way to the [totalWidth, totalHeight]
    context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
    context.beginPath(); // Start to create a shape
    context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of `size`, and make it 360 degrees (aka 2PI)
    context.strokeStyle = color; // Set the stroke color
    context.shadowColor = color; // Set the "glow" color
    context.lineWidth = 10; // Set the circle/ring width
    context.shadowBlur = size + 10; // Set the amount of "glow"
    context.stroke(); // Draw the shape and close it

    // Check if the size needs to grow or shrink
    if (size <= 50) { // Minimum size
      grow = true;
    } else if (size >= 75) { // Maximum size
      grow = false;
    }

    // Grow or shrink the size
    size = size + (grow ? 1 : -1);

    window.requestAnimationFrame(drawCircle); // Continue drawing circle
  };

  window.requestAnimationFrame(drawCircle); // Start animation
})();

Now things may seem even more broken. Remember that context.translate function? That is setting the origin of all the commands to the middle of the canvas. To prevent that from messing up our canvas when we try to clear it, we will need to save the state of the canvas and restore it beforehand.
(() => {
  const canvas = document.querySelector('canvas');
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
  const color = '#8ED6FF';

  let size = 50; // Default size is the minimum size
  let grow = true;

  const drawCircle = () => {
    context.restore(); // Restore previous canvas state, does nothing if it wasn't saved before
    context.save(); // Saves it until the next time `drawCircle` is called
    context.clearRect(0, 0, canvas.width, canvas.height); // Clear the contents of the canvas starting from [0, 0] all the way to the [totalWidth, totalHeight]
    context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
    context.beginPath(); // Start to create a shape
    context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of `size`, and make it 360 degrees (aka 2PI)
    context.strokeStyle = color; // Set the stroke color
    context.shadowColor = color; // Set the "glow" color
    context.lineWidth = 10; // Set the circle/ring width
    context.shadowBlur = size + 10; // Set the amount of "glow"
    context.stroke(); // Draw the shape and close it

    // Check if the size needs to grow or shrink
    if (size <= 50) { // Minimum size
      grow = true;
    } else if (size >= 75) { // Maximum size
      grow = false;
    }

    // Grow or shrink the size
    size = size + (grow ? 1 : -1);

    window.requestAnimationFrame(drawCircle); // Continue drawing circle
  };

  window.requestAnimationFrame(drawCircle); // Start animation
})();

Oh snap! Now you have got some seriously cool canvas animation going on. The final code should look like this:

<!doctype html>
<html>
<head>
  <title>Demo</title>
  <style>
    body {
      padding: 0;
      margin: 0;
      background-color: #000;
    }
  </style>
</head>
<body>
  <canvas></canvas>
  <script>
    (() => {
      const canvas = document.querySelector('canvas');
      canvas.width = window.innerWidth;
      canvas.height = window.innerHeight;

      const context = canvas.getContext('2d'); // Set the context to create a 2D graphic
      const color = '#8ED6FF';

      let size = 50; // Default size is the minimum size
      let grow = true;

      const drawCircle = () => {
        context.restore();
        context.save();
        context.clearRect(0, 0, canvas.width, canvas.height); // Clear the contents of the canvas starting from [0, 0] all the way to the [totalWidth, totalHeight]
        context.translate(canvas.width / 2, canvas.height / 2); // The starting point for the graphic to be in the middle
        context.beginPath(); // Start to create a shape
        context.arc(0, 0, size, 0, 2 * Math.PI, false); // Draw a circle, at point [0, 0] with a radius of `size`, and make it 360 degrees (aka 2PI)
        context.strokeStyle = color; // Set the stroke color
        context.shadowColor = color; // Set the "glow" color
        context.lineWidth = 10; // Set the circle/ring width
        context.shadowBlur = size + 10; // Set the amount of "glow"
        context.stroke(); // Draw the shape and close it

        // Check if the size needs to grow or shrink
        if (size <= 50) { // Minimum size
          grow = true;
        } else if (size >= 75) { // Maximum size
          grow = false;
        }

        // Grow or shrink the size
        size = size + (grow ? 1 : -1);

        window.requestAnimationFrame(drawCircle); // Continue drawing circle
      };

      window.requestAnimationFrame(drawCircle); // Start animation
    })();
  </script>
</body>
</html>

Hopefully, this gives you a great idea of how to get started and create some really cool animations. The next logical step for this example is to hook it up to audio so that it can react to voice like your own personal AI.
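As a teaser for that next step, here is one possible sketch, untested and simplified, of how the size variable could be driven by microphone volume using the Web Audio API instead of the grow/shrink logic. The 50 to 75 pixel range mirrors the animation above; everything else in this snippet (names, the averaging approach) is an assumption rather than part of the original tutorial, and in 2016-era Chrome you may need the webkitAudioContext prefix:

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const audioContext = new AudioContext();
  const analyser = audioContext.createAnalyser();
  audioContext.createMediaStreamSource(stream).connect(analyser);
  analyser.fftSize = 256;
  const frequencies = new Uint8Array(analyser.frequencyBinCount);

  const sizeFromAudio = () => {
    analyser.getByteFrequencyData(frequencies); // Sample the current spectrum
    // Average loudness across all bins, in the 0 to 255 range
    const volume = frequencies.reduce((sum, value) => sum + value, 0) / frequencies.length;
    return 50 + (volume / 255) * 25; // Map loudness onto the 50 to 75 pixel range
  };

  // Inside drawCircle, replace the grow/shrink logic with:
  // size = sizeFromAudio();
});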
Author Dylan Frankland is a frontend engineer at Narvar. He is an agile web developer with over 4 years of experience in developing and designing for start-ups and medium-sized businesses to create functional, fast, and beautiful web experiences.
Create Dynamic Tilemaps in Duality (C#) – Part II

Lőrinc Serfőző
21 Nov 2016
7 min read
Introduction

In the first part of this tutorial, we learned how static tilemaps are created in the Duality game engine. Let's now take it to a higher level. In this article, the process of implementing a custom component is described. The new component modifies the tilemap at runtime, controlled by mouse clicks. Although it is a simple example, the method enables more advanced uses, such as procedural generation of game levels or destructible terrain. The repository containing the finished project is available on GitHub.

Creating Custom Components

It has already been mentioned that all functionality in Duality is implemented via plugins. To create a custom component, you have to compile a new Core Plugin. Although it might seem convoluted at first, Duality does most of the work and setup, so let's dive in! First, open the Visual Studio Solution by clicking on the 'Open Sourcecode' icon button, which is the second one on the toolbar. Alternatively, just open the ProjectPlugins.sln file located in '{Duality Project Dir}'. Once the IDE is open, inspect the sole loaded project called 'CorePlugin'. It has two source files (not counting AssemblyInfo.cs), and references to the Duality assembly. CorePlugin.cs contains a class inherited from CorePlugin. It's necessary to have this class in the solution to identify the assembly as a plugin, but it usually does not need to be modified, because game logic is implemented via custom components most of the time. Let's have a look at the other class located in 'YourCustomComponentType.cs':

using System;
using System.Collections.Generic;
using System.Linq;
using Duality;

namespace Duality_
{
    public class YourCustomComponentType : Component
    {
    }
}

There are a few important things to notice here. The custom component must be a subclass of Component. It has to be declared public; otherwise the new component wouldn't appear in the editor. Don't modify the code for now, but hit F7 to compile the assembly. Behind the scenes, the output assembly named 'GamePlugin.core.dll' (along with the debug symbol file and the xml documentation) is copied to '{Duality Project Dir}', and Dualitor loads them. The new component is available for being added to the game, like any other component:

Adding the custom component

At this point, the new component could be added to GameObjects, but it would not do anything yet. The next sections are about how to implement the game logic in the component.

Laying the structure of ChangeTilemapCmp

Adding a reference of the Tilemap Plugin to the VS project

To access tilemap-related classes and information in the component, the Visual Studio project must have a reference to the assembly containing the Tilemap Plugin. Bring up the context menu on the 'CorePlugin' project's 'References' item in the Solution Explorer, and select the 'Add Reference' item. A dialog should appear; browse 'Tilemaps.core.dll' from the '{Duality Project Dir}'.

Adding the Tilemap Plugin as a reference

Defining the internal structure of ChangeTilemapCmp

Rename the YourCustomComponentType class to ChangeTilemapCmp, and do the same with the container .cs file to stay consistent.
First describe the function signatures used in the component:

using Duality;
using Duality.Components;
using Duality.Editor;
using Duality.Input;
using Duality.Plugins.Tilemaps;

namespace Duality_
{
    [EditorHintCategory ("Tilemaps Tutorial")] // [1]
    public class ChangeTilemapCmp : Component, ICmpUpdatable
    { // [2]
        private Tilemap TilemapInScene { get; }
        private TilemapRenderer TilemapRendererInScene { get; }
        private Camera MainCamera { get; }

        void ICmpUpdatable.OnUpdate () // [3]
        {
        }

        private Vector2 GetWorldCoordOfMouse () // [4]
        {
        }

        private void ChangeTilemap (Vector2 worldPos) // [5]
        {
        }
    }
}

The EditorHintCategory attribute describes in which folder the component should appear when adding it. Several get-only properties are used to access specific components in the scene. See their implementation below. The OnUpdate function implements the ICmpUpdatable interface. It's called upon every update of the game loop. GetWorldCoordOfMouse returns the current position of the mouse transformed into the game world coordinates. ChangeTilemap, as its name suggests, changes the tilemap at a specified world location. It should not do anything if the location is not on the tilemap. After designing our little class, let's implement the details!

Implementing the game logic in ChangeTilemapCmp

Implementing the get-only properties

The Tilemap and TilemapRenderer components are obviously needed by our logic. Since there is only one instance of them in the scene, it's easy to find them by type:

private Tilemap TilemapInScene => this.GameObj.ParentScene.FindComponent<Tilemap> ();
private TilemapRenderer TilemapRendererInScene => this.GameObj.ParentScene.FindComponent<TilemapRenderer>();
private Camera MainCamera => this.GameObj.ParentScene.FindComponent<Camera> ();

Implementing the OnUpdate method

This method is called for every frame, usually 60 times a second. We check if the left mouse button was pressed in that frame, and if yes, act accordingly:

void ICmpUpdatable.OnUpdate ()
{
    if (DualityApp.Mouse.ButtonHit (MouseButton.Left))
        ChangeTilemap (GetWorldCoordOfMouse ());
}

Implementing the GetWorldCoordOfMouse method

Here a simple transformation is needed. The DualityApp.Mouse.Pos property returns the mouse position on the screen. After a null-check, get the mouse position on the screen; then convert it to world position using the Camera's GetSpaceCoord method.

private Vector2 GetWorldCoordOfMouse ()
{
    if (MainCamera == null)
        return Vector2.Zero;

    Vector2 mouseScreenPos = DualityApp.Mouse.Pos;
    return MainCamera.GetSpaceCoord (mouseScreenPos).Xy;
}

Implementing the ChangeTilemap method

The main logic of ChangeTilemapCmp is implemented in this method:

private void ChangeTilemap (Vector2 worldPos)
{
    // [1]
    Tilemap tilemap = TilemapInScene;
    TilemapRenderer tilemapRenderer = TilemapRendererInScene;
    if (tilemap == null || tilemapRenderer == null)
    {
        Log.Game.WriteError("There are no tilemaps in the current scene!");
        return;
    }

    // [2]
    Vector2 localPos = worldPos - tilemapRenderer.GameObj.Transform.Pos.Xy;

    // [3]
    Point2 tilePos = tilemapRenderer.GetTileAtLocalPos (localPos, TilePickMode.Reject);
    if (tilePos.X < 0 || tilePos.Y < 0)
        return;

    // [4]
    Tile clickedTile = tilemap.Tiles[tilePos.X, tilePos.Y];
    int newTileIndex = clickedTile.BaseIndex == 0 ? 1 : 0;
    clickedTile.BaseIndex = newTileIndex;
    tilemap.SetTile(tilePos.X, tilePos.Y, clickedTile);
}

First check if there is any Tilemap and TilemapRenderer in the Scene.
Transform the world position to the tilemap's local coordinate system by subtracting the tilemap's position from it. Acquire the indexed location of the clicked tile by the GetTileAtLocalPos method of the renderer. It returns a Point2, an integer-based vector, which represents the row and column of the tile in the tilemap. Because TilePickMode.Reject is passed as the second argument, it returns {-1, -1} when the 'player' clicks next to the tilemap. We should check that too, and return if that is the case. Get the Tile struct at the clicked position, and flip its tile index. Index 0 means the green tile, while index 1 refers to the red one. Afterwards, the tile has to be assigned back to the Tilemap, because in C#, structs are copied by value, not by reference. After finishing this, do not forget to compile the source again. Adding and testing ChangeTilemapCmp Create a new GameObject with ChangeTilemapCmp on it in the Scene View: Adding ChangeTilemapCmp Then save the scene with the floppy icon of the Scene View, and start the game by clicking the 'Run Game' icon button located on the toolbar. Test the game by clicking on the tilemap; it should change color on the tile you click. Running the game Summary Thank you for following along with this guide! Hopefully it helped you with the Duality game engine and its Tilemap Plugin. Remember that the above was a very simple example, but more sophisticated logic is achievable using these tools. If you want some inspiration, have a look at the entries of the Duality Tilemaps Jam, which took place in September, 2016. Author Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics, an interdisciplinary field between the more traditional mechanical engineering, electrical engineering and informatics, and has quickly grown a passion toward software development. He is a supporter of open source and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for users.
How to Create 2D Navigation with the Godot Engine

George Marques
18 Nov 2016
6 min read
The Godot Engine has built-in functionalities that make it easy to create navigation in the game world. This post will cover how to make an object follow a fixed path and how to go between two points avoiding the obstacles in the way.

Following a fixed path

Godot has a couple of nodes that help you create a path that can be followed by another node. One use of this is to make an NPC follow a fixed path in the map. Assuming you have a new project, create a Path2D node. You can then use the controls on the toolbar to make a curve representing the path you will need to follow.

Curve buttons

After adding the points and adjusting the curves, you will have something like the following:

Path curve

Now you need to add a PathFollow2D node as a child of Path2D. This will do the actual movement based on the Offset property. Then add an AnimationPlayer node as child of the PathFollow2D. Create a new animation in the player. Set the length to five seconds. Add a keyframe on the start with value 0 for the Unit Offset property of PathFollow2D. You can do that by clicking on the key icon next to the property in the Inspector dock. Then go to the end of the animation and add a frame with the value of 1. This will make Unit Offset go from 0 to 1 over the period of the animation (five seconds in this case). Set the animation to loop and autoplay.

To see the effect in practice, add a Sprite node as child of PathFollow2D. You can use the default Godot icon as the texture for it. Enable the Visible Navigation under the Debug Options menu (last button in the top center bar) to make it easier to see. Save the scene and play it to see the Godot robot run around the screen:

Sprite following path

That's it! Making an object follow a fixed path is quite easy with the built-in resources of Godot. Not even scripting is needed for this example.

Navigation and Avoiding Obstacles

Sometimes you don't have a fixed path to follow. It might change dynamically, or your AI must determine the path and avoid the obstacles and walls that might be in the way. Don't worry, because Godot will also help you in this regard.

Create a new scene and add a Node2D as the root. Then add a Navigation2D as its child. This will be responsible for creating the paths for you. You now need to add a NavigationPolygonInstance node as child of the Navigation2D. This will hold the polygon used for navigation, to determine what the passable areas are. To create the polygon itself, click on the pencil button on the toolbar (it will appear only if the NavigationPolygonInstance node is selected). The first time you try to add a point, the editor will warn you that there's no NavigationPolygon resource and will offer to create one. Click on the Create button and all will be set.

Navigation resource warning

First you need to create the outer boundaries of the navigable area. The polygon can have as many points as you need, but it does need to be a closed polygon. Note that you can right-click on points to remove them and hold Ctrl while clicking on lines to add points. Once you finish the boundaries, click the pencil button again and create polygons inside it to make the impassable areas, such as the walls. You will end up with something like the following:

Navigation polygon

Add a Sprite node as child of the root Node2D and set the texture of it (you can use the default Godot icon). This will be the object navigating through the space. Now add the following script to the root node.
The most important detail here is the get_simple_path function, which returns a list of points to travel from start to end without passing through the walls.

extends Node2D

# Global variables
var start = Vector2()
var end = Vector2()
var path = []
var speed = 1
var transition = 0
var path_points = 0

func _ready():
    # Enable the processing of user input
    set_process_input(true)
    # Enable the general process callback
    set_process(true)

func _input(event):
    # If the user presses a mouse button
    if event.type == InputEvent.MOUSE_BUTTON and event.pressed:
        if event.button_index == BUTTON_LEFT:
            # If it's the left button, set the starting point
            start = event.global_pos
        elif event.button_index == BUTTON_RIGHT:
            # If it's the right button, set the ending point
            end = event.global_pos
        # Reset the sprite position
        get_node("Sprite").set_global_pos(start)
        transition = 0

func _process(delta):
    # Get the list of points that compose the path
    path = get_node("Navigation2D").get_simple_path(start, end)
    # If the path has less points than it did before, reset the transition point
    if path.size() < path_points:
        transition = 0
    # Update the current amount of points
    path_points = path.size()
    # If there's less than 2 points, nothing can be done
    if path_points < 2:
        return
    var sprite = get_node("Sprite")
    # This uses the linear interpolation function from Vector2 to move the sprite at a constant
    # rate through the points of the path. Transition is a value from 0 to 1 to be used as a ratio.
    sprite.set_global_pos(sprite.get_global_pos().linear_interpolate(path[1], transition))
    start = sprite.get_global_pos()
    transition += speed * delta
    # Reset the transition when it gets to the point.
    if transition > 1:
        transition = 0
    # Update the node so the _draw() function is called
    update()

func _draw():
    # This draws a white circle with a radius of 10px for each point in the path
    for p in path:
        draw_circle(p, 10, Color(1, 1, 1))

Enable the Visible Navigation in the Debug Options to help you visualize the effect. Save and run the scene. You can then left-click somewhere to define a starting point, and right-click to define the ending point. The points will be marked as white circles, and the Sprite will follow the path, clearing the intermediate points as it travels along.

Navigating Godot bot

Conclusion

The Godot Engine has many features to ease the development of all kinds of games. The navigation functions have many uses in top-down games, be it an RPG or an RTS. Tilesets also embed navigation polygons that can be used in a similar fashion.

About the Author:

George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He's also one of the regular contributors to the Godot Engine, helping to solve bugs and add new features to the software, while also giving solutions to the community for the questions they have.
Introducing Algorithm Design Paradigms

Packt
18 Nov 2016
10 min read
In this article by David Julian and Benjamin Baka, authors of the book Python Data Structures and Algorithms, we will discern three broad approaches to algorithm design. They are as follows:

Divide and conquer
Greedy algorithms
Dynamic programming

(For more resources related to this topic, see here.)

As the name suggests, the divide and conquer paradigm involves breaking a problem into smaller subproblems, and then in some way combining the results to obtain a global solution. This is a very common and natural problem solving technique and is, arguably, the most used approach to algorithm design.

Greedy algorithms often involve optimization and combinatorial problems; the classic example is applying them to the traveling salesperson problem, where a greedy approach always chooses the closest destination first. This shortest path strategy involves finding the best solution to a local problem in the hope that this will lead to a global solution.

The dynamic programming approach is useful when our subproblems overlap. This is different from divide and conquer. Rather than breaking our problem into independent subproblems, with dynamic programming, intermediate results are cached and can be used in subsequent operations. Like divide and conquer, it uses recursion. However, dynamic programming allows us to compare results at different stages. This can have a performance advantage over divide and conquer for some problems, because it is often quicker to retrieve a previously calculated result from memory rather than having to recalculate it.

Recursion and backtracking

Recursion is particularly useful for divide and conquer problems; however, it can be difficult to understand exactly what is happening, since each recursive call is itself spinning off other recursive calls. At the core of a recursive function are two types of cases: base cases, which tell the recursion when to terminate, and recursive cases, which call the function they are in. A simple problem that naturally lends itself to a recursive solution is calculating factorials. The recursive factorial algorithm defines two cases: the base case, when n is zero, and the recursive case, when n is greater than zero. A typical implementation is shown in the following code:

def factorial(n):
    # test for a base case
    if n == 0:
        return 1
    # make a calculation and a recursive call
    f = n * factorial(n-1)
    print(f)
    return(f)

factorial(4)

This code prints out the digits 1, 2, 6, 24. To calculate 4!, we require four recursive calls plus the initial parent call. On each recursion, a copy of the method's variables is stored in memory. Once the method returns, it is removed from memory.

It may not necessarily be clear whether recursion or iteration is a better solution to a particular problem; after all, they both repeat a series of operations, and both are very well suited to divide and conquer approaches to algorithm design. An iteration churns away until the problem is done. Recursion breaks the problem down into smaller chunks and then combines the results. Iteration is often easier for programmers because the control stays local to a loop, whereas recursion can more closely represent mathematical concepts such as factorials. Recursive calls are stored in memory, whereas iterations are not. This creates a tradeoff between processor cycles and memory usage, so choosing which one to use may depend on whether the task is processor or memory intensive.
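For comparison, here is a minimal sketch of the same factorial calculation written iteratively; it performs the same multiplications but keeps only a single running value in memory instead of a stack of recursive calls:

def factorial_iterative(n):
    # Accumulate the product in one variable instead of on the call stack
    result = 1
    for i in range(2, n + 1):
        result = result * i
    return result

print(factorial_iterative(4))  # prints 24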
The following table outlines the key differences between recursion and iteration.

Recursion | Iteration
Terminates when a base case is reached | Terminates when a defined condition is met
Each recursive call requires space in memory | Each iteration is not stored in memory
An infinite recursion results in a stack overflow error | An infinite iteration will run while the hardware is powered
Some problems are naturally better suited to recursive solutions | Iterative solutions may not always be obvious

Backtracking

Backtracking is a form of recursion that is particularly useful for types of problems such as traversing tree structures, where we are presented with a number of options at each node, from which we must choose one. Subsequently, we are presented with a different set of options, and depending on the series of choices made, either a goal state or a dead end is reached. If it is the latter, we must backtrack to a previous node and traverse a different branch. Backtracking is a divide and conquer method for exhaustive search. Importantly, backtracking prunes branches that cannot give a result.

An example of backtracking is given by the following. Here, we have used a recursive approach to generating all the possible permutations of a given string, s, of a given length n:

def bitStr(n, s):
    if n == 1:
        return s
    return [digit + bits for digit in bitStr(1, s) for bits in bitStr(n - 1, s)]

print(bitStr(3, 'abc'))

This generates the following output:

['aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']

Note the double list comprehension and the two recursive calls within this comprehension. This recursively concatenates each element of the initial sequence, returned when n = 1, with each element of the string generated in the previous recursive call. In this sense, it is backtracking to uncover previously ungenerated combinations. The final list that is returned contains all n-letter combinations of the initial string.

Divide and conquer – long multiplication

For recursion to be more than just a clever trick, we need to understand how to compare it to other approaches, such as iteration, and to understand when its use will lead to a faster algorithm. An iterative algorithm that we are all familiar with is the procedure we learned in primary math classes for multiplying two large numbers, that is, long multiplication. If you remember, long multiplication involves iterative multiplying and carry operations followed by a shifting and addition operation. Our aim here is to examine ways to measure how efficient this procedure is and attempt to answer the question: is this the most efficient procedure we can use for multiplying two large numbers together?

Multiplying two 4-digit numbers together requires 16 multiplication operations, and we can generalize to say that an n-digit number requires, approximately, n^2 multiplication operations.

This method of analyzing algorithms, in terms of the number of computational primitives such as multiplication and addition, is important because it gives us a way to understand the relationship between the time it takes to complete a certain computation and the size of the input to that computation. In particular, we want to know what happens when the input, the number of digits, n, is very large. Can we do better?

A recursive approach

It turns out that in the case of long multiplication the answer is yes; there are in fact several algorithms for multiplying large numbers that require fewer operations. One of the most well-known alternatives to long multiplication is the Karatsuba algorithm, published in 1962.
This takes a fundamentally different approach: rather than iteratively multiplying single digit numbers, it recursively carries out multiplication operations on progressively smaller inputs. Recursive programs call themselves on smaller subsets of the input. The first step in building a recursive algorithm is to decompose a large number into several smaller numbers. The most natural way to do this is to simply split the number into halves: the first half comprising the most significant digits and the second half comprising the least significant digits. For example, our four-digit number, 2345, becomes a pair of two-digit numbers, 23 and 45. We can write a more general decomposition of any two n-digit numbers, x and y, using the following, where m is any positive integer less than n.

For the number x:

x = 10^m * a + b

For the number y:

y = 10^m * c + d

So, we can now rewrite our multiplication problem x times y as follows:

xy = (10^m * a + b)(10^m * c + d)

When we expand and gather like terms, we get the following:

xy = 10^(2m) * ac + 10^m * (ad + bc) + bd

More conveniently, we can write it like this (equation 1):

xy = 10^(2m) * z2 + 10^m * z1 + z0

Here,

z2 = ac
z1 = ad + bc
z0 = bd

It should be pointed out that this suggests a recursive approach to multiplying two numbers, since this procedure itself involves multiplication. Specifically, the products ac, ad, bc, and bd all involve numbers smaller than the input number, and so it is conceivable that we could apply the same operation as a partial solution to the overall problem. This algorithm, so far, consists of four recursive multiplication steps, and it is not immediately clear if it will be faster than the classic long multiplication approach.

What we have discussed so far with regard to the recursive approach to multiplication was well known to mathematicians since the late 19th century. The Karatsuba algorithm improves on this by making the following observation. We really only need to know three quantities, z2 = ac, z1 = ad + bc, and z0 = bd, to solve equation 1. We need to know the values of a, b, c, and d only insofar as they contribute to the sums and products involved in calculating these three quantities. This suggests the possibility that perhaps we can reduce the number of recursive steps. It turns out that this is indeed the situation.

Since the products ac and bd are already in their simplest form, it seems unlikely that we can eliminate these calculations. We can, however, make the following observation:

(a + b)(c + d) = ac + ad + bc + bd

When we subtract the quantities ac and bd, which we have calculated in the previous recursive step, we get the quantity we need, namely ad + bc:

ad + bc = (a + b)(c + d) - ac - bd

This shows that we can indeed compute the sum of ad and bc without separately computing each of the individual quantities. In summary, we can improve on equation 1 by reducing the number of recursive steps from four to three. These three steps are as follows:

Recursively calculate ac.
Recursively calculate bd.
Recursively calculate (a + b)(c + d) and subtract ac and bd.
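To make these three steps concrete before looking at the implementation, here is a small worked example in Python; the input numbers are chosen purely for illustration:

# Multiply x = 2345 and y = 6789, splitting each 4-digit number with m = 2
x, y = 2345, 6789
a, b = divmod(x, 10**2)            # a = 23, b = 45
c, d = divmod(y, 10**2)            # c = 67, d = 89

z2 = a * c                         # step 1: 23 * 67 = 1541
z0 = b * d                         # step 2: 45 * 89 = 4005
z1 = (a + b) * (c + d) - z2 - z0   # step 3: 68 * 156 - 1541 - 4005 = 5062

result = (10**4) * z2 + (10**2) * z1 + z0
print(result, result == x * y)     # 15920205 True

Note that z1 comes out as 5062, which is exactly ad + bc (23 * 89 + 45 * 67), even though neither ad nor bc was ever computed on its own.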
The following code shows a Python implementation of the Karatsuba algorithm:

from math import log10, ceil

def karatsuba(x, y):
    # The base case for recursion
    if x < 10 or y < 10:
        return x * y

    # sets n, the number of digits in the highest input number
    n = max(int(log10(x) + 1), int(log10(y) + 1))

    # rounds up n/2
    n_2 = int(ceil(n / 2.0))

    # adds 1 if n is uneven
    n = n if n % 2 == 0 else n + 1

    # splits the input numbers
    a, b = divmod(x, 10**n_2)
    c, d = divmod(y, 10**n_2)

    # applies the three recursive steps
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba((a + b), (c + d)) - ac - bd

    # performs the multiplication
    return (((10**n) * ac) + bd + ((10**n_2) * ad_bc))

To satisfy ourselves that this does indeed work, we can run the following test function:

import random

def test():
    for i in range(1000):
        x = random.randint(1, 10**5)
        y = random.randint(1, 10**5)
        expected = x * y
        result = karatsuba(x, y)
        if result != expected:
            return("failed")
    return('ok')

Summary

In this article, we looked at a recursive approach to multiplying large numbers. We also saw how to use backtracking for exhaustive search and generating strings.

Resources for Article:

Further resources on this subject:
Python Data Structures [article]
How is Python code organized [article]
Algorithm Analysis [article]
Suggesters for Improving User Search Experience

Packt
18 Nov 2016
11 min read
In this article by Bharvi Dixit, the author of the book Mastering ElasticSearch 5.0 - Third Edition, we will focus on improving the user search experience using suggesters, which allow you to correct user query spelling mistakes and build efficient autocomplete mechanisms. First, let's look at the query possibilities and the responses returned by Elasticsearch. We will try to show you the general principles, and then we will get into more details about each of the available suggesters.

(For more resources related to this topic, see here.)

Using the suggester under search

Before Elasticsearch 5.0, it was possible to get suggestions for a given text by using a dedicated _suggest REST endpoint. In Elasticsearch 5.0, this dedicated _suggest endpoint has been deprecated in favor of using the suggest API. In this release, suggest-only search requests have been optimized for performance, and we can now execute suggestions through the _search endpoint. Similar to the query object, we can use a suggest object, and what we need to provide inside the suggest object is the text to analyze and the type of suggester to use (term or phrase). So if we would like to get suggestions for the words chrimes in wordl (note that we've misspelled the words on purpose), we would run the following query:

curl -XPOST "http://localhost:9200/wikinews/_search?pretty" -d'
{
  "suggest": {
    "first_suggestion": {
      "text": "chrimes in wordl",
      "term": {
        "field": "title"
      }
    }
  }
}'

The dedicated _suggest endpoint has been deprecated in Elasticsearch version 5.0 and might be removed in future releases, so be advised to use suggestion requests under the _search endpoint. All the examples covered in this article use the same _search endpoint for suggest requests.

As you can see, the suggestion request is wrapped inside the suggest object and is sent to Elasticsearch in its own object with the name we chose (in the preceding case, it is first_suggestion). Next, we specify the text for which we want the suggestion to be returned, using the text parameter. Finally, we add the suggester object, which is either term or phrase. The suggester object contains its configuration, which for the term suggester used in the preceding command is the field we want to use for suggestions (the field property). We can also send more than one suggestion at a time by adding multiple suggestion names. For example, if in addition to the preceding suggestion we would also like to include a suggestion for the word arest, we would use the following command:

curl -XPOST "http://localhost:9200/wikinews/_search?pretty" -d'
{
  "suggest": {
    "first_suggestion": {
      "text": "chrimes in wordl",
      "term": {
        "field": "title"
      }
    },
    "second_suggestion": {
      "text": "arest",
      "term": {
        "field": "text"
      }
    }
  }
}'

Understanding the suggester response

Let's now look at the example response for the suggestion query we have executed.
Although the response will differ for each suggester type, let's look at the response returned by Elasticsearch for the first command we've sent in the preceding code, which used the term suggester:

{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "suggest" : {
    "first_suggestion" : [
      {
        "text" : "chrimes",
        "offset" : 0,
        "length" : 7,
        "options" : [
          { "text" : "crimes", "score" : 0.8333333, "freq" : 36 },
          { "text" : "choices", "score" : 0.71428573, "freq" : 2 },
          { "text" : "chrome", "score" : 0.6666666, "freq" : 2 },
          { "text" : "chimps", "score" : 0.6666666, "freq" : 1 },
          { "text" : "crimea", "score" : 0.6666666, "freq" : 1 }
        ]
      },
      {
        "text" : "in",
        "offset" : 8,
        "length" : 2,
        "options" : [ ]
      },
      {
        "text" : "wordl",
        "offset" : 11,
        "length" : 5,
        "options" : [
          { "text" : "world", "score" : 0.8, "freq" : 436 },
          { "text" : "words", "score" : 0.8, "freq" : 6 },
          { "text" : "word", "score" : 0.75, "freq" : 9 },
          { "text" : "worth", "score" : 0.6, "freq" : 21 },
          { "text" : "worst", "score" : 0.6, "freq" : 16 }
        ]
      }
    ]
  }
}

As you can see in the preceding response, the term suggester returns a list of possible suggestions for each term that was present in the text parameter of our first_suggestion section. For each term, the term suggester will return an array of possible suggestions with additional information. Looking at the data returned for the wordl term, we can see the original word (the text parameter), its offset in the original text parameter (the offset parameter), and its length (the length parameter).

The options array contains suggestions for the given word and will be empty if Elasticsearch doesn't find any suggestions. Each entry in this array is a suggestion and is characterized by the following properties:

text: This is the text of the suggestion.
score: This is the suggestion score; the higher the score, the better the suggestion will be.
freq: This is the frequency of the suggestion. The frequency represents how many times the word appears in documents in the index we are running the suggestion query against. The higher the frequency, the more documents will contain the suggested word in their fields and the higher the chance that the suggestion is the one we are looking for.

Please remember that the phrase suggester response will differ from the one returned by the term suggester.

The term suggester

The term suggester works on the basis of the edit distance, which means that the fewer characters that need to be changed or removed to turn the suggestion into the original word, the better that suggestion is considered to be. For example, let's take the words worl and work. In order to change the worl term to work, we need to change the l letter to k, so it means a distance of one. Of course, the text provided to the suggester is analyzed, and then terms are chosen to be suggested.

The phrase suggester

The term suggester provides a great way to correct user spelling mistakes on a per-term basis. However, if we would like to get back whole phrases, it is not possible to do that when using this suggester. This is why the phrase suggester was introduced. It is built on top of the term suggester and adds additional phrase calculation logic to it so that whole phrases can be returned instead of individual terms. It uses N-gram based language models to calculate how good the suggestion is, and it will probably be a better choice than the term suggester when we want whole-phrase suggestions.
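The article does not show a phrase suggester request, but its structure mirrors the term suggester: we simply use a phrase object instead of a term object. A minimal sketch against the same index could look like the following (the field choice here is illustrative):

curl -XPOST "http://localhost:9200/wikinews/_search?pretty" -d'
{
  "suggest": {
    "phrase_suggestion": {
      "text": "chrimes in wordl",
      "phrase": {
        "field": "text"
      }
    }
  }
}'

Instead of per-term options, the response for such a request contains whole corrected phrases, such as crimes in world, scored by the N-gram language model described next.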
The N-gram approach divides terms in the index into grams: word fragments built of one or more letters. For example, if we would like to divide the word mastering into bi-grams (two-letter N-grams), it would look like this: ma as st te er ri in ng.

The completion suggester

So far we have looked at the term and phrase suggesters, which are used to correct misspelled input. The completion suggester is completely different: it is a prefix-based suggester that allows us to create autocomplete (search-as-you-type) functionality in a very performance-effective way, because complicated structures are stored in the index instead of being calculated at query time. This suggester is not about correcting user spelling mistakes.

In Elasticsearch 5.0, the completion suggester has gone through a complete rewrite. Both the syntax and the data structures of the completion type field have been changed, and so has the response structure. Many exciting new features and speed optimizations have been introduced in the completion suggester. One of these features is making the completion suggester near real time, which allows deleted suggestions to be omitted from suggestion results as soon as they are deleted.

The logic behind the completion suggester

The prefix suggester is based on the data structure called a Finite State Transducer (FST) (for more information, refer to http://en.wikipedia.org/wiki/Finite_state_transducer). Although it is highly efficient, it may require significant resources to build on systems with large amounts of data in them: systems that Elasticsearch is perfectly suitable for. If we had to build such a structure on the nodes after each restart or cluster state change, we could lose performance. Because of this, the Elasticsearch creators decided to use an FST-like structure during index time and store it in the index so that it can be loaded into memory when needed.

Using the completion suggester

To use a prefix-based suggester, we need to properly index our data with a dedicated field type called completion. It stores the FST-like structure in the index. In order to illustrate how to use this suggester, let's assume that we want to create an autocomplete feature to allow us to show book authors, which we store in an additional index. In addition to authors' names, we want to return the identifiers of the books they wrote in order to search for them with an additional query. We start with creating the authors index by running the following command:

curl -XPUT "http://localhost:9200/authors" -d'
{
  "mappings": {
    "author": {
      "properties": {
        "name": {
          "type": "keyword"
        },
        "suggest": {
          "type": "completion"
        }
      }
    }
  }
}'

Our index will contain a single type called author. Each document will have two fields: the name field, which is the name of the author, and the suggest field, which is the field we will use for autocomplete. The suggest field is the one we are interested in; we've defined it using the completion type, which will result in storing the FST-like structure in the index.

Implementing your own autocompletion

The completion suggester has been designed to be a powerful and easily implemented solution for autocomplete, but it supports only prefix queries. Most of the time, autocomplete only needs to work as a prefix query; for example, if I type elastic, then I expect elasticsearch as a suggestion, not nonelastic. There are some use cases, though, where one wants to implement more general partial word completion. The completion suggester fails to fulfill this requirement.
The second limitation of the completion suggester is that it does not allow advanced queries and filters. To get rid of both these limitations, we are going to implement a custom autocomplete feature based on N-grams, which works in almost all scenarios.

Creating index

Let's create an index location-suggestion with the following settings and mappings:

curl -XPUT "http://localhost:9200/location-suggestion" -d'
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "nGram_filter": {
            "token_chars": [
              "letter",
              "digit",
              "punctuation",
              "symbol",
              "whitespace"
            ],
            "min_gram": "2",
            "type": "nGram",
            "max_gram": "20"
          }
        },
        "analyzer": {
          "nGram_analyzer": {
            "filter": [
              "lowercase",
              "asciifolding",
              "nGram_filter"
            ],
            "type": "custom",
            "tokenizer": "whitespace"
          },
          "whitespace_analyzer": {
            "filter": [
              "lowercase",
              "asciifolding"
            ],
            "type": "custom",
            "tokenizer": "whitespace"
          }
        }
      }
    }
  },
  "mappings": {
    "locations": {
      "properties": {
        "name": {
          "type": "text",
          "analyzer": "nGram_analyzer",
          "search_analyzer": "whitespace_analyzer"
        },
        "country": {
          "type": "keyword"
        }
      }
    }
  }
}'

Understanding the parameters

If you look carefully at the preceding curl request for creating the index, you will see that it contains both the settings and the mappings. We will now go through them one by one.

Configuring settings

Our settings contain two custom analyzers: nGram_analyzer and whitespace_analyzer. We have made the custom whitespace_analyzer using the whitespace tokenizer just to make sure that all the tokens are indexed in lowercase and ASCII-folded form. Our main interest is nGram_analyzer, which contains a custom filter nGram_filter consisting of the following parameters:

type: Specifies the type of token filter, which is nGram in our case.
token_chars: Specifies what kinds of characters are allowed in the generated tokens. Punctuation and special characters are generally removed from the token streams, but in our example we have intended to keep them. We have also kept whitespace, so that if a text contains United States and a user searches for u s, United States still appears in the suggestions.
min_gram and max_gram: These two attributes set the minimum and maximum length of substrings that will be generated and added to the lookup table. For example, according to our settings for the index, the token India will generate the following tokens:

[ "di", "dia", "ia", "in", "ind", "indi", "india", "nd", "ndi", "ndia" ]

Configuring mappings

The document type of our index is locations, and it has two fields, name and country. The most important thing to see is the way the analyzers have been defined for the name field, which will be used for autosuggestion. For this field, we have set the index analyzer to our custom nGram_analyzer, while the search analyzer is set to whitespace_analyzer. The index_analyzer parameter is no longer supported from Elasticsearch version 5.0 onward. Also, if you want to configure the search_analyzer property for a field, then you must configure the analyzer property too, the way we have shown in the preceding example.

Summary

In this article we focused on improving the user search experience. We started with the term and phrase suggesters, and then covered search-as-you-type, that is, the autocompletion feature, which is implemented using the completion suggester. We also saw the limitations of the completion suggester in handling advanced queries and partial matching, which we then solved by implementing our own custom completion using N-grams.
Resources for Article: Further resources on this subject: Searching Your Data [article] Understanding Mesos Internals [article] Big Data Analysis (R and Hadoop) [article]
Create Dynamic Tilemaps in Duality (C#) – Part I

Lőrinc Serfőző
18 Nov 2016
6 min read
Introduction

This article guides you through the process of creating tilemaps and changing them at runtime in the Duality game engine.

A few words about the Duality game engine

Duality is an open source, Windows-only 2D game engine based on OpenGL and written entirely in C#. Despite being a non-commercial project, it is a rather mature product, having been around since 2011, and it has proved capable of delivering professional games. Obviously, the feature bucket is somewhat limited compared to the major proprietary engines. On the other hand, this fact makes Duality an excellent learning tool, because the user often needs to implement logic at a lower level and also has the opportunity to inspect the source code of the engine. Thus, I would recommend this engine to game developers with an intermediate skill level who would like to have a take on open source and are not afraid of learning by doing.

This article does not describe the inner workings of Duality in detail, because these are well explained on the GitHub wiki of the project. If you are not familiar with the engine, please consider reviewing some of the key concepts described there. The repository containing the finished project is available on GitHub.

Required tools

First things first: the Duality editor only supports Windows at the moment. You will need a C# compiler and a text editor. Visual Studio 2013 or higher is recommended, but Xamarin Studio/MonoDevelop also works. Duality itself is written in C# 4, but this tutorial uses language features introduced in C# 6.

Creating a new project and initializing the Tilemaps plugin

Downloading the Duality binaries

The latest binary package of the engine is available on its main site. Unlike most game engines, it does not need a setup process. In fact, every single project is a self-contained application, including the editor. This might seem strange at first, but this architecture makes version control, migration, and having different engine versions for different projects very easy.

On top of this, it is also a self-building application. When you unzip the downloaded file, it contains only one executable: DualityEditor.exe. At its first run, it downloads the required packages from the main repository. The package system is based on the industry-standard NuGet format, and the packages are actually stored on NuGet.org. After accepting the license agreement and proceeding through the automated package install, the following screen should show up:

[Image: Dualitor]

The usage of the editor application, named Dualitor, is intuitive; however, if you are not familiar with it yet, the quick start guide might be worth checking out.

Adding the Tilemaps plugin to the project

Duality has a modular architecture; every piece of distinct functionality is organized in modules or, using the official terminology, plugins. Practically, every plugin is a .NET assembly that Duality loads and uses. There are two types of plugins: core plugins provide in-game functionality and services and are distributed with the game, while editor plugins are used by Dualitor and usually provide some sort of editing or content management functionality that aids the developer in creating the game.

Now let's install the Tilemaps plugin. This is done via the Package Management window, which is available from the File menu. The following steps describe the installation process:

1. Select the 'Online Repository' item from the 'View' dropdown.
2. Select the Tilemaps (Editor) entry from the list. Since it depends on the Tilemaps (Core) package, the latter will be installed as well.
3. Click on 'Install'. This downloads the two packages from the repository.
4. Click on 'Apply'. Dualitor restarts and loads the plugins.

[Image: Installing the Tilemaps plugin]
Tileset resource and Tilemap GameObject

Creating the Tileset resource

Resources are the asset-holder objects in Duality. Some of them simply wrap imported data, such as bitmap information, but others, like the Tileset resource we are going to use now, are generated by the editor. A Tileset contains information about rendering, collision, and autotiling properties, as well as a reference to the actual pixel data to be rendered. This pixel data is contained in another resource: the Pixmap.

Let's start with a very simple tileset that consists of two solid-color tiles, sized 32 x 32.

[Image: Tileset]

Download this .png file and drop it in the Project View. A new Pixmap is created. Next, create a new Tileset from the context menu of the Project View:

[Image: Add Tileset]

At this point, the Tileset resource does not know which Pixmap it should use. First, open the Tileset Editor window from the View menu on the menu bar. The same behavior is invoked when the Tileset is double-clicked in the Project View. The following steps describe how to establish this link:

1. Select the newly created Tileset in the Tileset Editor.
2. Add a new Visual Layer by clicking on the little green plus button.
3. Select the 'Main Texture' layer, if it is not already selected.
4. Drag and drop the Pixmap named 'tileset' into the 'SourceData' area. Since the default tile size is already 32 x 32, there is no need to modify it.
5. Click on 'Apply' to recompile the resource.

A quicker way to do this is to simply select the 'Create Tileset' option from the Pixmap's context menu, but it is always nice to know what happens in the background.

[Image: Linking the Pixmap and the Tileset resource]

Instantiating a Tilemap

Duality, like many engines, uses a Scene-GameObject-Component model of object hierarchy. To display a tilemap in the scene, a GameObject is needed with two components attached to it: a Tilemap and a TilemapRenderer. The former contains the actual tilemap information, that is, which tile is located at a specific position. Creating this GameObject is very easy; just drag and drop the Tileset resource from the Project View to the Scene View. Switch back to the Camera #0 window from the Tileset Editor, and you should see a large green rectangle, which is the tilemap itself. Notice that a new GameObject called 'TilemapRenderer' appears in the Scene View, with three components attached: a Transform, a Tilemap, and a TilemapRenderer.

To edit the Tilemap, the Camera #0 window mode has to be set to 'Tilemap Editor'. After doing that, a new window named 'Tile Palette' appears. The Tilemap can be edited with multiple tools; feel free to experiment with them!

[Image: Editing the Tilemap]

To be continued

In the second part of this tutorial, some C# coding is involved: we will create a custom component to change the tilemap dynamically. As a small preview, a sketch of building the same tilemap GameObject from code is shown below.
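The following is a minimal sketch of what the drag-and-drop action described above effectively sets up, expressed in code. It is not part of the original tutorial: the type and member names (GameObj, AddComponent<T>, Scene.Current.AddObject, and the Tilemap.Tileset property) follow the Duality API as I understand it, so treat them as assumptions and verify them against the engine documentation before relying on them.

using Duality;
using Duality.Components;
using Duality.Resources;
using Duality.Plugins.Tilemaps;

public static class TilemapSetup
{
    // Builds a GameObject equivalent to the one created by dragging a
    // Tileset into the Scene View: a Transform for positioning, a Tilemap
    // holding the tile data, and a TilemapRenderer drawing it.
    public static GameObj CreateTilemapObject(ContentRef<Tileset> tileset)
    {
        GameObj obj = new GameObj("TilemapRenderer");
        obj.AddComponent<Transform>();

        Tilemap tilemap = obj.AddComponent<Tilemap>();
        tilemap.Tileset = tileset; // assumed property name for the Tileset reference

        obj.AddComponent<TilemapRenderer>();

        // Register the object in the currently active scene.
        Scene.Current.AddObject(obj);
        return obj;
    }
}

This mirrors the Scene-GameObject-Component model described above: the GameObject itself is just a container, and each piece of behavior (placement, tile data, rendering) lives in its own component.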
Author

Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics, an interdisciplinary field between the more traditional mechanical engineering, electrical engineering, and informatics disciplines, and has quickly grown a passion for software development. He is a supporter of open source and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for users.