How-To Tutorials - Server-Side Web Development

"D-J-A-N-G-O... The D is silent." - Authentication in Django

Packt
17 Feb 2016
14 min read
The authentication module saves a lot of time when building user management. The following are its main advantages:

- The main actions related to users (connection, account activation, and so on) are simplified
- Using this system ensures a certain level of security
- Access restrictions to pages can be added very easily

(For more resources related to this topic, see here.)

It's such a useful module that we have already used it without noticing: access to the administration module is handled by the authentication module, and the user we created during the generation of our database was the first user of the site.

This article greatly alters the application we wrote earlier. At the end of this article, we will have:

- Modified our UserProfile model to make it compatible with the module
- Created a login page
- Modified the developer and supervisor addition pages
- Added access restrictions for connected users

How to use the authentication module

In this section, we will learn how to use the authentication module by making our application compatible with it.

Configuring the Django application

There is normally nothing special to do for the authentication module to work in our TasksManager application: by default, the module is enabled. However, it is possible to work on a site where it has been disabled, so we will check whether the module is enabled. In the INSTALLED_APPS section of the settings.py file, we have to check for the following line:

'django.contrib.auth',

Editing the UserProfile model

The authentication module has its own User model; this is the reason why we created a UserProfile model and not just User. User is a model that already contains some fields, such as the username and password. To use the authentication module, you have to use the User model defined in django/contrib/auth/models.py (under Python33/Lib/site-packages in our installation). We will modify the UserProfile model in the models.py file, which becomes the following:

class UserProfile(models.Model):
    user_auth = models.OneToOneField(User, primary_key=True)
    phone = models.CharField(max_length=20, verbose_name="Phone number", null=True, default=None, blank=True)
    born_date = models.DateField(verbose_name="Born date", null=True, default=None, blank=True)
    last_connexion = models.DateTimeField(verbose_name="Date of last connexion", null=True, default=None, blank=True)
    years_seniority = models.IntegerField(verbose_name="Seniority", default=0)

    def __str__(self):
        return self.user_auth.username

We must also add the following import in models.py:

from django.contrib.auth.models import User

In this new model, we have:

- Created a OneToOneField relationship with the User model we imported
- Deleted the fields that already exist in the User model

The OneToOne relation means that for each recorded UserProfile model, there will be exactly one record of the User model. In doing all this, we deeply modify the database. Given these changes, and because the password is stored as a hash, we will not perform the migration with South. It would be possible to keep all the data and do a migration with South, but we would have to write specific code to copy the information from the UserProfile model to the User model, and that code would also have to generate a hash for each password; it would be long.
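With this relation in place, account data lives on User while profile data stays on UserProfile, and each side can reach the other. As a quick illustrative sketch (the username below is hypothetical; the field names follow the model above):

# Fetch a profile through the one-to-one relation and read fields from both sides.
profile = UserProfile.objects.get(user_auth__username="jsmith")
print(profile.user_auth.email)           # stored on the auth User model
print(profile.phone)                     # stored on UserProfile

# The reverse accessor also works: a User instance exposes its profile.
user = User.objects.get(username="jsmith")
print(user.userprofile.years_seniority)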
To reset South, we must do the following:

1. Delete the TasksManager/migrations folder and all the files contained in it.
2. Delete the database.db file.

To use the migration system again, we have to run the following commands:

manage.py schemamigration TasksManager --initial
manage.py syncdb --migrate

After the deletion of the database, we must remove the initial data in create_developer.py. We must also delete the developer_detail URL and the following line in index.html:

<a href="{% url "developer_detail" "2" %}">Detail second developer (The second user must be a developer)</a><br />

Adding a user

The pages that allow you to add a developer and a supervisor no longer work, because they are not compatible with our recent changes. We will change these pages to integrate those changes. The view contained in the create_supervisor.py file will contain the following code:

from django.shortcuts import render
from TasksManager.models import Supervisor
from django import forms
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User

def page(request):
    if request.POST:
        form = Form_supervisor(request.POST)
        if form.is_valid():
            name = form.cleaned_data['name']
            login = form.cleaned_data['login']
            password = form.cleaned_data['password']
            specialisation = form.cleaned_data['specialisation']
            email = form.cleaned_data['email']
            # Create an instance of the User model with the create_user() method. It is
            # important to use this method because it stores a hash of the password in
            # the database; this way, the password cannot be retrieved from the database.
            # Django uses the PBKDF2 algorithm to generate the password hash.
            new_user = User.objects.create_user(username=login, email=email, password=password)
            # The is_active attribute defines whether the user can connect. It is False
            # by default, which allows you to build an account verification system (by
            # e-mail, or any other user validation system).
            new_user.is_active = True
            # Define the name of the new user.
            new_user.last_name = name
            # Register the new user in the database.
            new_user.save()
            # Create the new supervisor with the form data. Do not forget to create the
            # relationship with the User model by setting the user_auth property to the
            # new_user instance.
            new_supervisor = Supervisor(user_auth=new_user, specialisation=specialisation)
            new_supervisor.save()
            return HttpResponseRedirect(reverse('public_empty'))
        else:
            return render(request, 'en/public/create_supervisor.html', {'form': form})
    else:
        form = Form_supervisor()
    return render(request, 'en/public/create_supervisor.html', {'form': form})

class Form_supervisor(forms.Form):
    name = forms.CharField(label="Name", max_length=30)
    login = forms.CharField(label="Login")
    email = forms.EmailField(label="Email")
    specialisation = forms.CharField(label="Specialisation")
    password = forms.CharField(label="Password", widget=forms.PasswordInput)
    password_bis = forms.CharField(label="Password", widget=forms.PasswordInput)

    def clean(self):
        cleaned_data = super(Form_supervisor, self).clean()
        password = self.cleaned_data.get('password')
        password_bis = self.cleaned_data.get('password_bis')
        if password and password_bis and password != password_bis:
            raise forms.ValidationError("Passwords are not identical.")
        return self.cleaned_data

The create_supervisor.html template remains the same, as we are using a Django form.

You can change the page() method in the create_developer.py file to make it compatible with the authentication module (you can refer to the downloadable Packt code files for further help):

def page(request):
    if request.POST:
        form = Form_inscription(request.POST)
        if form.is_valid():
            name = form.cleaned_data['name']
            login = form.cleaned_data['login']
            password = form.cleaned_data['password']
            supervisor = form.cleaned_data['supervisor']
            new_user = User.objects.create_user(username=login, password=password)
            new_user.is_active = True
            new_user.last_name = name
            new_user.save()
            new_developer = Developer(user_auth=new_user, supervisor=supervisor)
            new_developer.save()
            return HttpResponse("Developer added")
        else:
            return render(request, 'en/public/create_developer.html', {'form': form})
    else:
        form = Form_inscription()
    return render(request, 'en/public/create_developer.html', {'form': form})

We can also modify developer_list.html with the following content:

{% extends "base.html" %}
{% block title_html %} Developer list {% endblock %}
{% block h1 %} Developer list {% endblock %}
{% block article_content %}
<table>
  <tr>
    <td>Name</td>
    <td>Login</td>
    <td>Supervisor</td>
  </tr>
  {% for dev in object_list %}
  <tr>
    <!-- The following line displays the __str__ method of the model. In this case, it displays the username of the developer -->
    <td><a href="">{{ dev }}</a></td>
    <!-- The following line displays the last_name of the developer -->
    <td>{{ dev.user_auth.last_name }}</td>
    <!-- The following line displays the __str__ method of the Supervisor model. In this case, it displays the username of the supervisor -->
    <td>{{ dev.supervisor }}</td>
  </tr>
  {% endfor %}
</table>
{% endblock %}

Login and logout pages

Now that you can create users, you must create a login page to allow them to authenticate. We must add the following URL in the urls.py file:

url(r'^connection$', 'TasksManager.views.connection.page', name="public_connection"),

You must then create the connection.py view with the following code:
from django.shortcuts import render
from django import forms
# This import provides the functions of the authentication module that we need.
from django.contrib.auth import authenticate, login

def page(request):
    # This check tests whether the Form_connection form has been posted. If it has,
    # the form is processed; otherwise, it is displayed to the user.
    if request.POST:
        form = Form_connection(request.POST)
        if form.is_valid():
            username = form.cleaned_data["username"]
            password = form.cleaned_data["password"]
            # This line verifies that the username exists and that the password is
            # correct. authenticate() returns None if authentication has failed;
            # otherwise, it returns a user object that validates the condition below.
            user = authenticate(username=username, password=password)
            if user:
                # The login() function connects the user.
                login(request, user)
        else:
            return render(request, 'en/public/connection.html', {'form': form})
    else:
        form = Form_connection()
    return render(request, 'en/public/connection.html', {'form': form})

class Form_connection(forms.Form):
    username = forms.CharField(label="Login")
    password = forms.CharField(label="Password", widget=forms.PasswordInput)

    def clean(self):
        cleaned_data = super(Form_connection, self).clean()
        username = self.cleaned_data.get('username')
        password = self.cleaned_data.get('password')
        if not authenticate(username=username, password=password):
            raise forms.ValidationError("Wrong login or password")
        return self.cleaned_data

You must then create the connection.html template with the following code:

{% extends "base.html" %}
{% block article_content %}
  <!-- This check tests whether the user is connected. -->
  {% if user.is_authenticated %}
    <h1>You are connected.</h1>
    <p>
      <!-- If the user is connected, this line displays his/her e-mail address. -->
      Your email : {{ user.email }}
    </p>
  <!-- If the user is not connected, we display the login form. -->
  {% else %}
    <h1>Connexion</h1>
    <form method="post" action="{{ public_connection }}">
      {% csrf_token %}
      <table>
        {{ form.as_table }}
      </table>
      <input type="submit" class="button" value="Connection" />
    </form>
  {% endif %}
{% endblock %}

When the user logs in, Django saves his/her connection data in session variables. As this example shows, checking the login and password is transparent to the user; indeed, the authenticate() and login() methods save the developer a lot of time. Django also provides convenient shortcuts, such as the user.is_authenticated attribute, which checks whether the user is logged in.

Users prefer when a logout link is present on the website, especially when connecting from a public computer. We will now create the logout page. First, we need to create the logout.py file with the following code:

from django.shortcuts import render
from django.contrib.auth import logout

def page(request):
    logout(request)
    return render(request, 'en/public/logout.html')

In the previous code, we imported the logout() function of the authentication module and used it with the request object. This function removes the user identifier from the request object and flushes their session data. When users log out, they need to know that the site has actually disconnected them. Let's create the following template in the logout.html file:

{% extends "base.html" %}
{% block article_content %}
  <h1>You are not connected.</h1>
{% endblock %}
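The article does not show the URL that maps to this logout view; following the pattern of the connection URL above, it would presumably be a line such as the following in urls.py (an assumption, including the route name):

url(r'^logout$', 'TasksManager.views.logout.page', name="public_logout"),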
Restricting access to the connected members

When developers implement an authentication system, it's usually to limit access for anonymous users. In this section, we'll see two ways to control access to our web pages.

Restricting access in views

The authentication module provides simple ways to prevent anonymous users from accessing some pages. Indeed, there is a very convenient decorator to restrict access to a view; it is called login_required. In the example that follows, we will use the decorator to limit access to the page() view of the create_developer module in the following manner:

First, we must import the decorator with the following line:

from django.contrib.auth.decorators import login_required

Then, we will add the decorator just before the declaration of the view:

@login_required
def page(request): # This line already exists. Do not copy it.

With the addition of these two lines, the page that lets you add a developer is only available to logged-in users. If you try to access the page without being connected, you will realize that it is not very practical, because the page you obtain is a 404 error. To improve this, simply tell Django what the connection URL is by adding the following line in the settings.py file:

LOGIN_URL = 'public_connection'

With this line, if the user tries to access a protected page, he/she will be redirected to the login page. You may have noticed that if you're not logged in and you click the Create a developer link, the URL contains a parameter named next. This parameter contains the URL that the user tried to visit; the authentication module redirects the user to that page when he/she connects. To do this, we will modify the connection.py file we created. We change the line that imports the render() function so that it also imports the redirect() function:

from django.shortcuts import render, redirect

To redirect the user after they log in, we must add two lines right after the line that contains the login(request, user) code. These are the two lines to be added:

if request.GET.get('next') is not None:
    return redirect(request.GET['next'])

This system is very useful when the user's session has expired and he/she wants to see a specific page.

Restricting access to URLs

The system that we have just seen does not let us limit access to pages generated by CBVs. For these, we will use the same decorator, but this time in the urls.py file. We will add the following line to import the decorator:

from django.contrib.auth.decorators import login_required

We need to change the line that corresponds to the URL named create_project:

url(r'^create_project$', login_required(CreateView.as_view(model=Project, template_name="en/public/create_project.html", success_url='index')), name="create_project"),

The login_required decorator is very simple to use and saves the developer a lot of time.

Summary

In this article, we modified our application to make it compatible with the authentication module. We created pages that allow the user to log in and log out. We then learned how to restrict access to some pages to logged-in users.

To learn more about Django, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Django By Example (https://www.packtpub.com/web-development/django-example)
- Learning Django Web Development (https://www.packtpub.com/web-development/learning-django-web-development)

Resources for Article:

Further resources on this subject:
- Code Style in Django [Article]
- Adding a developer with Django forms [Article]
- Enhancing Your Blog with Advanced Features [Article]

Common Grunt Plugins

Packt
07 Mar 2016
9 min read
In this article by Douglas Reynolds, author of the book Learning Grunt, you will learn about Grunt plugins: they are the core of Grunt's functionality, and plugins are what we use to design an automated build process.

(For more resources related to this topic, see here.)

Common Grunt plugins and their purposes

At this point, you should be asking yourself: which plugins can benefit me the most, and why? Once you ask these questions, a natural follow-up is to ask "what plugins are available?" That is exactly the purpose of this section: to introduce useful Grunt plugins and describe their intended purpose.

contrib-watch

This is, in the author's opinion, probably the most useful plugin available. The contrib-watch plugin responds to changes in files defined by you and runs additional tasks when triggered by the changed-file events. For example, let's say that you make and save changes to a JavaScript file; when these changes are persisted, contrib-watch will detect that the watched file has changed. Without automation, you might paste the code into a lint tool, such as http://www.jslint.com/, or run an editor plugin on the file, to ensure that your code is valid and has no defined errors. Using Grunt and contrib-watch, you can configure contrib-watch to automatically run a Grunt linting plugin so that every time you make changes to your JavaScript files, they are automatically linted.

Installation of contrib-watch is straightforward and accomplished using the following npm install command:

npm install grunt-contrib-watch --save-dev

The contrib-watch plugin will now be installed into your node-modules directory, located in the root of your project (you may see the Angular-Seed project for an example). Additionally, contrib-watch will be registered in package.json; you will see something similar to the following when you run this command:

"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-watch": "~0.4.0"
}

Notice the tilde character (~) in the "grunt-contrib-watch": "~0.4.0" line. The tilde specifies that the most recent patch release may be used when updating: if you updated your npm packages for the project, version 0.4.1 would be used if available, but not 0.5.x, as that is a higher minor version. There is also a caret character (^) that you may see; it allows updates up to, but not including, the next major version (for example, ^1.4.0 would allow 1.5.x, while 2.x versions are not allowed).

At this point, contrib-watch is ready to be configured into your project in a gruntfile; we will look more into the gruntfile later. It should be noted that this, and many other tasks, can be run manually. In the case of contrib-watch, once installed and configured, you can run the grunt watch command to start watching the files; it will continue to watch until you end your session.

contrib-watch has some useful options. While we won't cover all of the available options, the following are some notable options that you should be aware of; make sure to review the documentation for the full listing:

- options.event: This allows you to configure contrib-watch to trigger only when certain event types occur. The available types are all, changed, added, and deleted, and you may configure more than one type if you wish. all triggers on any file change, added responds to new files being added, and deleted is triggered on the removal of a file.
- options.reload: This triggers the reload of the watch task when any of the watched files change. A good example is when the file in which the watch is configured, gruntfile.js, changes: this reloads the gruntfile and restarts contrib-watch, watching the new version of gruntfile.js.
- options.livereload: This is different from reload, so don't confuse the two. livereload starts a server that enables live reloading, which means that when files are changed, your server automatically picks up the changed files. Take, for instance, a web application running in a browser: rather than saving your files and refreshing your browser to get the changed files, livereload automatically reloads your app in the browser for you.
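The gruntfile itself is covered later; purely as an illustrative sketch (the file globs and task names below are placeholders, not taken from the article), a watch task wired to lint changed scripts might look like this:

module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      scripts: {
        files: ['src/**/*.js'],         // watch all JavaScript sources
        tasks: ['jshint'],              // lint whenever one of them changes
        options: {
          event: ['changed', 'added']   // ignore deletions
        }
      }
    }
  });

  // Make the installed plugin's tasks available to Grunt.
  grunt.loadNpmTasks('grunt-contrib-watch');
};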
contrib-jshint

The contrib-jshint plugin runs automated JavaScript error detection and will help identify potential problems in your code that may surface at runtime. When the plugin runs, it scans your JavaScript code and issues warnings based on the preconfigured options. There are a large number of error messages that jshint might produce, and it can be difficult at times to understand what exactly a particular message refers to. Some examples are shown in the following list:

- The array literal notation [] is preferable
- The {a} is already defined
- Avoid arguments.{a}
- Bad assignment
- Confusing minuses

The list goes on, and there are resources, such as http://jslinterrors.com/, whose purpose is to help you understand what a particular warning/error message means.

Installation of contrib-jshint follows the same pattern as other plugins, using npm to install the plugin in your project:

npm install grunt-contrib-jshint --save-dev

This will install the contrib-jshint plugin in your project's node-modules directory and register the plugin in the devDependencies section of the package.json file at the root of your project. It will look similar to the following:

"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-jshint": "~0.4.5"
}

As with the other plugins, you may manually run contrib-jshint using the grunt jshint command. Under the hood, contrib-jshint is jshint; therefore, any of the options available in jshint may be passed to contrib-jshint. Take a look at http://jshint.com/docs/options/ for a complete listing of the jshint options. Options are configured in the gruntfile.js file, which we will cover in detail later in this book. Some examples of options are as follows:

- curly: This enforces that curly braces are used in code blocks
- undef: This ensures that all variables have been declared
- maxparams: This checks that the number of arguments in a method does not exceed a certain limit

contrib-jshint allows you to configure the files that will be linted, the order in which the linting will occur, and even control linting before and after concatenation. Additionally, contrib-jshint allows you to suppress warnings in the configuration options using the ignore_warning option.
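As a sketch of how such options might be set in the gruntfile (the file globs and the parameter limit below are placeholder assumptions, not values from the article):

jshint: {
  options: {
    curly: true,     // require braces around all code blocks
    undef: true,     // flag the use of undeclared variables
    maxparams: 4     // warn when a function takes more than four parameters
  },
  all: ['Gruntfile.js', 'src/**/*.js']
}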
contrib-uglify

Compression and minification are important for reducing file sizes, contributing to better loading times and improved performance. The contrib-uglify plugin provides compression and minification by optimizing JavaScript code and removing unneeded line breaks and whitespace. It does this by parsing JavaScript and outputting regenerated, optimized code, with shortened variable names, for example.

The contrib-uglify plugin is installed in your project using the npm install command, just as you will see with all the other Grunt plugins:

npm install grunt-contrib-uglify --save-dev

After you run this command, contrib-uglify will be installed in your node-modules directory at the root of your application. The plugin will also be registered in the devDependencies section of package.json. You should see something similar to the following:

"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-uglify": "~0.4.0"
}

In addition to running as an automated task, the contrib-uglify plugin may be run manually by issuing the grunt uglify command. The plugin is configured to process specific files as defined in the gruntfile.js configuration file, along with defined destination files that will be created for the processed, minified JavaScript. There is also a beautify option that can be used to un-minify the output, should you wish to easily debug your JavaScript.

A useful option available in contrib-uglify is banners. Banners allow you to configure banner comments to be added to the minified output files. For example, a banner could be created with the current date and time, author, version number, and any other important information that should be included. You may reference your package.json file in order to get information, such as the package name and version, directly from that configuration file. Another notable option is the ability to configure directory-level compiling of files. You achieve this by configuring the files option to use wildcard path references with a file extension, such as **/*.js. This is useful when you want to minify all the contents of a directory.
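For illustration only (the banner template and paths are placeholder assumptions, and pkg is assumed to be set with grunt.file.readJSON('package.json') in the gruntfile), an uglify configuration using a banner built from package.json data might look like this:

uglify: {
  options: {
    // Build a banner comment from the package name, version, and today's date.
    banner: '/*! <%= pkg.name %> v<%= pkg.version %> - <%= grunt.template.today("yyyy-mm-dd") %> */\n'
  },
  build: {
    files: {
      'dist/app.min.js': ['src/**/*.js']  // minify every script under src into one file
    }
  }
}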
contrib-less

contrib-less is a plugin that compiles LESS files into CSS files. LESS extends standard CSS by allowing variables, mixins (declaring a group of style declarations at once that can be reused anywhere in the stylesheet), and even conditional logic to manage styles throughout the document.

Just as with other plugins, contrib-less is installed in your project using the npm install command:

npm install grunt-contrib-less --save-dev

The npm install will add contrib-less to your node-modules directory, located at the root of your application. As we are using --save-dev, the task will be registered in the devDependencies section of package.json. The registration will look similar to the following:

"devDependencies": {
  "grunt": "~0.4.5",
  "grunt-contrib-less": "~0.4.5"
}

As is typical of Grunt tasks, you may also run contrib-less manually using the grunt less command. contrib-less is configured using path and file options to define the locations of source and destination output files. It can also be configured with multiple environment-type options, for example dev, test, and production, in order to apply the different options that may be needed for different environments. Some typical options used in contrib-less include the following:

- paths: The directories that should be scanned
- compress: Whether to compress the output by removing whitespace
- plugins: The mechanism for including additional plugins in the flow of processing
- banner: The banner to use in the compiled destination files

There are several more options that are not listed here; make sure to refer to the documentation for the full listing of contrib-less options and example usage.

Summary

In this article, we covered some basic Grunt plugins.

Resources for Article:

Further resources on this subject:
- Grunt in Action [article]
- Optimizing JavaScript for iOS Hybrid Apps [article]
- Welcome to JavaScript in the full stack [article]

Working with ASP.NET Web API

Packt
13 Dec 2013
5 min read
(For more resources related to this topic, see here.)

The ASP.NET Web API is a framework that you can use to build web services that use HTTP as the protocol. You can use the ASP.NET Web API to return data based on what the client requests; that is, you can return JSON or XML as the format of the data.

Layers of an application

The ASP.NET framework runs on top of the managed environment of the .NET Framework. The Model, View, and Controller (MVC) architectural pattern is used to separate the concerns of an application to facilitate testing, ease the maintenance of the application's code, and provide better support for change. The model represents the application's data and the business objects, the view is the presentation-layer component, and the controller binds the model and the view together. (Figure: The MVC Architecture)

The ASP.NET Web API architecture

The ASP.NET Web API is a lightweight web-based architecture that uses HTTP as the application protocol. Routing in the ASP.NET Web API works a bit differently than it does in ASP.NET MVC. The basic difference between routing in MVC and routing in a Web API is that the Web API uses the HTTP method, and not the URI path, to select the action. The Web API framework uses a routing table to determine which action is to be invoked for a particular request. You need to specify the routing parameters in the WebApiConfig.cs file that resides in the App_Start directory. Here's an example that shows how routing is configured:

routes.MapHttpRoute(
    name: "Packt API Default",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The following code snippet illustrates how routing is configured by action names:

routes.MapHttpRoute(
    name: "PacktActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

The ASP.NET Web API generates structured data, such as JSON and XML, as responses. It can route the incoming requests to actions based on HTTP verbs, and not only action names. Also, the ASP.NET Web API can be hosted outside of the ASP.NET runtime environment and the IIS web server context.

Routing in ASP.NET Web API

Routing in the ASP.NET Web API is very much the same as in ASP.NET MVC. The ASP.NET Web API routes URLs to a controller; control is then handed over to the action that corresponds to the HTTP verb of the request message. Note that the default route template for an ASP.NET Web API project is {controller}/{id}, where the {id} parameter is optional. Also, ASP.NET Web API route templates may optionally include an {action} parameter. It should be noted that, unlike ASP.NET MVC, URLs in the ASP.NET Web API cannot contain complex types; complex types must be present in the HTTP message body, and there can be one, and only one, complex type in the HTTP message body.

Note that ASP.NET MVC and the ASP.NET Web API are two distinctly separate frameworks that adhere to some common architectural patterns.

In the ASP.NET Web API framework, the controller handles all HTTP requests. The controller comprises a collection of action methods; when an incoming request reaches the Web API framework, the request is routed to the appropriate action. The framework uses a routing table to determine the action method to be invoked when a request is received. Here is an example:

routes.MapHttpRoute(
    name: "Packt Web API",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
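In a default Web API project, these registrations live in the static Register method of the WebApiConfig class, which is invoked at application start; the following sketch assumes that default project layout:

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // The same registration, expressed through the HttpConfiguration route collection.
        config.Routes.MapHttpRoute(
            name: "Packt Web API",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}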
Refer to the following UserController class:

public class UserController<UserAuthentication> : BaseApiController<UserAuthentication>
{
    public void GetAllUsers() { }
    public IEnumerable<User> GetUserById(int id) { }
    public HttpResponseMessage DeleteUser(int id) { }
}

The following table illustrates each HTTP method with its corresponding URI, action, and parameter:

HTTP Method   URI           Action        Parameter
GET           api/users     GetAllUsers   None
GET           api/users/1   GetUserByID   1
POST          api/users
DELETE        api/users/3   DeleteUser    3

The Web API framework matches the segments in the URI path to the template. The following steps are performed:

1. The URI is matched to a route template.
2. The respective controller is selected.
3. The respective action is selected.

The IHttpControllerSelector.SelectController method selects the controller; it takes an HttpRequestMessage instance and returns an HttpControllerDescriptor. After the controller has been selected, the Web API framework selects the action by invoking the IHttpActionSelector.SelectAction method. This method in turn accepts HttpControllerContext and returns HttpActionDescriptor.

You can also explicitly specify the HTTP method for an action by decorating the action method using the HttpGet, HttpPut, HttpPost, or HttpDelete attributes. Here is an example:

public class UsersController : ApiController
{
    [HttpGet]
    public User FindUser(int id) { }
}

You can also use the AcceptVerbs attribute to enable HTTP methods other than GET, PUT, POST, and DELETE. Here is an example:

public class UsersController : ApiController
{
    [AcceptVerbs("GET", "HEAD")]
    public User FindUser(int id) { }
}

You can also define a route by an action name. Here is an example:

routes.MapHttpRoute(
    name: "PacktActionApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

You can also override the action name by using the ActionName attribute. The following code snippet illustrates two actions: one that supports GET and the other that supports POST:

public class UsersController : ApiController
{
    [HttpGet]
    [ActionName("Token")]
    public HttpResponseMessage GetToken(int userId);

    [HttpPost]
    [ActionName("Token")]
    public void AddNewToken(int userId);
}
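The controller snippets above return a User type that the excerpt never defines. A minimal stand-in, purely an assumption for illustration, could be:

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}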

Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book Moodle E-Learning Course Development - Third Edition, you will see how to use groups to separate students in a course into teams. You will also learn how to use cohorts to mass enroll students into courses.

Groups versus cohorts

Groups and cohorts are both collections of students. There are several differences between them, which we can sum up in one sentence: cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class.

Think of a cohort as a group of students working together through the same academic curriculum, for example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course; groups are used to manage various activities within a course. A cohort is a system-wide or course-category-wide set of students. There is a small amount of overlap between what you can do with a cohort and a group, but the differences are large enough that you would not want to substitute one for the other.

Cohorts

In this article, we'll look at how to create and use cohorts. You can perform many operations on cohorts in bulk, affecting many students at once.

Creating a cohort

To create a cohort, perform the following steps:

1. From the main menu, select Site administration | Users | Accounts | Cohorts.
2. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed.
3. Enter a Name for the cohort. This is the name that you will see when you work with the cohort.
4. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters, such as letters, numbers, and some special characters, with no spaces, in the Cohort ID; for example, Spring_2012_Freshmen.
5. Enter a Description that will help you and other administrators remember the purpose of the cohort.
6. Click on Save changes.

Now that the cohort is created, you can begin adding users to it.

Adding students to a cohort

Students can be added to a cohort manually, by searching for and selecting them. They can also be added in bulk, by uploading a file to Moodle.

Manually adding and removing students to a cohort

If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort:

1. From the main menu, select Site administration | Users | Accounts | Cohorts.
2. On the Cohorts page, for the cohort to which you want to add students, click on the people icon. The Cohort Assign page is displayed.
3. The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort.
4. Use the Search field to search for users in each panel. You can search for text that is in the username and e-mail address fields.
5. Use the Add and Remove buttons to move users from one panel to the other.

Adding students to a cohort in bulk – upload

When you upload students to Moodle, you can add them to a cohort.
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. So, if you are going to upload students in bulk, consider putting them in a cohort; this makes it easier to manipulate them later.

Here is an example of a cohort with 1,204 students enrolled. These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users. The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks:

username,email,firstname,lastname,cohort1
moodler_1,bill@williamrice.net,Bill,Binky,open-enrollmentmoodlers
moodler_2,rose@williamrice.net,Rose,Krial,open-enrollmentmoodlers
moodler_3,jeff@williamrice.net,Jeff,Marco,open-enrollmentmoodlers
moodler_4,dave@williamrice.net,Dave,Gallo,open-enrollmentmoodlers

In this example, we have the minimum information required to create new students:

- The username
- The e-mail address
- The first name
- The last name

We also have the cohort ID (the short name of the cohort) in which we want to place each student.

During the upload process, you can see a preview of the file that you will upload. Further down on the Upload users preview page, you can choose the Settings option to handle the upload. Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (in Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it than to search your existing users. This is because when you create a text file, you can use powerful tools, such as spreadsheets and databases, to quickly create the file. If you want to do this, you will find options to update existing users under the Upload type field.

In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file, or omit them from the upload file and have the system assign them while the file is uploaded. This is performed under Default values on the Upload users page.

Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle:

1. Prepare a plain text file that has, at minimum, the username, email, firstname, lastname, and cohort1 information, similar to the example shown earlier.
2. Under Administration | Site Administration | Users | Upload users, select the text file that you will upload.
3. On this page, choose Settings to describe the text file, such as the delimiter (separator) and encoding.
4. Click on the Upload users button. You will see the first few rows of the text file displayed, and additional settings become available on this page.
5. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on.
6. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users.
7. Click on the Upload users button to begin the upload.
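For illustration, a hypothetical upload file (not from the article) that supplies the city and country columns directly, instead of relying on Default values, might look like this:

username,email,firstname,lastname,city,country,cohort1
moodler_5,anna@example.com,Anna,Lopez,Madrid,ES,open-enrollmentmoodlers
moodler_6,ravi@example.com,Ravi,Patel,Mumbai,IN,open-enrollmentmoodlers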
Cohort sync

Using the cohort sync enrolment method, you can enroll and unenroll large collections of students at once. Using cohort sync involves several steps:

1. Creating a cohort.
2. Enrolling students in the cohort.
3. Enabling the cohort sync enrollment method.
4. Adding the cohort sync enrollment method to a course.

You have already seen the first two steps: how to create a cohort and how to enroll students in it. We will now cover the last two steps: enabling the cohort sync method and adding it to a course.

Enabling the cohort sync enrollment method

To enable the cohort sync enrollment method, you will need to log in as an administrator; this cannot be done by someone who has only teacher rights:

1. Select Site administration | Plugins | Enrolments | Manage enrol plugins.
2. Click on the Enable icon located next to Cohort sync.
3. Then, click on the Settings button located next to Cohort sync.
4. On the Settings page, choose the default role for people when you enroll them in a course using cohort sync. You can change this setting for each course.
5. You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course, and the grades are purged from Moodle; if you were to re-add this user to the cohort, all of the user's activity in this course would be blank, as if the user had never been in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden, and you will not see this user in the course's grade book; however, if you were to re-add this user to the cohort, or to the course, the user's course records would be restored.

After enabling the cohort sync method, it's time to actually add this method to a course.

Adding the cohort sync enrollment method to a course

To do this, you will need to log in as an administrator or as a teacher in the course:

1. Log in and enter the course to which you want to add the enrolment method.
2. Select Course administration | Users | Enrolment methods.
3. From the Add method drop-down menu, select Cohort sync.
4. In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs.
5. For Active, select Yes. This will enroll the users.
6. Select the Cohort option.
7. Select the role that the members of the cohort will be given.
8. Click on the Save changes button.

All the users in the cohort will be given the selected role in the course.

Unenrolling a cohort from a course

There are two ways to unenroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method: just click on the X button located next to the cohort sync method that you added to the course. However, this will not just remove users from the course; it will also delete all their course records. The second method preserves the student records. Once again, go to the course's enrollment methods page, and next to the Cohort sync method that you added, click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given; however, the members of the cohort will still be listed as course participants. As the members of the cohort no longer have a role in the course, they can no longer access it, but their grades and activity reports are preserved.
Differences between cohort sync and enrolling a cohort

Cohort sync and enrolling a cohort are two different methods; each has advantages and limitations. If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment: as people are added to and removed from the cohort, they are enrolled in and unenrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot unenroll or change the role of just one person.

Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, suppose you want to unenroll some users from some courses, but not from all of them. If you remove them from the cohort, these users are removed from all the courses. This is how cohort sync works: it is everyone or no one. When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, an alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey: you cannot unenroll them all at once; you will need to unenroll them one at a time. If you enroll a cohort all at once, the users are independent entities after enrollment; you can unenroll them and change their role (for example, from student to teacher) whenever you wish.

To enroll a cohort in a course, perform the following steps:

1. Enter the course as an administrator or teacher.
2. Select Administration | Course administration | Users | Enrolled users.
3. Click on the Enrol cohort button. A popup window appears, listing the cohorts on the site.
4. Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message.
5. Click on the OK button. You will be taken back to the Enrolled users page.

Note that although you can enroll all users in a cohort at once, there is no button to unenroll them all at once; you will need to remove them from your course one at a time.

Managing students with groups

A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate the students so that each group can see only their peers in the course. You might, for instance, create a new group every month for the employees hired that month, and then monitor and mentor them together. After you have run a group of people through a course, you may want to reuse the course for another group. You can use the group feature to separate the groups so that the current group doesn't see the work done by the previous group; for the current group, it will be like a new course. You may also want an activity or resource to be open to just one group of people, without others in the class being able to use it.

Course versus activity

You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups.
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course, segregating just this activity or resource between groups.

The three group modes

For a course or activity, there are three group modes:

- No groups: There are no groups for the course or activity. If students have been placed in groups, ignore this, and give everyone the same access to the course or activity.
- Separate groups: If students have been placed in groups, allow them to see other students, and the work of other students, only from their own group. Students and work from other groups are invisible.
- Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read-only.

You might use the No groups setting on an activity in your course when you want every student who has ever taken the course to be able to interact with each other. For example, you may use No groups in the news forum so that all students who have ever taken the course can see the latest news. You might use the Separate groups setting in a course where you run different groups at different times; for each group that runs through the course, it will be like a brand new course. You might use the Visible groups setting in a course where students are part of a large, in-person class and you want them to collaborate in small groups online.

Also, be aware that some things are not affected by the group setting. For example, no matter what the group setting, students will never see each other's assignment submissions.

Creating a group

There are three ways to create groups in a course. You can:

- Manually create and populate each group
- Automatically create and populate groups based on the characteristics of students
- Import groups using a text file

We'll cover these methods in the following subsections.

Manually creating and populating a group

Don't be discouraged by the idea of manually populating a group with students; it takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps:

1. Select Course administration | Users | Groups. This takes you to the Groups page.
2. Click on the Create group button. The Create group page is displayed.
3. You must enter a Name for the group. This will be the name that teachers and administrators see when they manage the group.
4. The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students, and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional.
5. The Group description field is optional. It's good practice to use it to explain the purpose of, and criteria for belonging to, the group.
6. The Enrolment key is a code that you can give to students who self-enroll in the course. When a student enrolls, he/she is prompted to enter the enrollment key; on entering this key, the student is enrolled in the course and made a member of the group.
7. If you add a picture to the group, then when members are listed (as in a forum), the member will have the group picture shown next to them. (The original article shows an example of a contributor to a forum on http://www.moodle.org with her group memberships.)
8. Click on the Save changes button to save the group.
On the Groups page, the group appears in the left-hand side column. Select this group, and in the right-hand side column, search for and select the students that you want to add to it. Note the Search fields; these enable you to search for students who meet specific criteria. You can search the first name, last name, and e-mail address; the other parts of the user's profile information are not available in this search box.

Automatically creating and populating a group

When you automatically create groups, Moodle creates the number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create groups, use the following steps:

1. Click on the Auto-create groups button. The Auto-create groups page is displayed.
2. In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters; if you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on.
3. In the Auto-create based on field, tell the system to choose either of the following options:
   - Create a specific number of groups and then fill each group with as many students as needed (Number of groups)
   - Create as many groups as needed so that each group has a specific number of students (Members per group)
4. In the Group/member count field, tell the system either:
   - How many groups to create (if you chose the preceding Number of groups option)
   - How many members to put in each group (if you chose the preceding Members per group option)
5. Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort.
6. The Prevent last small group setting is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five and then another group for the last two members. With Prevent last small group selected, it will instead distribute the remaining two members between the first two groups.
7. Click on the Preview button to preview the results. The preview will not show you the names of the members in the groups, but it will show you how many groups will be created and how many members will be in each group.

Importing groups

The term importing groups may give you the impression that you will import students into a group. The Import groups button does not import students into groups; it imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do so. This needs to be done by a site administrator.

If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to a cohort, you will add them to a course and group.
You do this by specifying the course and group fields in the upload file, as shown in the following example:

username,email,firstname,lastname,course1,group1,course2
moodler_1,bill@williamrice.net,Bill,Binky,history101,odds,science101
moodler_2,rose@williamrice.net,Rose,Krial,history101,even,science101
moodler_3,jeff@williamrice.net,Jeff,Marco,history101,odds,science101
moodler_4,dave@williamrice.net,Dave,Gallo,history101,even,science101

In this example, we have the minimum information needed to create new students:

- The username
- The e-mail address
- The first name
- The last name

We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky and Jeff Marco are placed in a group called odds, while Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group. Remember that this student upload doesn't happen on the Groups page; it happens under Administration | Site Administration | Users | Upload users.

Summary

Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and unenroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for.

Useful Links:
- What's New in Moodle 2.0
- Moodle for Online Communities
- Understanding Web-based Applications and Other Multimedia Forms

Downloading and setting up Bootstrap

Packt
30 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Twitter Bootstrap is more than a set of code; it is an online community. To get started, you will do well to familiarize yourself with Twitter Bootstrap's home base: http://twitter.github.com/bootstrap/

Here you'll find the following:

- The documentation: If this is your first visit, grab a cup of coffee and spend some time perusing the pages, scanning the components, reading the details, and soaking it in. (You'll see this is going to be fun.)
- The download button: You can get the latest and greatest versions of Twitter Bootstrap's CSS, JavaScript plugins, and icons, compiled and ready for action, coming to you in a convenient ZIP folder. This is where we'll start.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

How to do it…

Whatever your experience level, as promised, I'll walk you through all the necessary steps. Here goes!

1. Go to the Bootstrap homepage: http://twitter.github.com/bootstrap/
2. Click on the large Download Bootstrap button.
3. Locate the download file and unzip or extract it. You should get a folder named simply bootstrap; inside this folder you should find the folders css, img, and js.
4. From the homepage, click on the main navigation item Get started.
5. Scroll down, or use the secondary navigation, to navigate to the heading Examples. The direct link is: http://twitter.github.com/bootstrap/getting-started.html#examples
6. Right-click and download the leftmost example, labeled Basic Marketing Site. You'll see that it is an HTML file, named hero.html.
7. Save (or move) it to your main bootstrap folder, right alongside the folders named css, img, and js.
8. Rename the file index.html (a standard name for what will become our homepage).
9. Next, we need to update the links to the stylesheets. Why? When you downloaded the starter template file, you changed the relationship between the file and its stylesheets. We need to let it know where to find the stylesheets in this new file structure.
10. Open index.html (formerly, hero.html) in your code editor. Need a code editor? Windows users might try Notepad++ (http://notepad-plus-plus.org/download/); Mac users can consider TextWrangler (http://www.barebones.com/products/textwrangler/).
11. Find the stylesheet link tags near the top of the file (lines 11-18 in version 2.0.2) and update the href attributes in both link tags, as shown in the sketch after this list.
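Assuming the stock hero.html that shipped with Bootstrap 2.x, which references an ../assets/css/ path (a plausible reconstruction; the exact lines depend on your Bootstrap version), the two link tags would change from something like:

<link href="../assets/css/bootstrap.css" rel="stylesheet">
<link href="../assets/css/bootstrap-responsive.css" rel="stylesheet">

to paths that match our new folder structure:

<link href="css/bootstrap.css" rel="stylesheet">
<link href="css/bootstrap-responsive.css" rel="stylesheet">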
The organization method I've presented is recommended by the developers who contribute to the HTML5 Boilerplate. One advantage of this approach is that it reduces the length of the paths to our site assets. Thus, whereas others might have a path to a background image such as this:

url('assets/img/bg.jpg');

In the organization scheme I've recommended, it will be shorter:

url('img/bg.jpg');

This is not a big deal for a single line of code. However, when you consider that there will be many links to stylesheets, JavaScript files, and images running throughout your site files, reducing each path by a few characters can add up. And in a world where speed matters, every bit counts. Shorter paths save characters, reduce file size, and help support faster web browsing.

Summary

This article gave us a quick introduction to Twitter Bootstrap. We've got a fair idea as to how to download and set up Bootstrap. By following these simple steps, we can easily create our first Bootstrap site.

Resources for Article:

Further resources on this subject: Starting Up Tomcat 6: Part 1 [Article] Build your own Application to access Twitter using Java and NetBeans: Part 2 [Article] Integrating Twitter with Magento [Article]
Deploying on your own server

Packt
30 Sep 2015
16 min read
In this article by Jack Stouffer, the author of the book Mastering Flask, you will learn how to deploy and host your application using the different options available, along with their advantages and disadvantages.

The most common way to deploy any web app is to run it on a server that you have control over. Control in this case means access to the terminal on the server with an administrator account. This type of deployment gives you the most freedom of all the choices, as it allows you to install any program or tool you wish. This is in contrast to other hosting solutions, where the web server and database are chosen for you. This type of deployment also happens to be the least expensive option.

The downside to this freedom is that you take on the responsibility of keeping the server up, backing up user data, keeping the software on the server up to date to avoid security issues, and so on. Entire books have been written on good server management, so if this is not a responsibility that you believe you or your company can handle, it would be best to choose one of the other deployment options.

This section will be based on a Debian Linux-based server, as Linux is far and away the most popular OS for running web servers, and Debian is the most popular Linux distro (a particular combination of software and the Linux kernel released as a package). Any OS with Bash and a program called SSH (which will be introduced in the next section) will work for this article; the only differences will be the command-line programs used to install software on the server.

(For more resources related to this topic, see here.)

Each of these web servers will use a standard named Web Server Gateway Interface (WSGI), which is designed to allow Python web applications to easily communicate with web servers. We will never work directly with WSGI. However, most of the web server interfaces we will be using have WSGI in their name, and it can be confusing if you don't know what the name refers to.

Pushing code to your server with fabric

To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine.

To install fabric, we will use pip:

$ pip install fabric

Fabric commands are collections of command-line programs to be run on the remote machine's shell, in this case, Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py.

As it's the easiest to create, let's make the test command first:

from fabric.api import local

def test():
    local('python -m unittest discover')

To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run:

$ fab test
[localhost] local: python -m unittest discover
.....
---------------------------------------------------------------------
Ran 5 tests in 6.028s

OK

Fabric has three main commands: local, run, and sudo.
The local function, as seen in the preceding function, runs commands on the local computer. The run and sudo functions run commands on a remote machine, but sudo runs commands as an administrator. All of these functions notify fabric whether the command ran successfully or not. If a command didn't run successfully, meaning in this case that our tests failed, any other commands in the function will not be run. This is useful for our commands because it allows us to force ourselves not to push any code to the server that does not pass our tests.

Now we need to create the command to set up a new server from scratch. What this command will do is install the software our production environment needs as well as download the code from our centralized git repository. It will also create a new user that will act as the runner of the web server as well as the owner of the code repository.

Do not run your web server or have your code deployed by the root user. This opens your application to a whole host of security vulnerabilities.

This command will differ based on your operating system, and we will be adding to this command in the rest of the article based on which server you choose:

from fabric.api import env, local, run, sudo, cd

env.hosts = ['deploy@[your IP]']

def upgrade_libs():
    sudo("apt-get update")
    sudo("apt-get upgrade")

def setup():
    test()
    upgrade_libs()

    # necessary to install many Python libraries
    sudo("apt-get install -y build-essential")
    sudo("apt-get install -y git")
    sudo("apt-get install -y python")
    sudo("apt-get install -y python-pip")
    # necessary to install many Python libraries
    sudo("apt-get install -y python-all-dev")

    run("useradd -d /home/deploy/ deploy")
    run("gpasswd -a deploy sudo")

    # allows Python packages to be installed by the deploy user
    sudo("chown -R deploy /usr/local/")
    sudo("chown -R deploy /usr/lib/python2.7/")

    run("git config --global credential.helper store")

    with cd("/home/deploy/"):
        run("git clone [your repo URL]")

    with cd('/home/deploy/webapp'):
        run("pip install -r requirements.txt")
        run("python manage.py createdb")

There are two new fabric features in this script. One is the env.hosts assignment, which tells fabric the user and IP address of the machine it should be logging in to. Second, there is the cd function used in conjunction with the with keyword, which executes any commands in the context of that directory instead of the home directory of the deploy user. The line that modifies the git configuration tells git to remember your repository's username and password, so that they do not have to be entered every time the server pulls new code. Also, before the server is set up, we make sure to update the server's software to keep it up to date.

Finally, we have the function to push our new code to the server. In time, this command will also restart the web server and reload any configuration files that come from our code. But this depends on the server you choose, so this is filled out in the subsequent sections:

def deploy():
    test()
    upgrade_libs()
    with cd('/home/deploy/webapp'):
        run("git pull")
        run("pip install -r requirements.txt")

So, if we were to begin working on a new server, all we would need to do to set it up is to run the following commands:

$ fab setup
$ fab deploy

Running your web server with supervisor

Now that we have automated our updating process, we need some program on the server to make sure that our web server, and database if you aren't using SQLite, is running.
To do this, we will use a simple program called supervisor. All supervisor does is automatically run command-line programs in background processes and let you see the status of the running programs. Supervisor also monitors all of the processes it's running, and if a process dies, it tries to restart it.

To install supervisor, we need to add it to the setup command in our fabfile.py:

def setup():
    …
    sudo("apt-get install -y supervisor")

To tell supervisor what to do, we need to create a configuration file and then copy it to the /etc/supervisor/conf.d/ directory of our server during the deploy fabric command. Supervisor will load all of the files in this directory when it starts and attempt to run them. In a new file in the root of our project directory named supervisor.conf, add the following:

[program:webapp]
command=
directory=/home/deploy/webapp
user=deploy

[program:rabbitmq]
command=rabbitmq-server
user=deploy

[program:celery]
command=celery worker -A celery_runner
directory=/home/deploy/webapp
user=deploy

This is the bare minimum configuration needed to get a web server up and running, but supervisor has a lot more configuration options. To view all of the customizations, go to the supervisor documentation at http://supervisord.org/.

This configuration tells supervisor to run a command in the context of /home/deploy/webapp under the deploy user. The right-hand side of the command value is empty because it depends on which server you are running and will be filled in for each section.

Now we need to add a sudo call in the deploy command to copy this configuration file to the /etc/supervisor/conf.d/ directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp supervisor.conf /etc/supervisor/conf.d/webapp.conf")
        sudo('service supervisor restart')

A lot of projects just create the files on the server and forget about them, but having the configuration file stored in our git repository and copied on every deployment gives several advantages. First, it means that it is easy to revert changes using git if something goes wrong. Second, it means that we don't have to log in to our server in order to make changes to the files.

Don't use the Flask development server in production. Not only does it fail to handle concurrent connections, but it also allows arbitrary Python code to be run on your server.

Gevent

The simplest option to get a web server up and running is to use a Python library called gevent to host your application. Gevent is a Python library that adds an alternative way of doing concurrent programming outside of the Python threading library, called coroutines. Gevent has an interface for running WSGI applications that is both simple and performs well. A simple gevent server can easily handle hundreds of concurrent users, which is more than 99 percent of websites on the Internet will ever have. The downside to this option is that its simplicity means a lack of configuration options. There is no way, for example, to add rate limiting to the server or to add HTTPS traffic. This deployment option is purely for sites that you don't expect to receive a huge amount of traffic. Remember YAGNI (short for You Aren't Gonna Need It); only upgrade to a different web server if you really need to.

Coroutines are a bit outside of the scope of this book; a good explanation can be found at https://en.wikipedia.org/wiki/Coroutine.
To install gevent, we will use pip:

$ pip install gevent

In a new file in the root of the project directory named gserver.py, add the following:

from gevent.wsgi import WSGIServer
from webapp import create_app

app = create_app('webapp.config.ProdConfig')

server = WSGIServer(('', 80), app)
server.serve_forever()

To run the server with supervisor, just change the command value to the following:

[program:webapp]
command=python gserver.py
directory=/home/deploy/webapp
user=deploy

Now when you deploy, gevent will be automatically installed for you by running your requirements.txt on every deployment; that is, if you are properly running pip freeze after every new dependency is added.

Tornado

Tornado is another very simple way to deploy WSGI apps purely with Python. Tornado is a web server that is designed to handle thousands of simultaneous connections. If your application needs real-time data, Tornado also supports websockets for continuous, long-lived connections to the server.

Do not use Tornado in production on a Windows server. The Windows version of Tornado is not only much slower, but it is considered beta quality software.

To use Tornado with our application, we will use Tornado's WSGIContainer in order to wrap the application object and make it Tornado compatible. Then, Tornado will start to listen on port 80 for requests until the process is terminated. In a new file named tserver.py, add the following:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from webapp import create_app

app = WSGIContainer(create_app("webapp.config.ProdConfig"))
http_server = HTTPServer(app)
http_server.listen(80)
IOLoop.instance().start()

To run Tornado with supervisor, just change the command value to the following:

[program:webapp]
command=python tserver.py
directory=/home/deploy/webapp
user=deploy

Nginx and uWSGI

If you need more performance or customization, the most popular way to deploy a Python web application is to use the web server Nginx as a frontend for the WSGI server uWSGI by using a reverse proxy. A reverse proxy is a program that retrieves content from a server on behalf of a client, as if the content were returned from the proxy itself, as shown in the following figure.

Nginx and uWSGI are used in this way because we get the power of the Nginx frontend while having the customization of uWSGI.

Nginx is a very powerful web server that became popular by providing the best combination of speed and customization. Nginx is consistently faster than other web servers, such as Apache httpd, and has native support for WSGI applications. It achieves this speed through several good architectural decisions, as well as the early decision not to try to cover a large number of use cases the way Apache does. Having a smaller feature set makes it much easier to maintain and optimize the code. From a programmer's perspective, it is also much easier to configure Nginx, as there is no giant default configuration file (httpd.conf) that needs to be overridden with .htaccess files in each of your project directories. One downside is that Nginx has a much smaller community than Apache, so if you have an obscure problem, you are less likely to find answers online. Also, it's possible that a feature most programmers are used to in Apache isn't supported in Nginx.

uWSGI is a web server that supports several different types of server interfaces, including WSGI.
uWSGI handles serving the application content as well as things such as load balancing traffic across several different processes and threads.

To install uWSGI, we will use pip in the following way:

$ pip install uwsgi

In order to run our application, uWSGI needs a file with an accessible WSGI application. In a new file named wsgi.py in the top level of the project directory, add the following:

from webapp import create_app

app = create_app("webapp.config.ProdConfig")

To test uWSGI, we can run it from the command line with the following:

$ uwsgi --socket 127.0.0.1:8080 --wsgi-file wsgi.py --callable app --processes 4 --threads 2

If you are running this on your server, you should be able to access port 8080 and see your app (provided you don't have a firewall blocking it).

What this command does is load the app object from the wsgi.py file and make it accessible from localhost on port 8080. It also spawns four different processes with two threads each, which are automatically load balanced by a master process. This many processes is overkill for the vast majority of websites. To start off, use a single process with two threads and scale up from there.

Instead of adding all of the configuration options on the command line, we can create a text file to hold our configuration, which brings the same benefits for configuration that were listed in the section on supervisor. In a new file in the root of the project directory named uwsgi.ini, add the following:

[uwsgi]
socket = 127.0.0.1:8080
wsgi-file = wsgi.py
callable = app
processes = 4
threads = 2

uWSGI supports hundreds of configuration options as well as several official and unofficial plugins. To leverage the full power of uWSGI, you can explore the documentation at http://uwsgi-docs.readthedocs.org/.

Let's run the server now from supervisor:

[program:webapp]
command=uwsgi uwsgi.ini
directory=/home/deploy/webapp
user=deploy

We also need to install Nginx during the setup function:

def setup():
    …
    sudo("apt-get install -y nginx")

Because we are installing Nginx from the OS's package manager, the OS will handle running Nginx for us.

At the time of writing, the Nginx version in the official Debian package manager is several years old. To install the most recent version, follow the instructions at http://wiki.nginx.org/Install.

Next, we need to create an Nginx configuration file and then copy it to the /etc/nginx/sites-available/ directory when we push the code. In a new file in the root of the project directory named nginx.conf, add the following:

server {
    listen 80;
    server_name your_domain_name;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8080;
    }

    location /static {
        alias /home/deploy/webapp/webapp/static;
    }
}

What this configuration file does is tell Nginx to listen for incoming requests on port 80 and forward all requests to the WSGI application that is listening on port 8080. Also, it makes an exception for any requests for static files and instead sends those requests directly to the file system. Bypassing uWSGI for static files gives a great performance boost, as Nginx is really good at serving static files quickly.

Finally, in the fabfile.py file:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp nginx.conf /etc/nginx/sites-available/[your_domain]")
        sudo("ln -sf /etc/nginx/sites-available/[your_domain] /etc/nginx/sites-enabled/[your_domain]")
        sudo("service nginx restart")

Apache and uWSGI

Using Apache httpd with uWSGI has mostly the same setup.
First off, we need an Apache configuration file, in a new file in the root of our project directory named apache.conf:

<VirtualHost *:80>
    <Location />
        ProxyPass / uwsgi://127.0.0.1:8080/
    </Location>
</VirtualHost>

This file just tells Apache to pass all requests on port 80 to the uWSGI web server listening on port 8080. But this functionality requires an extra Apache plugin from uWSGI called mod_proxy_uwsgi. We can install this, as well as Apache, in the setup command:

def setup():
    …
    sudo("apt-get install -y apache2")
    sudo("apt-get install -y libapache2-mod-proxy-uwsgi")

Finally, in the deploy command, we need to copy our Apache configuration file into Apache's configuration directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp apache.conf /etc/apache2/sites-available/[your_domain]")
        sudo("ln -sf /etc/apache2/sites-available/[your_domain] /etc/apache2/sites-enabled/[your_domain]")
        sudo("service apache2 restart")

Summary

In this article, you learned that there are many different options for hosting your application, each with its own pros and cons. Deciding on one depends on the amount of time and money you are willing to spend, as well as the total number of users you expect.

Resources for Article:

Further resources on this subject: Handling sessions and users [article] Snap – The Code Snippet Sharing Application [article] Man, Do I Like Templates! [article]
Animating Elements

Packt
05 Jul 2016
17 min read
In this article by Alex Libby, author of the book Mastering PostCSS for Web Design, you will study animating elements. A question: if you had the choice of three websites (one static, one with badly done animation, and one that has been enhanced with subtle use of animation), which would you choose? Well, my hope is that the answer should be number three: animation can really make a website stand out if done well, or fail miserably if done badly!

So far, our content has been relatively static, save for the use of media queries. It's time though to take a look at how PostCSS can help make animating content a little easier. We'll begin with a quick recap on the basics of animation before exploring the route from standard jQuery-based animation through to preprocessors, such as SASS, and finally across to PostCSS. We will cover a number of topics throughout this article, which will include:

A recap on the use of jQuery to animate content
Switching to CSS-based animation
Exploring the use of prebuilt libraries, such as Animate.css

(For more resources related to this topic, see here.)

Let's make a start!

Revisiting basic animations

Animation is quickly becoming king in web development; more and more websites are using animations to help bring life and keep content fresh. If done correctly, they add an extra layer of experience for the end user; if done badly, the website will soon lose more custom than water through a sieve!

Throughout the course of the article, we'll take a look at making the change from writing standard animation through to using processors, such as SASS, and finally, switching to using PostCSS. I can't promise you that we'll be creating complex JavaScript-based demos, such as the Caaaat animation (http://roxik.com/cat/; try resizing the window!), but we will see that using PostCSS is really easy when creating animations for the browser.

To kick off our journey, we'll start with a quick look at traditional animation. How many times have you had to use .animate() in jQuery over the years? Thankfully, we have the power of CSS3 to help with simple animations, but there was a time when we had to animate content using jQuery. As a quick reminder, try running animate.html from the T34 - Basic animation using jQuery animate() folder. It's not going to set the world on fire, but it is a nice reminder of times gone by, when we didn't know any better; a plausible reconstruction of its script appears at the end of this section.

If we take a look at a profile of this animation from within a DOM inspector in a browser such as Firefox, it would look something like this screenshot. While the numbers aren't critical, the key points here are the two dotted green lines and the fact that the results show a high degree of inconsistent activity. This is a good indicator that activity is erratic, with a low frame count, resulting in animations that are jumpy and less than 100% smooth.

The great thing though is that there are options available to help provide smoother animations; we'll take a brief look at some of the options available before making the change to using PostCSS. For now though, let's make that first step towards moving away from using jQuery, beginning with a look at the options available for reducing dependency on the use of .animate() or jQuery.
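For reference, the heart of that T34 demo plausibly looks something like the following sketch. This is a reconstruction rather than the exact code from the download; the element names match those used later in this article, but the real demo may differ slightly:

// Classic jQuery animation: move the small square 280px to the right
// when the button is clicked. .animate() updates the inline style on a
// timer, which is exactly the behavior we are about to move away from.
$(document).ready(function() {
  $('#animation-button').on('click', function() {
    $('.rectangle')
      .find('.square-small')
      .animate({ left: '280px' }, 'slow'); // assumes position: relative
  });
});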
A key answer of "it depends" is likely to feature at or near the top of the list of responses; many will argue that animating content should be done using CSS, while others will affirm that JavaScript-based solutions still have value. Leaving this aside, shall we say lively debate? If we're looking to move away from using jQuery and in particular .animate(), then we have some options available to us: Upgrade your version of jQuery! Yes, this might sound at odds with the theme of this article, but the most recent versions of jQuery introduced the use of requestAnimationFrame, which improved performance, particularly on mobile devices. A quick and dirty route is to use the jQuery Animate Enhanced plugin, available from http://playground.benbarnett.net/jquery-animate-enhanced/ - although a little old, it still serves a useful purpose. It will (where possible) convert .animate() calls into CSS3 equivalents; it isn't able to convert all, so any that are not converted will remain as .animate() calls. Using the same principle, we can even take advantage of the JavaScript animation library, GSAP. The Greensock team have made available a plugin (from https://greensock.com/jquery-gsap-plugin) that replaces jQuery.animate() with their own GSAP library. The latter is reputed to be 20 times faster than standard jQuery! With a little effort, we can look to rework our existing code. In place of using .animate(), we can add the equivalent CSS3 style(s) into our stylesheet and replace existing calls to .animate() with either .removeClass() or .addClass(), as appropriate. We can switch to using libraries, such as Transit (http://ricostacruz.com/jquery.transit/). It still requires the use of jQuery, but gives better performance than using the standard .animate() command. Another alternative is Velocity JS by Jonathan Shapiro, available from http://julian.com/research/velocity/; this has the benefit of not having jQuery as a dependency. There is even talk of incorporating all or part of the library into jQuery, as a replacement for .animate(). For more details, check out the issue log at https://github.com/jquery/jquery/issues/2053. Many people automatically assume that CSS animations are faster than JavaScript (or even jQuery). After all, we don't need to call an external library (jQuery); we can use styles that are already baked into the browser, right? The truth is not as straightforward as this. In short, the right use of either will depend on your requirements and the limits of each method. For example, CSS animations are great for simple state changes, but if sequencing is required, then you may have to resort to using the JavaScript route. The key, however, is less in the method used, but more in how many frames per second are displayed on the screen. Most people cannot distinguish above 60fps. This produces a very smooth experience. Anything less than around 25FPS will produce blur and occasionally appear jerky – it's up to us to select the best method available, that produces the most effective solution. To see the difference in frame rate, take a look at https://frames-per-second.appspot.com/ the animations on this page can be controlled; it's easy to see why 60FPS produces a superior experience! So, which route should we take? Well, over the next few pages, we'll take a brief look at each of these options. In a nutshell, they are all methods that either improve how animations run or allow us to remove the dependency on .animate(), which we know is not very efficient! 
True, some of these alternatives still use jQuery, but the key here is that your existing code could be using any one or a mix of these methods. All of the demos over the next few pages were run at the same time as a YouTube video was being played; this was to help simulate a little load and get a more realistic comparison. Running animations under load means less graphics processing power is available, which results in a lower FPS count.

Let's kick off with a look at our first option: the Transit JS library.

Animating content with Transit.js

In an ideal world, any project we build will have as few dependencies as possible; this applies equally to JavaScript or jQuery-based content as it does to CSS styling. To help with reducing dependencies, we can use libraries such as TransitJS or Velocity to construct our animations. The key here is to make use of the animations that these libraries create as a basis for applying styles that we can then manipulate using .addClass() or .removeClass(). To see what I mean, let's explore this concept with a simple demo:

We'll start by opening up a copy of animate.html. To make it easier, we need to change the reference to square-small from a class to an ID:

<div id="square-small"></div>

Next, go ahead and add in a reference to the Transit library immediately before the closing </head> tag:

<script src="js/jquery.transit.min.js"></script>

The Transit library uses a slightly different syntax, so go ahead and update the call to .animate() as indicated:

smallsquare.transition({x: 280}, 'slow');

Save the file and then try previewing the results in a browser. If all is well, we should see no material change in the demo, but the animation will be significantly smoother: the frame count is higher, at 44.28fps, with fewer dips.

Let's compare this with the same profile screenshot taken for revisiting basic animations earlier in this article. Notice anything? Profiling browser activity can be complex, but there are only two things we need to concern ourselves with here: the fps value and the state of the green line. The fps value, or frames per second, is over three times higher, and for a large part, the green line is more consistent, with fewer, more short-lived dips. This means that we have smoother, more consistent performance; at approximately 44fps, the average frame rate is significantly better than using standard jQuery.

But we're still using jQuery! There is a difference though. Libraries such as Transit or Velocity convert animations where possible to CSS3 equivalents. If we take a peek under the covers, we can see this in the flesh. We can use this to our advantage by removing the need to use .animate() and simply using .addClass() or .removeClass().

If you would like to compare our simple animation when using Transit or Velocity, there are examples available in the code download, as demos T35A and T35B, respectively.

To take it to the next step, we can use the Velocity library to create a version of our demo using plain JavaScript. We'll see how as part of the next demo. Beware though: this isn't an excuse to still use JavaScript; as we'll see, there is little difference in the frame count!

Animating with plain JavaScript

Many developers are used to working with jQuery. After all, it makes it a cinch to reference just about any element on a page! Sometimes though, it is preferable to work in native JavaScript; this could be for speed.
If we only need to support newer browsers (such as IE11 or Edge, and recent versions of Chrome or Firefox), then adding jQuery as a dependency isn't always necessary. The beauty of libraries such as Transit (or Velocity) is that we don't always have to use jQuery to achieve the same effect; as we'll see shortly, removing jQuery can help improve matters! Let's put this to the test and adapt our earlier demo to work without using jQuery:

1. We'll start by extracting a copy of the T35B folder from the code download bundle. Save this to the root of our project area.

2. Next, we need to edit a copy of animate.html within this folder. Go ahead and remove the link to jQuery and then remove the link to velocity.ui.min.js; we should be left with this in the <head> of our file:

<link rel="stylesheet" type="text/css" href="css/style.css">
<script src="js/velocity.min.js"></script>
</head>

3. A little further down, alter the <script> block as shown:

<script>
  var smallsquare = document.getElementById('square-small');
  var animbutton = document.getElementById('animation-button');
  animbutton.addEventListener("click", function() {
    Velocity(document.getElementById('square-small'), {left: 280}, {duration: 'slow'});
  });
</script>

4. Save the file and then preview the results in a browser.

If we monitor the performance of our demo using a DOM inspector, we can see a similar frame rate being recorded in our demo. With jQuery as a dependency no longer in the picture, we can clearly see that the frame rate is improved; the downside, though, is that support is reduced for some browsers, such as IE8 or 9. This may not be an issue for your website; both Microsoft and the jQuery Core Team have announced changes to drop support for IE8-10 and IE8 respectively, which will help encourage users to upgrade to newer browsers.

It has to be said though that while using CSS3 is preferable for speed and keeping our pages as lightweight as possible, using Velocity does provide a raft of extra opportunities that may be of use to your projects. The key here though is to carefully consider whether you really do need them, or whether CSS3 will suffice and allow you to use PostCSS.

Switching classes using jQuery

At this point, there is one question that comes to mind: what about using class-based animation? By this, I mean dropping any dependency on external animation libraries, and switching to plain jQuery with either the .addClass() or .removeClass() methods. In theory, it sounds like a great idea: we can remove the need to use .animate() and simply swap classes as needed, right? Well, it's an improvement, but it is still slower than using a combination of pure JavaScript and switching classes. It will all boil down to a trade-off between the ease of using jQuery to reference elements and the speed of pure JavaScript, as follows:

1. We'll start by opening a copy of animate.html from the previous exercise. First, go ahead and replace the call to VelocityJS with this line within the <head> of our document:

<script src="js/jquery.min.js"></script>

2. Next, remove the code between the <script> tags and replace it with this:

var smallsquare = $('.rectangle').find('.square-small');
$('#animation-button').on("click", function() {
  smallsquare.addClass("move");
  smallsquare.one('transitionend', function(e) {
    $('.rectangle').find('.square-small').removeClass("move");
  });
});

3. Save the file.
If we preview the results in a browser, we should see no apparent change in how the demo appears, but the transition is marginally more performant than using a combination of jQuery and Transit.

The real change in our code, though, will be apparent if we take a peek under the covers using a DOM inspector. Instead of using .animate(), we are using CSS3 animation styles to move our square-small <div>. Most browsers will accept the use of transition and transform, but it is worth running our code through a tool such as Autoprefixer to ensure we apply the right vendor prefixes to our code.

The beauty of using CSS3 here is that while it might not suit large, complex animations, we can at least begin to incorporate the use of external stylesheets, such as Animate.css, or even use a preprocessor, such as SASS, to create our styles. It's an easy change to make, so without further ado, and as the next step on our journey to using PostCSS, let's take a look at this in more detail.

If you would like to create custom keyframe-based animations, then take a look at http://cssanimate.com/, which provides a GUI-based interface for designing them and will pipe out the appropriate code when requested!

Making use of prebuilt libraries

Up to this point, all of our animations have had one thing in common: they are individually created and stored within the same stylesheet as the other styles for each project. This will work perfectly well, but we can do better. After all, it's possible that we may well create animations that others have already built! Over time, we may also build up a series of animations that can form the basis of a library that can be reused for future projects.

A number of developers have already done this. One example of note is the Animate.css library created by Dan Eden. In the meantime, let's run through a quick demo of how it works as a precursor to working with it in PostCSS.

The images used in this demo are referenced directly from the LoremPixum website as placeholder images.

Let's make a start:

We'll start by extracting a copy of the T37 folder from the code download bundle. Save the folder to our project area. Next, open a new file and add the following code:

body {
  background: #eee;
}

#gallery {
  width: 745px;
  height: 500px;
  margin-left: auto;
  margin-right: auto;
}

#gallery img {
  border: 0.25rem solid #fff;
  margin: 20px;
  box-shadow: 0.25rem 0.25rem 0.3125rem #999;
  float: left;
}

.animated {
  animation-duration: 1s;
  animation-fill-mode: both;
}

.animated:hover {
  animation-duration: 1s;
  animation-fill-mode: both;
}

Save this as style.css in the css subfolder within the T37 folder. Go ahead and preview the results in a browser. If all is well, then we should see something akin to the following screenshot: if we run the demo, we should see images run through different types of animation. There is nothing special or complicated here.

The question is though, how does it all fit in with PostCSS? Well, there's a good reason for this: there will be some developers who have used Animate.css in the past and will be familiar with how it works; we will also be using the postcss-animation plugin later, in Updating code to use PostCSS, which is based on the Animate.css stylesheet library. For those of you who are not familiar with the stylesheet library though, let's quickly run through how it works within the context of our demo.

Dissecting the code of our demo

The effects used in our demo are quite striking.
Indeed, one might be forgiven for thinking that they required a lot of complex JavaScript! This, however, could not be further from the truth. The Animate.css file contains a number of animations based on @keyframes, similar to this:

@keyframes bounce {
  0%, 20%, 50%, 80%, 100% { transform: translateY(0); }
  40% { transform: translateY(-1.875rem); }
  60% { transform: translateY(-0.9375rem); }
}

We pull in the animations using the usual call to the library within the <head> section of our code. We can then call any animation by name from within our code:

<div id="gallery">
  <a href="#"><img class="animated bounce" src="http://lorempixum.com/200/200/city/1" alt="" /></a>
  ...
</div>
</body>

You will notice the addition of the .animated class in our code. This controls the duration and timing of the animation, which are set according to which animation name has been added to the code.

The downside of not using JavaScript (or jQuery, for that matter) is that the animation will only run once, when the demo is loaded; we can set it to run continuously by adding the .infinite class to the element being animated (this is part of the Animate library). We can fake a click option in CSS, but it is an experimental hack that is not supported across all browsers. To effect any form of control, we really need to use JavaScript (or even jQuery)! If you are interested in the details of the hack, then take a look at this response on Stack Overflow at http://stackoverflow.com/questions/13630229/can-i-have-an-onclick-effect-in-css/32721572#32721572.

Okay! Onward we go. We've covered the basic use of prebuilt libraries, such as Animate. It's time to step up a gear and make the transition to PostCSS.

Summary

In this article, we began with a recap on the use of jQuery to animate content. We then looked into switching to CSS-based animation, and finally saw, in short, how to make use of prebuilt libraries.

Resources for Article:

Further resources on this subject: Responsive Web Design with HTML5 and CSS3 - Second Edition [article] Professional CSS3 [article] Instant LESS CSS Preprocessor How-to [article]

Creating a Personal Web Portal (PWP)

Packt
10 Nov 2016
8 min read
In this article by Sherwin John Calleja Tragura, author of the book Spring MVC Blueprints, we will discuss creating a robust and simple personal web portal that can serve as a personal web page, or a professional reference site, for anyone. Usually, these kinds of websites are used as mashups, or dashboards, of centralized sources of information describing an individual or group.

(For more resources related to this topic, see here.)

Technically, a personal web portal is a composition of web components like CSS, HTML, and JavaScript, woven together to create a formal, simple, or exquisite presentation of any content. It can be used, in its simplest form, as a personal portfolio, or in an enterprise form such as an e-commerce content management system. Commercially, these portals are drafted and designed using the principles of rich-client platforms or responsive web design. In the industry, most companies suggest that clients try easy-to-use tools like PHP frameworks (for example, CodeIgniter, Laravel, Drupal) and seldom advise using JEE-based portals.

Overview of the project

The personal web portal (PWP) created here publishes simple biographical and professional information that one can share through the web. The prototype is session-driven and can do dynamic transactions, like updating information on the web pages and posting notes on the page, without using any back-end database. Using wireframes, the following are the initial drafts and designs of the web portal:

The Home Page: This is the first page of the site; it shows updatable quotes and inspiring messages coming from the owner of the portal. It contains a sticky-note feature at the side that allows visitors to post their short greetings to the owner in real time.

The Personal Information Page: This page highlights the personal information of the owner, including the owner's name, hobbies, birth date, and age. This page contains some part of the blogger's educational history. The page is dynamic and can be updated at any time by the owner.

The Professional Information Page: This page presents details about the owner's career background. It lists all the previous jobs of the account owner and enumerates all skills-related information. This page is also updatable.

The Reach Out Page: This serves as the contact information page of the owner. Moreover, it allows visitors to send their contact information, and specifically their electronic mail address, to the portal owner.

Update pages: The Home, Personal, and Professional pages have update pages through which the owner can change the content of the portal at any time.

This simple prototype, called PWP, will give clear steps on how to build personal sites from the ground up, using Spring MVC 4.x specifications. It will give enthusiasts the opportunity to start creating Spring-based web portals in just a day, without using any database backend. To those who are new to the Spring MVC 4.x concept, this article will be a good start in building full-blown portal sites.

Technical requirements

In order to start the development, the following tools need to be installed onto the platform:

Java Development Kit (JDK) 1.7.x
Spring Tool Suite (Eclipse) 3.6
Maven 3.x
Spring Framework 4.1
Apache Tomcat 7.x
Any operating system

First, the JDK 1.7.x installer must be installed.
Visit the site http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html to download the installer. Next, set up the Spring Tool Suite 3.6 (Eclipse-based), which will be the official Integrated Development Environment (IDE) of this article. Download the Spring Tool Suite 3.6 at https://spring.io/tools/sts.

Setting up the development environment

This article recommends Spring Tool Suite (Eclipse) 3.6, since it has all the Spring Framework 4.x plugins and other dependencies needed by the projects. To start us off, the following screenshot shows the dashboard of the STS IDE.

Apache Maven 3.x will be used to build and deploy the project for this article. Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information (https://maven.apache.org/). There is already a Maven plugin installed in the STS IDE that can be used to generate the needed development directory structure.

Among the many ways to create Spring MVC projects, this article focuses on two styles, namely:

Converting a dynamic web project to a Maven project
Creating a Maven project from scratch

Converting a dynamic web project to a Maven project

To start creating the project, press CTRL + N to browse the Menu wizard of the IDE. This menu wizard contains all the types of project modules you'll need to start a project. Once on the menu, browse the Web option and choose Dynamic Web Project. Afterwards, just follow the series of instructions to create the chosen project module until you reach the last menu wizard, which looks like the following figure.

This last instruction (the Web Module panel) will auto-generate the deployment descriptor (web.xml) of the project. Always click on the Generate web.xml deployment descriptor checkbox option. The deployment descriptor is an XML file that must reside inside the /WEB-INF/ folder of all JEE projects. This file describes how a component, module, or application can be deployed. A JEE project must always have a web.xml file; otherwise, the project will be defective.

Since the Spring 4.x container supports the Servlet Specification 3.0 in Tomcat 7 and above, web.xml is no longer mandatory and can be replaced by the org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer or org.springframework.web.servlet.support.AbstractDispatcherServletInitializer class.

The next major step is to convert the newly created dynamic web project to a Maven one. To complete the conversion, right-click on the project and navigate to the Configure | Convert to Maven Project command set, as shown in the following screenshot.

It is always best for the developer to study the directory structure of the project folder created before the actual implementation starts. The directory structure of the Maven project after the conversion is the same as that of the usual Eclipse Dynamic Web project, except for the added pom.xml file.

Creating a Maven project from scratch

Another method of creating a Spring MVC web project is by creating a Maven project from the start. Be sure to install the Maven 3.2 plugin in STS Eclipse. Browse the Menu wizard again, and locate the Maven option. Click on Maven Project to generate a new Maven project.
After clicking this option, a wizard will pop up, asking whether an archetype is needed to create the Maven project. An archetype is a Maven plugin whose main objective is to create a project structure as per its template. To start quickly, choose an archetype plugin to create a simple Java application here. It is recommended to create the project using the archetype maven-archetype-webapp, although skipping the archetype selection can still be a valid option.

After you've done this, proceed with the Select an Archetype window shown in the following screenshot. Locate maven-archetype-webapp, then proceed with the last process.

The selection of the archetype maven-archetype-webapp will require the input of Maven parameters before ending the whole process with a new Maven project:

Group Id (groupId): This is the ID of the project's group and must be unique among all the project's groups
Artifact Id (artifactId): This is the ID of the project. This is generally the name of the project
Version (version): This is the version of the project
Package (package): The initial or core package of the sources

For more information on Maven plugin and configuration details, visit the documentation and samples on the site http://maven.apache.org/.

After providing the Maven parameters, the project source folder structure will be similar to the following screenshot.

Summary

Using the basic Spring Framework 4.x APIs, web portal creators can create their own platform to promote their personal philosophy, business, ideology, religion, and other concepts. Although it is an advantage to use existing portal platforms made in other languages like PHP and Python, it is still fulfilling to design and develop our own portal based on an open-source framework. The PWP is just prototype software that needs to be upgraded with a backend database, security, and other social media plugins in order to make the software commercially competitive.

Resources for Article:

Further resources on this subject: Designing your very own ASP.NET MVC Application [article] ASP.NET MVC 2: Validating MVC [article] Using ASP.NET Controls in SharePoint [article]

Testing a UI Using WebDriverJS

Packt
17 Feb 2015
30 min read
In this article by Enrique Amodeo, author of the book Learning Behavior-driven Development with JavaScript, we will look into an advanced concept: how to test a user interface. For this purpose, you will learn the following topics:

Using WebDriverJS to manipulate a browser and inspect the resulting HTML generated by our UI
Organizing our UI codebase to make it easily testable
The right abstraction level for our UI tests

(For more resources related to this topic, see here.)

Our strategy for UI testing

There are two traditional strategies for approaching the problem of UI testing: record-and-replay tools and end-to-end testing.

The first approach, record-and-replay, leverages tools capable of recording user activity in the UI and saving it into a script file. This script file can later be executed to perform exactly the same UI manipulation as the user performed and to check whether the results are exactly the same. This approach is not very compatible with BDD, for the following reasons:

We cannot test-first our UI. To be able to use the UI and record the user activity, we first need to have most of the code of our application in place. This is not a problem in the waterfall approach, where QA and testing are performed after the coding phase is finished. However, in BDD, we aim to document the product features as automated tests, so we should write the tests before or during the coding.

The resulting test scripts are low-level and totally disconnected from the problem domain. There is no way to use them as live documentation for the requirements of the system.

The resulting test suite is brittle, and it will stop working whenever we make slight changes, even cosmetic ones, to the UI. The problem is that the tools record low-level interaction with the system that depends on technical details of the HTML.

The other classic approach is end-to-end testing, where we not only test the UI layer, but also most of the system, or even the whole of it. To perform the setup of the tests, the most common approach is to substitute the third-party systems with test doubles. Normally, the database is under the control of the development team, so some practitioners use a regular database for the setup. However, we could use an in-memory database or even mock the DAOs. In any case, this approach prompts us to create an integrated test suite where we are not only testing the correctness of the UI, but the business logic as well.

In the context of this discussion, an integrated test is a test that checks several layers of abstraction, or subsystems, in combination. Do not confuse it with the act of testing several classes or functions together.

This approach is not inherently against BDD; for example, we could use Cucumber.js to capture the features of the system and implement Gherkin steps using WebDriver to drive the UI and make assertions. In fact, for most people, when you say BDD, they interpret the term to refer to exactly this kind of test. However, we will end up writing a lot of test cases, because we need to combine the scenarios from the business logic domain with the ones from the UI domain. Furthermore, in which language should we formulate the tests? If we use the UI language, maybe it will be too low-level to easily describe business concepts. If we use the business domain language, maybe we will not be able to test the important details of the UI, because they are too low-level.
Alternatively, we can even end up with tests that mix UI language with business terminology, so they will neither be focused nor very clear to anyone.

Choosing the right tests for the UI

If we want to test whether the UI works, why should we test the business rules? After all, these are already tested in the BDD test suite of the business logic layer. To decide which tests to write, we should first determine the responsibilities of the UI layer, which are as follows:

Presenting the information provided by the business layer to the user in a nice way.

Transforming user interaction into requests for the business layer.

Controlling the changes in the appearance of the UI components, which includes things such as enabling/disabling controls, highlighting entry fields, showing/hiding UI elements, and so on.

Orchestration between the UI components. Transferring and adapting information between the UI components and navigation between pages fall under this category.

We do not need to write tests about business rules, and we should not assume much about the business layer itself, apart from a loose contract. How should we word our tests? We should use a UI-related language when we talk about what the user sees and does. Words such as fields, buttons, forms, links, click, hover, highlight, enable/disable, or show and hide are relevant in this context. However, we should not go too far; otherwise, our tests will be too brittle. Saying, for example, that the name field should have a pink border is too low-level. The moment the designer decides to use red instead of pink, or changes his mind and decides to change the background color instead of the border, our test will break. We should aim for tests that express the real intention of the user interface; for example, the name field should be highlighted as incorrect.

The testing architecture

At this point, we could write tests relevant for our UI using the following testing architecture:

A simple testing architecture for our UI

We can use WebDriver to issue user gestures to interact with the browser. These user gestures are transformed by the browser into DOM events that are the inputs of our UI logic and will trigger operations on it. We can use WebDriver again to read the resulting HTML in the assertions. We can simply use a test double to impersonate our server, so we can set up our tests easily. This architecture is very simple and sounds like a good plan, but it is not! There are three main problems here:

UI testing is very slow. Take into account that the boot time and shutdown phase can take 3 seconds on a normal laptop. Each UI interaction using WebDriver can take between 50 and 100 milliseconds, and the latency with the fake server can be an extra 10 milliseconds. This gives us only around 10 tests per second, plus an extra 3 seconds.

UI tests are complex and difficult to diagnose when they fail. What is failing? The selectors we use to tell WebDriver how to find the relevant elements? Some race condition we were not aware of? A cross-browser issue? Also note that our test is now distributed between two different processes, a fact that always makes debugging more difficult.

UI tests are inherently brittle. We can try to make them less brittle with best practices, but even then, a change in the structure of the HTML code will sometimes break our tests. This is a bad thing, because the UI often changes more frequently than the business layer.
As UI testing is very risky and expensive, we should try to write as few tests that interact with the UI as possible. We can achieve this, without losing testing power, with the following testing architecture:

A smarter testing architecture

We have now split our UI layer into two components: the view and the UI logic. This design aligns with the family of MV* design patterns. In the context of this article, the view corresponds with a passive view, and the UI logic corresponds with the controller or the presenter, in combination with the model.

A passive view is usually very hard to test, so in this article we will focus mostly on how to do it. You will often be able to easily separate the passive view from the UI logic, especially if you are using an MV* pattern, such as MVC, MVP, or MVVM.

Most of our tests will be for the UI logic. This is the component that implements the client-side validation, orchestration of UI components, navigation, and so on. It is the UI logic component that has all the rules about how the user can interact with the UI, and hence it needs to maintain some kind of internal state. The UI logic component can be tested completely in memory using standard techniques. We can simply mock the XMLHttpRequest object, or the corresponding object in the framework we are using, and test everything in memory using a single Node.js process. No interaction with the browser and the HTML is needed, so these tests will be blazingly fast and robust.

Then we need to test the view. This is a very thin component with only two responsibilities:

Manipulating and updating the HTML to present the user with the information whenever it is instructed to do so by the UI logic component

Listening for HTML events and transforming them into suitable requests for the UI logic component

The view should not have more responsibilities, and it is a stateless component. It simply does not need to store internal state, because it only transforms and transmits information between the HTML and the UI logic. Since it is the only component that interacts with the HTML, it is the only one that needs to be tested using WebDriver. The point of all of this is that the view can be tested with only a bunch of tests that are conceptually simple. Hence, we minimize the number and complexity of the tests that need to interact with the UI. A minimal sketch of such a passive view follows.
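The following is illustrative only and not code from the book; the createOrderView name, the uiLogic collaborator, and the selectors are all invented for the example. It shows the two responsibilities and nothing else:

// A stateless passive view: it updates the HTML only when told to, and
// forwards DOM events to the UI logic. It stores no state of its own.
function createOrderView(rootElement, uiLogic) {
  // Responsibility 2: transform HTML events into requests for the UI logic
  rootElement.querySelector('.add-item-btn')
      .addEventListener('click', function() {
        uiLogic.addItemRequested();
      });

  return {
    // Responsibility 1: update the HTML whenever the UI logic instructs us to
    update: function(viewModel) {
      rootElement.querySelector('.total').textContent = viewModel.total;
      rootElement.querySelector('.error-msg').hidden = !viewModel.hasError;
    }
  };
}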
The point is that the techniques I will show you here can be applied with any client of WebDriver, or even with other tools that are not WebDriver. Selenium 2.0 is a tool that allows us to make direct calls to a browser automation API. This way, we can simulate native events, we can access the DOM, and we can control the browser. Each browser provides a different API and has its own quirks, but Selenium 2.0 will offer us a unified API called the WebDriver API. This allows us to interact with different browsers without changing the code of our tests. As we are accessing the browser directly, we do not need a special server, unless we want to control browsers that are on a different machine. Actually, due to some technical limitations, this is only true if we want to test against a Google Chrome or a Firefox browser using WebDriverJS. So, basically, the testing architecture for our passive view looks like this: Testing with WebDriverJS We can see that we use WebDriverJS for the following:
Sending native events to manipulate the UI, as if we were the user, during the action phase of our tests
Inspecting the HTML during the assert phase of our test
Sending small scripts to set up the test doubles, check them, and invoke the update method of our passive view
Apart from this, we need some extra infrastructure, such as a web server that serves our test HTML page and the components we want to test. As is evident from the diagram, the commands of WebDriverJS require some network traffic to be able to send the appropriate request to the browser automation API, wait for the browser to execute it, and get the result back through the network. This forces the API of WebDriverJS to be asynchronous in order to not block unnecessarily. That is why WebDriverJS has an API designed around promises. Most of the methods will return a promise or an object whose methods return promises. This plays perfectly well with Mocha and Chai. There is a W3C specification for the WebDriver API. If you want to have a look, just visit https://dvcs.w3.org/hg/webdriver/raw-file/default/webdriver-spec.html. The API of WebDriverJS is a bit complex, and you can find its official documentation at http://selenium.googlecode.com/git/docs/api/javascript/module_selenium-webdriver.html. However, to follow this article, you do not need to read it, since I will now show you the most important API that WebDriverJS offers us. Finding and interacting with elements It is very easy to find an HTML element using WebDriverJS; we just need to use either the findElement or the findElements method. Both methods receive a locator object specifying which element or elements to find. The first method will return the first element it finds, or simply fail with an exception if there are no elements matching the locator. The findElements method will return a promise for an array with all the matching elements. If there are no matching elements, the promised array will be empty and no error will be thrown. How do we specify which elements we want to find? To do so, we need to use a locator object as a parameter. For example, if we would like to find the element whose identifier is order_item1, then we could use the following code:

var By = require('selenium-webdriver').By;

driver.findElement(By.id('order_item1'));

We need to import the selenium-webdriver module and capture its locator factory object. By convention, we store this locator factory in a variable called By. Later, we will see how we can get a WebDriverJS instance.
This code is very expressive, but a bit verbose. There is another version of this:

driver.findElement({ id: 'order_item1' });

Here, the locator criteria are passed in the form of a plain JSON object. There is no need to use the By object or any factory. Which version is better? Neither. You just use the one you like most. In this article, the plain JSON locator will be used. The following are the criteria for finding elements:
Using the tag name, for example, to locate all the <li> elements in the document:

driver.findElements(By.tagName('li'));
driver.findElements({ tagName: 'li' });

We can also locate using the name attribute. It can be handy to locate the input fields. The following code will locate the first element named password:

driver.findElement(By.name('password'));
driver.findElement({ name: 'password' });

Using the class name; for example, the following code will locate the first element that contains a class called item:

driver.findElement(By.className('item'));
driver.findElement({ className: 'item' });

We can use any CSS selector that our target browser understands. If the target browser does not understand the selector, it will throw an exception; for example, to find the second item of an order (assuming there is only one order on the page):

driver.findElement(By.css('.order .item:nth-of-type(2)'));
driver.findElement({ css: '.order .item:nth-of-type(2)' });

Using a CSS selector alone, you can locate any element, and it is the approach I recommend. The other criteria can be very handy in specific situations. There are more ways of locating elements, such as linkText, partialLinkText, or xpath, but I seldom use them. Locating elements by their text, as in linkText or partialLinkText, is brittle because small changes in the wording of the text can break the tests. Also, locating by xpath is not as useful in HTML as using a CSS selector. Obviously, it can be used if the UI is defined as an XML document, but this is very rare nowadays. In both methods, findElement and findElements, the resulting HTML elements are wrapped as a WebElement object. This object allows us to send an event to that element or inspect its contents. Some of its methods that allow us to manipulate the DOM are as follows:
clear(): This will do nothing unless WebElement represents an input control. In this case, it will clear its value and then trigger a change event. It returns a promise that will be fulfilled whenever the operation is done.
sendKeys(text or key, …): This will do nothing unless WebElement is an input control. In this case, it will send the equivalents of keyboard events to the parameters we have passed. It can receive one or more parameters with a text or key object. If it receives text, it will transform the text into a sequence of keyboard events. This way, it will simulate a user typing on a keyboard. This is more realistic than simply changing the value property of an input control, since the proper keyDown, keyPress, and keyUp events will be fired. A promise is returned that will be fulfilled when all the key events are issued. For example, to simulate that a user enters some search text in an input field and then presses Enter, we can use the following code:

var Key = require('selenium-webdriver').Key;

var searchField = driver.findElement({name: 'searchTxt'});
searchField.sendKeys('BDD with JS', Key.ENTER);

The webdriver.Key object allows us to specify any key that does not represent a character, such as Enter, the up arrow, Command, Ctrl, Shift, and so on.
We can also use its chord method to represent a combination of several keys pressed at the same time. For example, to simulate Alt + Command + J, use searchField.sendKeys(Key.chord(Key.ALT, Key.COMMAND, 'J'));.
click(): This will issue a click event just in the center of the element. The returned promise will be fulfilled when the event is fired. Sometimes, the center of an element is nonclickable, and an exception is thrown! This can happen, for example, with table rows, since the center of a table row may just be the padding between cells!
submit(): This will look for the form that contains this element and will issue a submit event.
Apart from sending events to an element, we can inspect its contents with the following methods:
getId(): This will return a promise with the internal identifier of this element used by WebDriver. Note that this is not the value of the DOM ID property!
getText(): This will return a promise that will be fulfilled with the visible text inside this element. It will include the text in any child element and will trim the leading and trailing whitespaces. Note that, if this element is not displayed or is hidden, the resulting text will be an empty string!
getInnerHtml() and getOuterHtml(): These will return a promise that will be fulfilled with a string that contains innerHTML or outerHTML of this element.
isSelected(): This will return a promise with a Boolean that determines whether the element has either been selected or checked. This method is designed to be used with the <option> elements.
isEnabled(): This will return a promise with a Boolean that determines whether the element is enabled or not.
isDisplayed(): This will return a promise with a Boolean that determines whether the element is displayed or not. Here, "displayed" is taken in a broad sense; in general, it means that the user can see the element without resizing the browser. For example, if the element is hidden, has display: none, has no size, or is in an inaccessible part of the document, the returned promise will be fulfilled as false.
getTagName(): This will return a promise with the tag name of the element.
getSize(): This will return a promise with the size of the element. The size comes as a JSON object with width and height properties that indicate the width and height in pixels of the bounding box of the element. The bounding box includes padding, margin, and border.
getLocation(): This will return a promise with the position of the element. The position comes as a JSON object with x and y properties that indicate the coordinates in pixels of the element relative to the page.
getAttribute(name): This will return a promise with the value of the specified attribute. Note that WebDriver does not distinguish between attributes and properties! If there is neither an attribute nor a property with that name, the promise will be fulfilled as null. If the attribute is a "boolean" HTML attribute (such as checked or disabled), the promise will be evaluated as true only if the attribute is present. If there is both an attribute and a property with the same name, the attribute value will be used. If you really need to be precise about getting an attribute or a property, it is much better to use an injected script to get it.
getCssValue(cssPropertyName): This will return a promise with a string that represents the computed value of the specified CSS property.
The computed value is the resulting value after the browser has applied all the CSS rules and the style and class attributes. Note that the specific representation of the value depends on the browser; for example, the color property can be returned as red, #ff0000, or rgb(255, 0, 0) depending on the browser. This is not cross-browser, so we should avoid this method in our tests.
findElement(locator) and findElements(locator): These will return the first element, or all the elements, that are descendants of this element and match the locator.
isElementPresent(locator): This will return a promise with a Boolean that indicates whether there is at least one descendant element that matches this locator.
As you can see, the WebElement API is pretty simple and allows us to do most of our tests easily. However, what if we need to perform some complex interaction with the UI, such as drag-and-drop? Complex UI interaction WebDriverJS allows us to define a complex action gesture in an easy way using the DSL defined in the webdriver.ActionSequence object. This DSL allows us to define any sequence of browser events using the builder pattern. For example, to simulate a drag-and-drop gesture, proceed with the following code:

var beverageElement = driver.findElement({ id: 'expresso' });
var orderElement = driver.findElement({ id: 'order' });
driver.actions()
    .mouseMove(beverageElement)
    .mouseDown()
    .mouseMove(orderElement)
    .mouseUp()
    .perform();

We want to drag an espresso to our order, so we move the mouse to the center of the espresso and press the mouse button. Then, we move the mouse, dragging the element, over the order. Finally, we release the mouse button to drop the espresso. We can add as many actions as we want, but the sequence of events will not be executed until we call the perform method. The perform method will return a promise that will be fulfilled when the full sequence is finished. The webdriver.ActionSequence object has the following methods:
sendKeys(keys...): This sends a sequence of key events, exactly as we saw earlier with the method of the same name in WebElement. The difference is that the keys will be sent to the document instead of a specific element.
keyUp(key) and keyDown(key): These send the keyUp and keyDown events. Note that these methods only admit the modifier keys: Alt, Ctrl, Shift, command, and meta.
mouseMove(targetLocation, optionalOffset): This will move the mouse from the current location to the target location. The location can be defined either as a WebElement or as page-relative coordinates in pixels, using a JSON object with x and y properties. If we provide the target location as a WebElement, the mouse will be moved to the center of the element. In this case, we can override this behavior by supplying an extra optional parameter indicating an offset relative to the top-left corner of the element. This could be needed in the case that the center of the element cannot receive events.
mouseDown(), click(), doubleClick(), and mouseUp(): These will issue the corresponding mouse events. All of these methods can receive zero, one, or two parameters.
Let's see what they mean with the following examples:

var Button = require('selenium-webdriver').Button;

// to emit the event in the center of the expresso element
driver.actions().mouseDown(expresso).perform();
// to make a right click in the current position
driver.actions().click(Button.RIGHT).perform();
// Middle click in the expresso element
driver.actions().click(expresso, Button.MIDDLE).perform();

The webdriver.Button object defines the three possible buttons of a mouse: LEFT, RIGHT, and MIDDLE. However, note that mouseDown() and mouseUp() only support the LEFT button!
dragAndDrop(element, location): This is a shortcut to performing a drag-and-drop of the specified element to the specified location. Again, the location can be a WebElement or a page-relative coordinate.
Injecting scripts We can use WebDriver to execute scripts in the browser and then wait for their results. There are two methods for this: executeScript and executeAsyncScript. Both methods receive a script and an optional list of parameters and send the script and the parameters to the browser to be executed. They return a promise that will be fulfilled with the result of the script; it will be rejected if the script fails. An important detail is how the script and its parameters are sent to the browser. For this, they need to be serialized and sent through the network. Once there, they will be deserialized, and the script will be executed inside an autoexecuted function that will receive the parameters as arguments. As a result of this, our scripts cannot access any variable in our tests, unless they are explicitly sent as parameters. The script is executed in the browser with the window object as its execution context (the value of this). When passing parameters, we need to take into consideration the kind of data that WebDriver can serialize. This data includes the following:
Booleans, strings, and numbers.
The null and undefined values. However, note that undefined will be translated as null.
Any function will be transformed to a string that contains only its body.
A WebElement object will be received as a DOM element. So, it will not have the methods of WebElement, but the standard DOM methods instead. Conversely, if the script results in a DOM element, it will be received as a WebElement in the test.
Arrays and objects will be converted to arrays and objects whose elements and properties have been converted using the preceding rules.
With this in mind, we could, for example, retrieve the identifier of an element, such as the following one:

var elementSelector = ".order ul > li";
driver.executeScript(
    "return document.querySelector(arguments[0]).id;",
    elementSelector
).then(function(id) {
  expect(id).to.be.equal('order_item0');
});

Notice that the script is specified as a string with the code. This can be a bit awkward, so there is an alternative available:

var elementSelector = ".order ul > li";
driver.executeScript(function() {
    var selector = arguments[0];
    return document.querySelector(selector).id;
}, elementSelector).then(function(id) {
  expect(id).to.be.equal('order_item0');
});

WebDriver will just convert the body of the function to a string and send it to the browser. Since the script is executed in the browser, we cannot access the elementSelector variable directly, and we need to access it through parameters. Unfortunately, we are forced to retrieve the parameters using the arguments pseudoarray, because WebDriver has no way of knowing the name of each argument.
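The conversion rules also work in the other direction. As a quick sketch (reusing the same hypothetical selector as above), we can pass a WebElement into a script and return a DOM element, which the test receives again as a WebElement:

var item = driver.findElement({ css: '.order ul > li' });

driver.executeScript(function() {
    // Inside the browser, the WebElement argument is a plain DOM element
    var domElement = arguments[0];
    return domElement.parentNode; // returning a DOM element...
}, item).then(function(parent) {
    // ...which arrives back in the test wrapped as a WebElement
    return expect(parent.getTagName()).to.eventually.be.equal('ul');
});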
As its name suggests, executeAsyncScript allows us to execute an asynchronous script. In this case, the last argument provided to the script is always a callback that we need to call to signal that the script has finalized. The result of the script will be the first argument provided to that callback. If no argument or undefined is explicitly provided, then the result will be null. Note that this is not directly compatible with the Node.js callback convention and that any extra parameters passed to the callback will be ignored. There is no way to explicitly signal an error in an asynchronous way. For example, if we want to return the value of an asynchronous DAO, then proceed with the following code:

driver.executeAsyncScript(function() {
  var cb = arguments[1],
      userId = arguments[0];
  window.userDAO.findById(userId).then(cb, cb);
}, 'user1').then(function(userOrError) {
  expect(userOrError).to.be.equal(expectedUser);
});

Command control flows All the commands in WebDriverJS are asynchronous and return a promise or a WebElement. How do we execute an ordered sequence of commands? Well, using promises, it could be something like this:

return driver.findElement({name:'quantity'}).sendKeys('23')
    .then(function() {
      return driver.findElement({name:'add'}).click();
    })
    .then(function() {
      return driver.findElement({css:firstItemSel}).getText();
    })
    .then(function(quantity) {
      expect(quantity).to.be.equal('23');
    });

This works because we wait for each command to finish before issuing the next command. However, it is a bit verbose. Fortunately, with WebDriverJS we can do the following:

driver.findElement({name:'quantity'}).sendKeys('23');
driver.findElement({name:'add'}).click();
return expect(driver.findElement({css:firstItemSel}).getText())
    .to.eventually.be.equal('23');

How can the preceding code work? Whenever we tell WebDriverJS to do something, it simply schedules the requested command in a queue-like structure called the control flow. The point is that each command will not be executed until it reaches the top of the queue. This way, we do not need to explicitly wait for the sendKeys command to be completed before executing the click command. The sendKeys command is scheduled in the control flow before click, so the latter will not be executed until sendKeys is done. All the commands are scheduled against the same control flow queue that is associated with the WebDriver object. However, we can optionally create several control flows if we want to execute commands in parallel:

var flow1 = webdriver.promise.createFlow(function() {
  var driver = new webdriver.Builder().build();
  // do something with driver here
});
var flow2 = webdriver.promise.createFlow(function() {
  var driver = new webdriver.Builder().build();
  // do something with driver here
});
webdriver.promise.fullyResolved([flow1, flow2]).then(function(){
  // Wait for flow1 and flow2 to finish and do something
});

We need to create each control flow instance manually and, inside each flow, create a separate WebDriver instance. The commands in both flows will be executed in parallel, and we can wait for both of them to be finalized to do something else using fullyResolved. In fact, we can even nest flows if needed to create a custom parallel command-execution graph. Taking screenshots Sometimes, it is useful to take some screenshots of the current screen for debugging purposes. This can be done with the takeScreenshot() method.
This method will return a promise that will be fulfilled with a string that contains a base-64 encoded PNG. It is our responsibility to save this string as a PNG file. The following snippet of code will do the trick:

driver.takeScreenshot()
    .then(function(shot) {
      fs.writeFileSync(fileFullPath, shot, 'base64');
    });

Note that not all browsers support this capability. Read the documentation for the specific browser adapter to see if it is available. Working with several tabs and frames WebDriver allows us to control several tabs, or windows, of the same browser. This can be useful if we want to test several pages in parallel or if our test needs to assert or manipulate things in several frames at the same time. This can be done with the switchTo() method that will return a webdriver.WebDriver.TargetLocator object. This object allows us to change the target of our commands to a specific frame or window. It has the following three main methods:
frame(nameOrIndex): This will switch to a frame with the specified name or index. It will return a promise that is fulfilled when the focus has been changed to the specified frame. If we specify the frame with a number, this will be interpreted as a zero-based index in the window.frames array.
window(windowName): This will switch focus to the window with the specified name. The returned promise will be fulfilled when it is done.
alert(): This will switch the focus to the active alert window. We can dismiss an alert with driver.switchTo().alert().dismiss();.
The promise returned by these methods will be rejected if the specified window, frame, or alert window is not found. To run tests on several tabs at the same time, we must ensure that they do not share any kind of state, or interfere with each other through cookies, local storage, or any other kind of mechanism. Summary This article showed us that a good way to test the UI of an application is actually to split it into two parts and test them separately. One part is the core logic of the UI that takes responsibility for control logic, models, calls to the server, validations, and so on. This part can be tested in a classic way, using BDD, and mocking the server access. No new techniques are needed for this, and the tests will be fast. Here, we can involve nonengineer stakeholders, such as UX designers, users, and so on, to write some nice BDD features using Gherkin and Cucumber.js. The other part is a thin view layer that follows a passive view design. It only updates the HTML when it is asked to, and listens to DOM events to transform them into requests for the core UI logic layer. This layer has no internal state or control rules; it simply transforms data and manipulates the DOM. We can use WebDriverJS to test the view. This is a good approach because the most complex part of the UI can be fully test-driven easily, and the view, which is hard and slow to test, does not need many tests since it is very simple. In this sense, the passive view should not have a state; it should only act as a proxy of the DOM. Resources for Article: Further resources on this subject: Dart With Javascript [article] Behavior-Driven Development With Selenium WebDriver [article] Event-Driven Programming [article]
Optimizing JavaScript for iOS Hybrid Apps

Packt
01 Apr 2015
17 min read
In this article by Chad R. Adams, author of the book Mastering JavaScript High Performance, we are going to take a look at the process of optimizing JavaScript for iOS web apps (also known as hybrid apps). We will take a look at some common ways of debugging and optimizing JavaScript and page performance, both in a device's web browser and a standalone app's web view. Also, we'll take a look at the Apple Web Inspector and see how to use it for iOS development. Finally, we will also gain a bit of understanding about building a hybrid app and learn the tools that help to better build JavaScript-focused apps for iOS. Moreover, we'll learn about a class, WKWebView, that might help us further in this. We are going to learn about the following topics in the article:
Getting ready for iOS development
iOS hybrid development
(For more resources related to this topic, see here.) Getting ready for iOS development Before starting this article with Xcode examples and using iOS Simulator, I will be displaying some native code and will use tools that haven't been covered in this course. Mobile app development, regardless of platform, is a book in itself. When covering the build of the iOS project, I will briefly go over the process of setting up a project and writing non-JavaScript code to get our JavaScript files into a hybrid iOS WebView for development. This is essential due to the way iOS secures its HTML5-based apps. Apps on iOS that use HTML5 can be debugged, either from a server or from an app directly, as long as the app's project is built and deployed in its debug setting on a host system (meaning the developer's machine). Readers of this article are not expected to know how to build a native app from beginning to end, and that's completely acceptable; you can copy-and-paste and follow along as I go. But I will show you the code to get us to the point of testing JavaScript code, and the code used will be the smallest and fastest possible to render your content. All of these code samples will be hosted as an Xcode project solution of some type on Packt Publishing's website, but they will also be shown here if you want to follow along without relying on the code samples. Now, with that said, let's get started… iOS hybrid development Xcode is the IDE provided by Apple to develop apps for both iOS devices and desktop devices for Macintosh systems. As a JavaScript editor, it has pretty basic functions, so Xcode should mainly be used as an addition to a JavaScript project's toolset. It provides basic code hinting for JavaScript, HTML, and CSS, but not more than that. To install Xcode, we will need to start the installation process from the Mac App Store. Apple, in recent years, has moved its IDE to the Mac App Store for faster updates to developers and, subsequently, faster app updates for iOS and Mac applications. Installation is easy; simply log in with your Apple ID in the Mac App Store and download Xcode; you can search for it at the top or, if you look in the right rail among popular free downloads, you can find a link to the Xcode Mac App Store page. Once you reach this, click Install as shown in the following screenshot: It's important to know that, for the sake of simplicity in this article, we will not deploy an app to a device; if you want to do so, you will need to be actively enrolled in Apple's Developer Program. The cost is 99 dollars a year, or 299 dollars for an enterprise license that allows deployment of an app outside the control of the iOS App Store.
If you're curious to learn more about deploying to a device, the code in this article will run on a device, assuming that your certificates are set up on your end. For more information on this, check out Apple's iOS Developer Center documentation online at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40012582. Once it's installed, we can open up Xcode and look at the iOS Simulator; we can do this by clicking Xcode, followed by Open Developer Tool, and then clicking on iOS Simulator. Upon first opening iOS Simulator, we will see what appears to be a simulation of an iOS device, shown in the next screenshot. Note that this is a simulation, not a real iOS device (even if it feels pretty close). A neat trick for JavaScript developers working with local HTML files outside an app is that they can quickly drag-and-drop an HTML file onto the simulator. When they do, the simulator will open the mobile version of Safari, the built-in browser for iPhones and iPads, and render the page as it would on an iOS device; this is pretty helpful when testing pages before deploying them to a web server. Setting up a simple iOS hybrid app JavaScript performance in a built-in hybrid application can be much slower than the same page run in the mobile version of Safari. To test this, we are going to build a very simple web browser using Apple's new programming language, Swift. Swift is an iOS-ready language that JavaScript developers should feel at home with. Swift itself follows a syntax similar to JavaScript but, unlike JavaScript, variables and objects can be given types, allowing for stronger, more accurate coding. In that regard, Swift follows a syntax similar to what can be seen in ECMAScript 6 and TypeScript. If you are checking these newer languages out, I encourage you to check out Swift as well. Now let's create a simple web view, also known as a UIWebView, which is the class used to create a web view in an iOS app. First, let's create a new iPhone project; we are using an iPhone to keep our app simple. Open Xcode and select the Create a new Xcode project option; then, as shown in the following screenshot, select the Single View Application option and click the Next button. On the next view of the wizard, set the product name as JS_Performance, the language to Swift, and the device to iPhone; the organization name should autofill with your name based on your account name in the OS. The organization identifier is a reverse domain name unique identifier for our app; this can be whatever you deem appropriate. For instructional purposes, here's my setup: Once your project names are set, click the Next button and save to a folder of your choice with Git repository left unchecked. When that's done, select Main.storyboard under your Project Navigator, which is found in the left panel. We should be in the storyboard view now. Let's open the Object Library, which can be found in the lower-right panel in the subtab with an icon of a square inside a circle. Search for Web View in the Object Library's bottom-right search bar, and then drag that to the square view that represents our iOS view. We need to consider two more things before we link up an HTML page using Swift. First, we need to set constraints, as native iOS objects will be stretched to fit various iOS device windows.
To fill the space, you can add the constraints by selecting the UIWebView object and pressing Command + Option + Shift + = on your Mac keyboard. You should now see a blue border appear briefly around your UIWebView. Lastly, we need to connect our UIWebView to our Swift code; for this, we need to open the Assistant Editor by pressing Command + Option + Return on the keyboard. We should see ViewController.swift open up in a side panel next to our Storyboard. To link this as a code variable, right-click (or option-click) the UIWebView object and, with the button held down, drag the UIWebView to line number 12 in the ViewController.swift code in our Assistant Editor. This is shown in the following diagram: Once that's done, a popup will appear. Leave everything as it comes up, but set the name to webview; this will be the variable referencing our UIWebView. With that done, save your Main.storyboard file and navigate to your ViewController.swift file. Now take a look at the Swift code shown in the following screenshot, and copy it into the project; the important part is on line 19, which contains the filename and type loaded into the web view, which in this case is index.html. Now obviously, we don't have an index.html file, so let's create one. Go to File and then select New followed by the New File option. Next, under iOS select Empty Application and click Next to complete the wizard. Save the file as index.html and click Create. Now open the index.html file, and type the following code into the HTML page: <br />Hello <strong>iOS</strong> Now click Run (the play button in the main Xcode toolbar), and we should see our HTML page running inside our own app, as shown here: That's nice work! We built an iOS app with Swift (even if it's a simple app). Let's create a structured HTML page; we will override our Hello iOS text with the HTML shown in the following screenshot (a reconstructed sketch of this page appears at the end of this section): Here, we use the standard console.time function and print a message to our UIWebView page when finished; if we hit Run in Xcode, we will see the Loop Completed message on load. But how do we get our performance information? How can we see the output of the console.timeEnd function call on line 14 of our HTML page? Using Safari web inspector for JavaScript performance Apple does provide a Web Inspector for UIWebViews, and it's the same inspector as for desktop Safari. It's easy to use, but has an issue: the inspector only works on iOS Simulators and devices that have been started from an Xcode project. This limitation is due to security concerns for hybrid apps that may contain sensitive JavaScript code that could be exploited if visible. Let's check our project's embedded HTML page console. First, open desktop Safari on your Mac and enable developer mode. Launch the Preferences option. Under the Advanced tab, ensure that the Show develop menu in menu bar option is checked, as shown in the following screenshot: Next, let's rerun our Xcode project, start up iOS Simulator, and then rerun our page. Once our app is running with the Loop Completed result showing, open desktop Safari and click Develop, then iOS Simulator, followed by index.html. If you look closely, you will see the iOS Simulator's UIWebView highlighted in blue when you place the mouse over index.html, as shown in the following screenshot: Once we release the mouse on index.html, Safari's Web Inspector window appears, featuring our hybrid iOS app's DOM and console information.
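Since the timed-loop page appears only as a screenshot in the original article, the following is a reconstructed sketch of what it might look like. The loop bound and exact markup are assumptions; only the console.time/console.timeEnd pair and the Loop Completed message match what is described above:

<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>JS_Performance</title></head>
<body>
<div id="result">Loading...</div>
<script>
// Start a named timer, burn some CPU in a loop, then stop the timer
console.time('Loop');
var total = 0;
for (var i = 0; i < 100000; i++) {
  total += i;
}
console.timeEnd('Loop'); // prints the elapsed time to the console
document.getElementById('result').innerHTML = 'Loop Completed';
</script>
</body>
</html>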
Safari's Web Inspector is pretty similar to Chrome's Developer tools in terms of feature sets; the panels used in the Developer tools also exist as icons in Web Inspector. Now let's select the Console panel in Web Inspector. Here, we can see our full console window, including the timer from our console.time test around the for loop. As we can see in the following screenshot, the loop took 0.081 milliseconds to process inside iOS. Comparing UIWebView with Mobile Safari What if we wanted to take our code and move it to Mobile Safari to test? This is easy enough; as mentioned earlier in the article, we can drag-and-drop the index.html file into our iOS Simulator, and then the OS will open the mobile version of Safari and load the page for us. With that ready, we will need to reconnect Safari Web Inspector to the iOS Simulator and reload the page. Once that's done, we can see that our console.time function is a bit faster; this time it's roughly 0.07 milliseconds, a full 0.01 milliseconds faster than in UIWebView, as shown here: For a small app, this is a minimal difference in performance. But, as an application gets larger, the delay in these JavaScript processes gets longer and longer. We can also debug the app using the debugging inspector in Safari's Web Inspector tool. Click Debugger in the top menu panel of Safari's Web Inspector. We can add a break point to our embedded script by clicking a line number and then refreshing the page with Command + R. In the following screenshot, we can see the break occurring on page load, and we can see our scope variable displayed for reference in the right panel: We can also check page load times using the timeline inspector. Click Timelines at the top of the Web Inspector and we will see a timeline similar to the Resources tab found in Chrome's Developer tools. Let's refresh our page with Command + R on our keyboard; the timeline then records the page load. Notice that, after a few seconds, the timeline in the Web Inspector stops when the page fully loads and all JavaScript processes stop. This is a nice feature when you're working with the Safari Web Inspector as opposed to Chrome's Developer tools. Common ways to improve hybrid performance With hybrid apps, we can use all the usual techniques for improving performance: using a build system such as Grunt.js or Gulp.js with NPM, using JSLint to better optimize our code, and writing code in an IDE to create better structure for our apps and to check for any excess code or unused variables. We can use best performance practices, such as applying HTML with strings (through the innerHTML property) rather than creating objects for the elements and applying them to the page that way, and so on. Sadly, the fact that hybrid apps do not perform as well as native apps still holds true. Now, don't let that dismay you, as hybrid apps do have a lot of good features!
Some of these are as follows:
They are (typically) faster to build than apps using native code
They are easier to customize
They allow for rapid prototyping of concepts for apps
They are easier to hand off to other JavaScript developers, rather than having to find a native developer
They are portable; they can be reused (with some modification) for other platforms, including Android devices, Windows Modern apps, Windows Phone apps, Chrome OS, and even Firefox OS
They can interact with native code using helper libraries such as Cordova
At some point, however, application performance will be limited by the hardware of the device, and it's recommended you move to native code. But how do we know when to move? Well, we can find out using Color Blended Layers. The Color Blended Layers option applies an overlay that highlights slow-performing areas on the device display, for example, green for good performance and red for slow performance; the darker the color, the greater the performance impact. Rerun your app using Xcode and, in the Mac OS toolbar for iOS Simulator, select Debug and then Color Blended Layers. Once we do that, we can see that our iOS Simulator shows a green overlay; this shows us how much memory iOS is using to process our rendered view, both native and non-native code, as shown here: Currently, we can see a mostly green overlay, with the exception of the status bar elements, which take up more render memory as they overlay the web view and have to be redrawn over that object repeatedly. Let's make a copy of our project and call it JS_Performance_CBL, and let's update our index.html code with this code sample, as shown in the following screenshot: Here, we have a simple page with an empty div; we also have a button with an onclick function called start. Our start function will update the height continuously using the setInterval function, increasing the height every millisecond. Our empty div also has a background gradient assigned to it with an inline style tag. CSS background gradients are typically a huge performance drain on mobile devices, as they can potentially re-render themselves over and over as the DOM updates itself. Some other issues include listener events; some earlier or lower-end devices do not have enough RAM to apply an event listener to a page. Typically, it's a good practice to apply onclick attributes to HTML, either inline or through JavaScript. Going back to the gradient example, let's run this in iOS Simulator and enable Color Blended Layers after clicking our HTML button to trigger the JavaScript animation. As expected, the div element that we've expanded now has a red overlay, indicating that this is a confirmed performance issue, which is unavoidable. To correct this, we would need to remove the CSS gradient background, and it would show as green again. However, if we had to include a gradient in accordance with a design spec, a native version would be required. When faced with UI issues such as these, it's important to understand tools beyond normal developer tools and Web Inspectors, and take advantage of the mobile platform tools that provide better analysis of our code. Now, before we wrap this article, let's take note of something specific to iOS web views.
The WKWebView framework At the time of writing, Apple has announced the WebKit framework, a first-party iOS library intended to replace UIWebView with more advanced and better performing web views; this was done with the intent of replacing apps that rely on HTML5 and JavaScript with better performing apps as a whole. The WebKit framework, also known in developer circles as WKWebView, is a newer web view that can be added to a project. WKWebView is also the base class name for this framework. This framework includes many features that native iOS developers can take advantage of. These include listening for function calls that can trigger native Objective-C or Swift code. For JavaScript developers like us, it includes a faster JavaScript runtime called Nitro, which has been included with Mobile Safari since iOS 6. Hybrid apps have always run worse than native code. But with the Nitro JavaScript runtime, HTML5 is on equal footing with native apps in terms of performance, assuming that our view doesn't consume too much render memory, as shown in our Color Blended Layers example. WKWebView does have limitations, though; it can only be used with iOS 8 or higher, and it doesn't have built-in Storyboard or XIB support like UIWebView does. So, using this framework may be an issue if you're new to iOS development. Storyboards are simply XML files coded in a specific way for iOS user interfaces to be rendered, while XIB files are the precursors to Storyboards. XIB files allow for only one view, whereas Storyboards allow multiple views and can link between them too. If you are working on an iOS app, I encourage you to reach out to your iOS developer lead and encourage the use of WKWebView in your projects. For more information, check out Apple's documentation of WKWebView at their developer site at https://developer.apple.com/library/IOs/documentation/WebKit/Reference/WKWebView_Ref/index.html. Summary In this article, we learned the basics of creating a hybrid application for iOS using HTML5 and JavaScript; we learned about connecting the Safari Web Inspector to our HTML page while running an application in iOS Simulator. We also looked at Color Blended Layers for iOS Simulator, and saw how to test for performance from our JavaScript code when it's applied to device-rendering performance issues. Now we are down to the wire. As with all JavaScript web apps before they go live to a production site, we need to smoke-test our JavaScript and web app code and see if we need to perform any final improvements before final deployment. Resources for Article: Further resources on this subject: GUI Components in Qt 5 [article] The architecture of JavaScriptMVC [article] JavaScript Promises – Why Should I Care? [article]

HTML5 Presentations: Creating our initial presentation

Packt
24 May 2013
10 min read
Creating our initial presentation This recipe will teach you how to create a basic HTML5 presentation using reveal.js. We will cover the initial markup and setup. The presentation created here will be used throughout the rest of the book. Getting ready Before we start, we need to download the reveal.js script and assets. To do that, simply visit https://github.com/hakimel/reveal.js/downloads and get the latest version of reveal. Now open your favorite text editor and let's work on the presentation markup. How to do it... To create our initial presentation, perform the following steps: The first step is to create an HTML file and save it as index.html. Then copy the reveal.js files to the same directory as our HTML file, keeping the following structure (highlighted files are the ones we are using in this recipe):

index.html
-css
---reveal.min.css
---print
---shaders
---theme
-----default.css
-----source
-----template
-js
---reveal.min.js
-lib
---css
---font
---js
-----head.min.js
-plugin
---highlight
---markdown
---notes
---notes-server
---postmessage
---remotes
---zoom-js

Now we need to write our initial HTML5 markup, as follows:

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>HTML5 Presentations How-To</title>
</head>
<body>
<div class="reveal">
<div class="slides">
</div>
</div>
</body>
</html>

With our initial markup ready, we include the reveal.js base stylesheet and script file plus the default theme stylesheet. We will also need to include the HeadJS JavaScript loader in order to manage reveal.js plugins. The stylesheets will go directly below the title tag, while the script files must be included before the closing body tag:

<head>
<meta charset="utf-8">
<title>HTML5 Presentations How-To</title>
<link rel="stylesheet" href="css/reveal.min.css">
<link rel="stylesheet" href="css/theme/default.css">
</head>
<body>
...
<script src="lib/js/head.min.js"></script>
<script src="js/reveal.min.js"></script>
</body>

The fourth step is to create the elements for our slides. Our first presentation will have four slides, so we must add four section elements to our slides div, as follows:

<div class="reveal">
<div class="slides">
<section>
<h1>HTML5 Presentations</h1>
<h3>How to create presentations for web browsers</h3>
</section>
<section>
<h1>About this presentation</h1>
<p>This is the example presentation for the book HTML5 Presentations How-To</p>
</section>
<section>
<h1>reveal.js</h1>
<ul>
<li>created by Hakim El Hattab</li>
<li>open source</li>
</ul>
</section>
<section>
<h1>References</h1>
<ol>
<li><a href="http://hakim.se/">http://hakim.se/</a></li>
<li><a href="https://github.com/hakimel/reveal.js">https://github.com/hakimel/reveal.js</a></li>
</ol>
</section>
</div>
</div>

After that, we initialize the reveal.js JavaScript object, right after the reveal.js script tag, as follows:

<body>
...
<script src="js/reveal.min.js"></script>
<script>
Reveal.initialize({
  controls: true,
  keyboard: true,
  center: true,
  transition: 'default'
});
</script>
</body>

For the last step, open index.html using the browser of your choice and check out how our initial HTML5 presentation looks, as shown in the following screenshot: How it works... reveal.js is a powerful JavaScript library that allows us to create presentations using a web browser. It was created by Hakim El Hattab, a web developer known for his experiments with CSS3 transitions and JavaScript animations. By performing the preceding steps we created a regular HTML5 presentation.
We have used reveal.js default themes and some minor configurations just to get started. We started with a basic HTML5 structure. It is important to bear in mind that our presentation is, before anything else, an HTML document, and it should be rendered normally in the browser. The presentation must follow a structure expected by the reveal.js script. First, our HTML must have one div with reveal as a class attribute. That div must contain another div with slides as a class attribute. The slides div must contain one or more section elements, each one representing a slide. As we will see in the Creating vertical/nested slides (Simple) recipe, it is possible to nest slides, creating a vertical navigation. Each slide can contain any HTML or markdown content. We started with just a few elements, but we will enrich our presentation through each recipe. reveal.js comes with multiple themes, available under the css/theme directory. For our initial presentation, we are using the default.css theme. Other options would be beige, night, serif, simple, and sky. Theming will be covered in the Theming (Medium) recipe. With all the markup and styles set, we initialized the reveal.js JavaScript object by using the initialize method. We used only four options, as follows:

Reveal.initialize({
  controls: true,
  keyboard: true,
  center: true,
  transition: 'default'
});

First we enabled the navigation controls in the user interface, setting the controls option to true. We activated keyboard navigation using the arrow keys and also centered the slides' content. The last option sets the transition between slides to default. The next recipe will cover the full list of available options. There's more... reveal.js will work on any browser, but modern browsers are preferred due to animations and 3D effects. We will also need to use external plugins in order to extend our presentation features. Dependencies As previously mentioned, reveal.js uses the HeadJS library to manage plugins and dependencies. You can use only reveal.js if you want, but some useful resources (such as speaker notes and zoom) are only enabled using plugins. The plugins can be loaded through the initialize method, and we will see more about them in the Plugins and extensions (Medium) recipe. Overview and blackout modes While on our presentation, pressing the Esc key will switch to overview mode, where you can interact with every slide. Another useful resource is the blackout mode. If you want to pause and get your audience's full attention, you can press the B key to alternate between normal and blackout modes, and hide your current slide's content. Browser support reveal.js will work better on browsers with full support for CSS 3D animations (Chrome, Safari, Firefox, and Internet Explorer 10), but fallbacks (such as 2D transitions) are available for older browsers. Presentations will not work on Internet Explorer 7 or lower. In this recipe we have created our initial HTML5 presentation using reveal.js default options and theme. Using reveal.js JavaScript API reveal.js provides an extended API to control our presentation behavior. With reveal.js API methods we can move to previous and next slides, get the current index, toggle overview mode, and more. How to do it… To use reveal.js JavaScript API, perform the following steps: Using the reveal.js API, we will create a navigation bar that will be positioned at the bottom of our presentation screen, containing links to our main slides plus links to the first, next, previous, and last slides.
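Before writing the markup, here is a minimal sketch of the calls the navigation links will delegate to; every onclick handler simply invokes one reveal.js API method (the full markup follows in the next step):

Reveal.slide(0); // jump straight to the slide with index 0 (the first slide)
Reveal.prev();   // go to the previous slide
Reveal.next();   // go to the next slide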
We will start by adding some HTML5 markup, including reveal.js API method calls inside the onclick attribute of our navigation elements:

<body>
<div id="navigation-bar">
<nav>
<a href="#" onclick="Reveal.slide(0);">&laquo;&laquo; first</a>
<a href="#" onclick="Reveal.prev();">&laquo; previous</a>
<a href="#" onclick="Reveal.slide(0);">html5 presentations</a>
<a href="#" onclick="Reveal.slide(1);">about this presentation</a>
<a href="#" onclick="Reveal.slide(2);">reveal.js</a>
<a href="#" onclick="Reveal.slide(3);">references</a>
<a href="#" onclick="Reveal.slide(4);">about the author</a>
<a href="#" onclick="Reveal.slide(5);">why use reveal.js</a>
<a href="#" onclick="Reveal.next();">next &raquo;</a>
<a href="#" onclick="Reveal.slide(5);">last &raquo;&raquo;</a>
</nav>
</div>
…
</body>

Our navigation bar will need CSS of its own. Let's add some new style rules to our style tag:

<style>
…
#navigation-bar {
  position: fixed;
  left: 0;
  bottom: 2px;
  width: 100%;
  height: 50px;
  z-index: 20;
  background-color: #fff;
  border-top: 6px solid #df0d32;
  font-family: Helvetica, Arial, sans-serif;
}
#navigation-bar nav {
  padding: 15px 0;
  text-align: center;
}
#navigation-bar a {
  display: inline-block;
  padding: 0 10px;
  border-right: 2px solid #ccc;
  text-decoration: none;
  color: #df0d32;
}
#navigation-bar a:last-child {
  border-right: none;
}
#navigation-bar a:hover {
  text-decoration: underline;
}
</style>

If you reload the presentation now, you will see our brand new navigation bar at the bottom, and if you click on the links, you will be taken directly to the referenced slides or actions (first, next, previous, last). How it works... The reveal.js JavaScript API allows us to control our presentation behavior using JavaScript code. In the preceding example we have used almost all the methods available. Let's take a closer look at them:
Reveal.slide(indexh, indexv, indexf): Slides to the specified index (the indexh parameter). The first slide has 0 as its index, the second has 1, and so on. You can also specify the vertical index (the indexv parameter) in case of nested slides, and the fragment index (the indexf parameter) inside the slide.
Reveal.left(): Goes to the previous slide in the horizontal line of the presentation.
Reveal.right(): Goes to the next slide in the horizontal line of the presentation.
Reveal.up(): Goes to the previous nested slide (the one above the current slide).
Reveal.down(): Goes to the next nested slide (the one below the current slide).
Reveal.prev(): Goes to the previous slide, horizontal or vertical.
Reveal.next(): Goes to the next slide, horizontal or vertical.
Reveal.prevFragment(): Switches to the previous fragment inside the current slide.
Reveal.nextFragment(): Switches to the next fragment inside the current slide.
Reveal.toggleOverview(): Toggles overview mode.
Reveal.getPreviousSlide(): Returns the DOM element of the previous slide.
Reveal.getCurrentSlide(): Returns the DOM element of the current slide.
Reveal.getIndices(): Returns an object with the current indices, for example {h: 2, v: 1}.
The preceding methods are available to use in any JavaScript code after the reveal.js object has been initialized. You can either use them directly in elements' attributes (such as onclick) or call them directly from JavaScript functions and listeners. Summary In this article, the Creating our initial presentation recipe explained how we can create a basic HTML5 presentation, and the Using reveal.js JavaScript API recipe explored the reveal.js JavaScript API by creating a custom navigation bar.
Resources for Article: Further resources on this subject: HTML5: Audio and Video Elements [Article] HTML5: Developing Rich Media Applications using Canvas [Article] Building HTML5 Pages from Scratch [Article]


Introducing JAX-RS API

Packt
21 Sep 2015
25 min read
In this article by Jobinesh Purushothaman, author of the book, RESTful Java Web Services, Second Edition, we will see that there are many tools and frameworks available in the market today for building RESTful web services. There are some recent developments with respect to the standardization of various framework APIs by providing unified interfaces for a variety of implementations. Let's take a quick look at this effort. (For more resources related to this topic, see here.) As you may know, Java EE is the industry standard for developing portable, robust, scalable, and secure server-side Java applications. The Java EE 6 release took the first step towards standardizing RESTful web service APIs by introducing a Java API for RESTful web services (JAX-RS). JAX-RS is an integral part of the Java EE platform, which ensures portability of your REST API code across all Java EE-compliant application servers. The first release of JAX-RS was based on JSR 311. The latest version is JAX-RS 2 (based on JSR 339), which was released as part of the Java EE 7 platform. There are multiple JAX-RS implementations available today by various vendors. Some of the popular JAX-RS implementations are as follows:
Jersey RESTful web service framework: This framework is an open source framework for developing RESTful web services in Java. It serves as the JAX-RS reference implementation. You can learn more about this project at https://jersey.java.net.
Apache CXF: This framework is an open source web services framework. CXF supports both JAX-WS and JAX-RS web services. To learn more about CXF, refer to http://cxf.apache.org.
RESTEasy: This framework is an open source project from JBoss, which provides various modules to help you build a RESTful web service. To learn more about RESTEasy, refer to http://resteasy.jboss.org.
Restlet: This framework is a lightweight, open source RESTful web service framework. It has good support for building both scalable RESTful web service APIs and lightweight REST clients, which suits mobile platforms well. You can learn more about Restlet at http://restlet.com.
Remember that you are not locked down to any specific vendor here; the RESTful web service APIs that you build using JAX-RS will run on any JAX-RS implementation as long as you do not use any vendor-specific APIs in the code. JAX-RS annotations The main goal of the JAX-RS specification is to make RESTful web service development easier than it has been in the past. As JAX-RS is a part of the Java EE platform, your code becomes portable across all Java EE-compliant servers. Specifying the dependency of the JAX-RS API To use JAX-RS APIs in your project, you need to add the javax.ws.rs-api JAR file to the class path. If the consuming project uses Maven for building the source, the dependency entry for the javax.ws.rs-api JAR file in the Project Object Model (POM) file may look like the following:

<dependency>
    <groupId>javax.ws.rs</groupId>
    <artifactId>javax.ws.rs-api</artifactId>
    <version>2.0.1</version><!-- set the right version -->
    <scope>provided</scope><!-- compile time dependency -->
</dependency>

Using JAX-RS annotations to build RESTful web services Java annotations provide the metadata for your Java class, which can be used during compilation, during deployment, or at runtime in order to perform designated tasks. The use of annotations allows us to create RESTful web services as easily as we develop a POJO class.
Here, we leave the interception of the HTTP requests and representation negotiations to the framework and concentrate on the business rules necessary to solve the problem at hand. If you are not familiar with Java annotations, go through the tutorial available at http://docs.oracle.com/javase/tutorial/java/annotations/.

Annotations for defining a RESTful resource

REST resources are the fundamental elements of any RESTful web service. A REST resource can be defined as an object that is of a specific type with the associated data and is optionally associated to other resources. It also exposes a set of standard operations corresponding to the HTTP method types such as the HEAD, GET, POST, PUT, and DELETE methods.

@Path

The @javax.ws.rs.Path annotation indicates the URI path to which a resource class or a class method will respond. The value that you specify for the @Path annotation is relative to the URI of the server where the REST resource is hosted. This annotation can be applied at both the class and the method levels. A @Path annotation value is not required to have leading or trailing slashes (/), as you may see in some examples. The JAX-RS runtime will parse the URI path templates in the same way even if they have leading or trailing slashes.

Specifying the @Path annotation on a resource class

The following code snippet illustrates how you can make a POJO class respond to a URI path template containing the /departments path fragment:

import javax.ws.rs.Path;

@Path("departments")
public class DepartmentService {
    //Rest of the code goes here
}

The /departments path fragment that you see in this example is relative to the base path in the URI. The base path typically takes the following URI pattern: http://host:port/<context-root>/<application-path>.

Specifying the @Path annotation on a resource class method

The following code snippet shows how you can specify @Path on a method in a REST resource class. Note that for an annotated method, the base URI is the effective URI of the containing class. For instance, you will use the URI of the following form to invoke the getTotalDepartments() method defined in the DepartmentService class: /departments/count, where departments is the @Path annotation set on the class.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("departments")
public class DepartmentService {
    @GET
    @Path("count")
    @Produces("text/plain")
    public Integer getTotalDepartments() {
        return findTotalRecordCount();
    }
    //Rest of the code goes here
}

Specifying variables in the URI path template

It is very common that a client wants to retrieve data for a specific object by passing the desired parameter to the server. JAX-RS allows you to do this via the URI path variables as discussed here. The URI path template allows you to define variables that appear as placeholders in the URI. These variables would be replaced at runtime with the values set by the client.

The following example illustrates the use of the path variable to request a specific department resource. The URI path template looks like /departments/{id}. At runtime, the client can pass an appropriate value for the id parameter to get the desired resource from the server. For instance, the URI path of the /departments/10 format returns the IT department details to the caller.

The following code snippet illustrates how you can pass the department ID as a path variable for deleting a specific department record. The path URI looks like /departments/10.
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("departments")
public class DepartmentService {
    @DELETE
    @Path("{id}")
    public void removeDepartment(@PathParam("id") short id) {
        removeDepartmentEntity(id);
    }
    //Other methods removed for brevity
}

In the preceding code snippet, the @PathParam annotation is used for copying the value of the path variable to the method parameter.

Restricting values for path variables with regular expressions

JAX-RS lets you use regular expressions in the URI path template for restricting the values set for the path variables at runtime by the client. By default, the JAX-RS runtime ensures that all the URI variables match the following regular expression: [^/]+?. The default regular expression allows the path variable to take any character except the forward slash (/). What if you want to override this default regular expression imposed on the path variable values? The good news is that JAX-RS lets you specify your own regular expression for the path variables. For example, you can set the regular expression as given in the following code snippet in order to ensure that the department name variable present in the URI path consists only of lowercase and uppercase alphanumeric characters (note the trailing *, which allows names of arbitrary length):

@DELETE
@Path("{name: [a-zA-Z][a-zA-Z_0-9]*}")
public void removeDepartmentByName(@PathParam("name") String deptName) {
    //Method implementation goes here
}

If the path variable does not match the regular expression set on the resource class or method, the system reports the status back to the caller with an appropriate HTTP status code, such as 404 Not Found, which tells the caller that the requested resource could not be found at this moment.

Annotations for specifying request-response media types

The Content-Type header field in HTTP describes the body's content type present in the request and response messages. The content types are represented using the standard Internet media types. A RESTful web service makes use of this header field to indicate the type of content in the request or response message body. JAX-RS allows you to specify which Internet media types of representations a resource can produce or consume by using the @javax.ws.rs.Produces and @javax.ws.rs.Consumes annotations, respectively.

@Produces

The @javax.ws.rs.Produces annotation is used for defining the Internet media type(s) that a REST resource class method can return to the client. You can define this either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can produce are as follows:

application/atom+xml
application/json
application/octet-stream
application/svg+xml
application/xhtml+xml
application/xml
text/html
text/plain
text/xml

The following example uses the @Produces annotation at the class level in order to set the default response media type as JSON for all resource methods in this class. At runtime, the binding provider will convert the Java representation of the return value to the JSON format.

import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("departments")
@Produces(MediaType.APPLICATION_JSON)
public class DepartmentService {
    //Class implementation goes here...
}

@Consumes

The @javax.ws.rs.Consumes annotation defines the Internet media type(s) that the resource class methods can accept.
You can define the @Consumes annotation either at the class level (which will get defaulted for all methods) or the method level. The method-level annotations override the class-level annotations. The possible Internet media types that a REST API can consume are as follows:

application/atom+xml
application/json
application/octet-stream
application/svg+xml
application/xhtml+xml
application/xml
text/html
text/plain
text/xml
multipart/form-data
application/x-www-form-urlencoded

The following example illustrates how you can use the @Consumes attribute to designate a method in a class to consume a payload presented in the JSON media type. The binding provider will copy the JSON representation of an input message to the Department parameter of the createDepartment() method.

import javax.ws.rs.Consumes;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.POST;

@POST
@Consumes(MediaType.APPLICATION_JSON)
public void createDepartment(Department entity) {
    //Method implementation goes here...
}

The javax.ws.rs.core.MediaType class defines constants for all media types supported in JAX-RS. To learn more about the MediaType class, visit the API documentation available at http://docs.oracle.com/javaee/7/api/javax/ws/rs/core/MediaType.html.

Annotations for processing HTTP request methods

In general, RESTful web services communicate over HTTP with the standard HTTP verbs (also known as method types) such as GET, PUT, POST, DELETE, HEAD, and OPTIONS.

@GET

A RESTful system uses the HTTP GET method type for retrieving the resources referenced in the URI path. The @javax.ws.rs.GET annotation designates a method of a resource class to respond to the HTTP GET requests. The following code snippet illustrates the use of the @GET annotation to make a method respond to the HTTP GET request type. In this example, the REST URI for accessing the findAllDepartments() method may look like /departments. The complete URI path may take the following URI pattern: http://host:port/<context-root>/<application-path>/departments.

//imports removed for brevity
@Path("departments")
public class DepartmentService {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Department> findAllDepartments() {
        //Find all departments from the data store
        List<Department> departments = findAllDepartmentsFromDB();
        return departments;
    }
    //Other methods removed for brevity
}

@PUT

The HTTP PUT method is used for updating or creating the resource pointed to by the URI. The @javax.ws.rs.PUT annotation designates a method of a resource class to respond to the HTTP PUT requests. The PUT request generally has a message body carrying the payload. The value of the payload could be any valid Internet media type such as a JSON object, an XML structure, plain text, HTML content, or a binary stream. When a request reaches a server, the framework intercepts the request and directs it to the appropriate method that matches the URI path and the HTTP method type. The request payload will be mapped to the method parameter as appropriate by the framework. The following code snippet shows how you can use the @PUT annotation to designate the editDepartment() method to respond to the HTTP PUT request.
The payload present in the message body will be converted and copied to the department parameter by the framework:

@PUT
@Path("{id}")
@Consumes(MediaType.APPLICATION_JSON)
public void editDepartment(@PathParam("id") Short id, Department department) {
    //Updates department entity in the data store
    updateDepartmentEntity(id, department);
}

@POST

The HTTP POST method posts data to the server. Typically, this method type is used for creating a resource. The @javax.ws.rs.POST annotation designates a method of a resource class to respond to the HTTP POST requests. The following code snippet shows how you can use the @POST annotation to designate the createDepartment() method to respond to the HTTP POST request. The payload present in the message body will be converted and copied to the department parameter by the framework:

@POST
public void createDepartment(Department department) {
    //Create department entity in data store
    createDepartmentEntity(department);
}

@DELETE

The HTTP DELETE method deletes the resource pointed to by the URI. The @javax.ws.rs.DELETE annotation designates a method of a resource class to respond to the HTTP DELETE requests. The following code snippet shows how you can use the @DELETE annotation to designate the removeDepartment() method to respond to the HTTP DELETE request. The department ID is passed as the path variable in this example.

@DELETE
@Path("{id}")
public void removeDepartment(@PathParam("id") Short id) {
    //remove department entity from data store
    removeDepartmentEntity(id);
}

@HEAD

The @javax.ws.rs.HEAD annotation designates a method to respond to the HTTP HEAD requests. This method is useful for retrieving the metadata present in the response headers, without having to retrieve the message body from the server. You can use this method to check whether a URI pointing to a resource is active or to check the content size by using the Content-Length response header field, and so on. The JAX-RS runtime will offer a default implementation for the HEAD method type if the REST resource is missing an explicit implementation. The default implementation provided by the runtime for the HEAD method will call the method designated for the GET request type, ignoring the response entity returned by the method.

@OPTIONS

The @javax.ws.rs.OPTIONS annotation designates a method to respond to the HTTP OPTIONS requests. This method is useful for obtaining a list of HTTP methods allowed on a resource. The JAX-RS runtime will offer a default implementation for the OPTIONS method type if the REST resource is missing an explicit implementation. The default implementation offered by the runtime sets the Allow response header to all the HTTP method types supported by the resource.

Annotations for accessing request parameters

You can use this offering to extract the following parameters from a request: a query, URI path, form, cookie, header, and matrix. Mostly, these parameters are used in conjunction with the GET, POST, PUT, and DELETE methods.

@PathParam

A URI path template, in general, has a URI part pointing to the resource. It can also take the path variables embedded in the syntax; this facility is used by clients to pass parameters to the REST APIs as appropriate. The @javax.ws.rs.PathParam annotation injects (or binds) the value of the matching path parameter present in the URI path template into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter.
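The examples in this article bind path parameters to method parameters; as a quick aside, here is a minimal, hypothetical sketch (the DepartmentResource class and field names are ours, not from this article) of the class-field style of injection just mentioned. Field injection of this kind assumes the default per-request lifecycle of resource classes:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("departments/{id}")
public class DepartmentResource {

    //The {id} path variable is injected into this field on every request;
    //this relies on a new resource instance being created per request
    @PathParam("id")
    private short departmentId;

    @GET
    @Produces("text/plain")
    public String describe() {
        return "Department " + departmentId;
    }
}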
Typically, this annotation is used in conjunction with the HTTP method type annotations such as @GET, @POST, @PUT, and @DELETE. The following example illustrates the use of the @PathParam annotation to read the value of the path parameter, id, into the deptId method parameter. The URI path template for this example looks like /departments/{id}:

//Other imports removed for brevity
import javax.ws.rs.PathParam;

@Path("departments")
public class DepartmentService {
    @DELETE
    @Path("{id}")
    public void removeDepartment(@PathParam("id") Short deptId) {
        removeDepartmentEntity(deptId);
    }
    //Other methods removed for brevity
}

The REST API call to remove the department resource identified by id=10 looks like DELETE /departments/10 HTTP/1.1.

We can also use multiple variables in a URI path template. For example, we can have the URI path template embedding the path variables to query a list of departments from a specific city and country, which may look like /departments/{country}/{city}. The following code snippet illustrates the use of @PathParam to extract variable values from the preceding URI path template:

@Produces(MediaType.APPLICATION_JSON)
@Path("{country}/{city}")
public List<Department> findAllDepartments(
        @PathParam("country") String countryCode,
        @PathParam("city") String cityCode) {
    //Find all departments from the data store for a country and city
    List<Department> departments = findAllMatchingDepartmentEntities(countryCode, cityCode);
    return departments;
}

@QueryParam

The @javax.ws.rs.QueryParam annotation injects the value(s) of an HTTP query parameter into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example illustrates the use of @QueryParam to extract the value of the desired query parameter present in the URI. This example extracts the value of the query parameter, name, from the request URI and copies the value into the deptName method parameter. The URI that accesses the IT department resource looks like /departments?name=IT:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsByName(@QueryParam("name") String deptName) {
    List<Department> depts = findAllMatchingDepartmentEntities(deptName);
    return depts;
}

@MatrixParam

Matrix parameters are another way of defining parameters in the URI path template. The matrix parameters take the form of name-value pairs in the URI path, where each pair is preceded by a semicolon (;). For instance, the URI path that uses a matrix parameter to list all departments in Bangalore city looks like /departments;city=Bangalore. The @javax.ws.rs.MatrixParam annotation injects the matrix parameter value into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet demonstrates the use of the @MatrixParam annotation to extract the matrix parameters present in the request. The URI path used in this example looks like /departments;name=IT;city=Bangalore.

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("matrix")
public List<Department> findAllDepartmentsByNameWithMatrix(
        @MatrixParam("name") String deptName,
        @MatrixParam("city") String locationCode) {
    List<Department> depts = findAllDepartmentsFromDB(deptName, locationCode);
    return depts;
}

You can use PathParam, QueryParam, and MatrixParam to pass the desired search parameters to the REST APIs. Now, you may ask when to use what?
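Before getting to the guidelines, a short, hypothetical sketch may help; it expresses a department lookup in both styles (the DepartmentLookupService class name and the data-access helpers are ours, and Department is the entity class used throughout this article):

import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("departments")
public class DepartmentLookupService {

    //The id is part of the resource's identity, so it lives in the path:
    //GET /departments/10
    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Department findById(@PathParam("id") short id) {
        return findDepartmentEntity(id);
    }

    //The name is a filter over the collection, so it rides in the query string:
    //GET /departments?name=IT
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Department> findByName(@QueryParam("name") String deptName) {
        return findAllMatchingDepartmentEntities(deptName);
    }

    //Hypothetical data-access helpers, assumed to be implemented elsewhere
    private Department findDepartmentEntity(short id) { return null; }
    private List<Department> findAllMatchingDepartmentEntities(String name) { return null; }
}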
Although there are no strict rules here, a very common practice followed by many is to use PathParam to drill down to the entity class hierarchy. For example, you may use the URI of the following form to identify an employee working in a specific department: /departments/{dept}/employees/{id}. QueryParam can be used for specifying attributes to locate the instance of a class. For example, you may use a URI with QueryParam to identify employees who joined on January 1, 2015, which may look like /employees?doj=2015-01-01. The MatrixParam annotation is not used frequently. It is useful when you need to make a complex REST-style query to multiple levels of resources and subresources. MatrixParam is applicable to a particular path element, while the query parameter is applicable to the entire request.

@HeaderParam

The HTTP header fields provide necessary information about the request and response contents in HTTP. For example, the header field Content-Length: 348 in an HTTP request says that the size of the request body content is 348 octets (8-bit bytes). The @javax.ws.rs.HeaderParam annotation injects the header values present in the request into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following example extracts the Referer header parameter and logs it for audit purposes. The Referer header field in HTTP contains the address of the previous web page from which a request to the currently processed page originated:

@POST
public void createDepartment(@HeaderParam("Referer") String referer, Department entity) {
    logSource(referer);
    createDepartmentInDB(entity);
}

Remember that HTTP provides a very wide selection of headers that cover most of the header parameters that you are looking for. Although you can use custom HTTP headers to pass some application-specific data to the server, try using standard headers whenever possible. Further, avoid using a custom header for holding properties specific to a resource, or the state of the resource, or parameters directly affecting the resource.

@CookieParam

The @javax.ws.rs.CookieParam annotation injects the matching cookie parameters present in the HTTP headers into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The following code snippet uses the Default-Dept cookie parameter present in the request to return the default department details:

@GET
@Path("cook")
@Produces(MediaType.APPLICATION_JSON)
public Department getDefaultDepartment(@CookieParam("Default-Dept") short departmentId) {
    Department dept = findDepartmentById(departmentId);
    return dept;
}

@FormParam

The @javax.ws.rs.FormParam annotation injects the matching HTML form parameters present in the request body into a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The request body carrying the form elements must have the content type specified as application/x-www-form-urlencoded.

Consider the following HTML form that contains the data capture form for a department entity.
This form allows the user to enter the department entity details:

<!DOCTYPE html>
<html>
<head>
    <title>Create Department</title>
</head>
<body>
    <form method="POST" action="/resources/departments">
        Department Id: <input type="text" name="departmentId"> <br>
        Department Name: <input type="text" name="departmentName"> <br>
        <input type="submit" value="Add Department" />
    </form>
</body>
</html>

Upon clicking the submit button on the HTML form, the department details that you entered will be posted to the REST URI, /resources/departments. The following code snippet shows the use of the @FormParam annotation for extracting the HTML form fields and copying them to the resource class method parameters:

@Path("departments")
public class DepartmentService {
    @POST
    //Specifies content type as "application/x-www-form-urlencoded"
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public void createDepartment(@FormParam("departmentId") short departmentId,
            @FormParam("departmentName") String departmentName) {
        createDepartmentEntity(departmentId, departmentName);
    }
}

@DefaultValue

The @javax.ws.rs.DefaultValue annotation specifies a default value for the request parameters accessed using one of the following annotations: PathParam, QueryParam, MatrixParam, CookieParam, FormParam, or HeaderParam. The default value is used if no matching parameter value is found for the variables annotated using one of the preceding annotations. The following REST resource method will make use of the default values set for the from and to method parameters if the corresponding query parameters are missing from the URI path:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsInRange(
        @DefaultValue("0") @QueryParam("from") Integer from,
        @DefaultValue("100") @QueryParam("to") Integer to) {
    return findAllDepartmentEntitiesInRange(from, to);
}

@Context

The JAX-RS runtime offers different context objects, which can be used for accessing information associated with the resource class, operating environment, and so on. You may find various context objects that hold information associated with the URI path, request, HTTP header, security, and so on. Some of these context objects also provide the utility methods for dealing with the request and response content. JAX-RS allows you to reference the desired context objects in the code via dependency injection. JAX-RS provides the @javax.ws.rs.Context annotation that injects the matching context object into the target field. You can specify the @Context annotation on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter.

The following example illustrates the use of the @Context annotation to inject the javax.ws.rs.core.UriInfo context object into a method variable. The UriInfo instance provides access to the application and request URI information.
This example uses UriInfo to read the path parameter, name, present in the request URI path template (for example, /departments/IT):

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsByName(@Context UriInfo uriInfo) {
    String deptName = uriInfo.getPathParameters().getFirst("name");
    List<Department> depts = findAllMatchingDepartmentEntities(deptName);
    return depts;
}

Here is a list of the commonly used classes and interfaces that can be injected using the @Context annotation:

javax.ws.rs.core.Application: This class defines the components of a JAX-RS application and supplies additional metadata

javax.ws.rs.core.UriInfo: This interface provides access to the application and request URI information

javax.ws.rs.core.Request: This interface provides methods for request processing, such as reading the method type and precondition evaluation

javax.ws.rs.core.HttpHeaders: This interface provides access to the HTTP header information

javax.ws.rs.core.SecurityContext: This interface provides access to security-related information

javax.ws.rs.ext.Providers: This interface offers the runtime lookup of a provider instance such as MessageBodyReader, MessageBodyWriter, ExceptionMapper, and ContextResolver

javax.ws.rs.ext.ContextResolver<T>: This interface supplies the requested context to the resource classes and other providers

javax.servlet.http.HttpServletRequest: This interface provides the client request information for a servlet

javax.servlet.http.HttpServletResponse: This interface is used for sending a response to a client

javax.servlet.ServletContext: This interface provides methods for a servlet to communicate with its servlet container

javax.servlet.ServletConfig: This interface carries the servlet configuration parameters

@BeanParam

The @javax.ws.rs.BeanParam annotation allows you to inject all matching request parameters into a single bean object. The @BeanParam annotation can be set on a class field, a resource class bean property (the getter method for accessing the attribute), or a method parameter. The bean class can have fields or properties annotated with one of the request parameter annotations, namely @PathParam, @QueryParam, @MatrixParam, @HeaderParam, @CookieParam, or @FormParam. Apart from the request parameter annotations, the bean can have the @Context annotation if there is a need.

Consider the example that we discussed for @FormParam. The createDepartment() method that we used in that example has two parameters annotated with @FormParam:

public void createDepartment(
    @FormParam("departmentId") short departmentId,
    @FormParam("departmentName") String departmentName)

Let's see how we can use @BeanParam for the preceding method to give a more logical, meaningful signature by grouping all the related fields into an aggregator class, thereby avoiding too many parameters in the method signature.
The DepartmentBean class that we use for this example is as follows:

public class DepartmentBean {
    @FormParam("departmentId")
    private short departmentId;

    @FormParam("departmentName")
    private String departmentName;

    //getter and setter for the above fields
    //are not shown here to save space
}

The following code snippet demonstrates the use of the @BeanParam annotation to inject the DepartmentBean instance that contains all the FormParam values extracted from the request message body:

@POST
public void createDepartment(@BeanParam DepartmentBean deptBean) {
    createDepartmentEntity(deptBean.getDepartmentId(), deptBean.getDepartmentName());
}

@Encoded

By default, the JAX-RS runtime decodes all request parameters before injecting the extracted values into the target variables annotated with one of the following annotations: @FormParam, @PathParam, @MatrixParam, or @QueryParam. You can use @javax.ws.rs.Encoded to disable the automatic decoding of the parameter values. With the @Encoded annotation, the value of parameters will be provided in the encoded form itself. This annotation can be used on a class, method, or parameters. If you set this annotation on a method, it will disable decoding for all parameters defined for this method. You can use this annotation on a class to disable decoding for all parameters of all methods. In the following example, the value of the query parameter called name is injected into the method parameter in the URL-encoded form (without decoding). The method implementation should take care of decoding the value in such cases:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Department> findAllDepartmentsByName(@QueryParam("name") @Encoded String deptName) {
    //Method body is removed for brevity
}

URL encoding converts a string into a valid URL format, which may contain alphabetic characters, numerals, and some special characters supported in the URL string. To learn about the URL specification, visit http://www.w3.org/Addressing/URL/url-spec.html.

Summary

With the use of annotations, the JAX-RS API provides a simple development model for RESTful web service programming. In case you are interested in knowing other Java RESTful web services books that Packt has in store for you, here are the links:

RESTful Java Web Services, Jose Sandoval
RESTful Java Web Services Security, René Enríquez, Andrés Salazar C

Resources for Article:

Further resources on this subject:

The Importance of Securing Web Services [article]
Understanding WebSockets and Server-sent Events in Detail [article]
Adding health checks [article]

Building Friend Networks with Django 1.0

Packt
14 Oct 2009
6 min read
An important aspect of socializing in our application is letting users maintain their friend lists and browse through the bookmarks of their friends. So, in this section we will build a data model to maintain user relationships, and then program two views to enable users to manage their friends and browse their friends' bookmarks.

Creating the friendship data model

Let's start with the data model for the friends feature. When a user adds another user as a friend, we need to maintain both users in one object. Therefore, the Friendship data model will consist of two references to the User objects involved in the friendship. Create this model by opening the bookmarks/models.py file and inserting the following code in it:

class Friendship(models.Model):
    from_friend = models.ForeignKey(
        User, related_name='friend_set'
    )
    to_friend = models.ForeignKey(
        User, related_name='to_friend_set'
    )

    def __unicode__(self):
        return u'%s, %s' % (
            self.from_friend.username,
            self.to_friend.username
        )

    class Meta:
        unique_together = (('to_friend', 'from_friend'), )

The Friendship data model starts with defining two fields that are User objects: from_friend and to_friend. from_friend is the user who added to_friend as a friend. As you can see, we passed a keyword argument called related_name to both fields. The reason for this is that both fields are foreign keys that refer back to the User data model. This would cause Django to try to create two attributes called friendship_set on each User object, which would result in a name conflict. To avoid this problem, we provide a specific name for each attribute. Consequently, each User object will contain two new attributes: user.friend_set, which contains the friends of this user, and user.to_friend_set, which contains the users who added this user as a friend. Throughout this article, we will only use the friend_set attribute, but the other one is there in case you need it.

Next, we defined a __unicode__ method in our data model. This method is useful for debugging.

Finally, we defined a class called Meta. This class may be used to specify various options related to the data model. Some of the commonly used options are:

db_table: This is the name of the table to use for the model. This is useful when the table name generated by Django is a reserved keyword in SQL, or when you want to avoid conflicts if a table with the same name already exists in the database.

ordering: This is a list of field names. It declares how objects are ordered when retrieving a list of objects. A column name may be preceded by a minus sign to change the sorting order from ascending to descending.

permissions: This lets you declare custom permissions for the data model in addition to add, change, and delete permissions. Permissions should be a list of two-tuples, where each two-tuple consists of a permission codename and a human-readable name for that permission. For example, you can define a new permission for listing friend bookmarks by using the following Meta class:

class Meta:
    permissions = (
        ('can_list_friend_bookmarks', 'Can list friend bookmarks'),
    )

unique_together: A list of field names that must be unique together. We used the unique_together option here to ensure that a Friendship object is added only once for a particular relationship. There cannot be two Friendship objects with equal to_friend and from_friend fields.
This is equivalent to the following SQL declaration:

UNIQUE ("from_friend", "to_friend")

If you check the SQL generated by Django for this model, you will find something similar to this in the code. After entering the data model code into the bookmarks/models.py file, run the following command to create its corresponding table in the database:

$ python manage.py syncdb

Now let's experiment with the new model and see how to store and retrieve relations of friendship. Run the interactive console using the following command:

$ python manage.py shell

Next, retrieve some User objects and build relationships between them (but make sure that you have at least three users in the database):

>>> from bookmarks.models import *
>>> from django.contrib.auth.models import User
>>> user1 = User.objects.get(id=1)
>>> user2 = User.objects.get(id=2)
>>> user3 = User.objects.get(id=3)
>>> friendship1 = Friendship(from_friend=user1, to_friend=user2)
>>> friendship1.save()
>>> friendship2 = Friendship(from_friend=user1, to_friend=user3)
>>> friendship2.save()

Now, user2 and user3 are both friends of user1. To retrieve the list of Friendship objects associated with user1, use:

>>> user1.friend_set.all()
[<Friendship: user1, user2>, <Friendship: user1, user3>]

(The actual usernames in the output were replaced with user1, user2, and user3 for clarity.)

As you may have already noticed, the attribute is named friend_set because we called it so using the related_name option when we created the Friendship model. Next, let's see one way to retrieve the User objects of user1's friends:

>>> [friendship.to_friend for friendship in user1.friend_set.all()]
[<User: user2>, <User: user3>]

The last line of code uses a Python feature called list comprehension to build the list of User objects. This feature allows us to build a list by iterating through another list. Here, we built the User list by iterating over a list of Friendship objects. If this syntax looks unfamiliar, please refer to the List Comprehension section in the Python tutorial.

Notice that user1 has user2 as a friend, but the opposite is not true:

>>> user2.friend_set.all()
[]

In other words, the Friendship model works only in one direction. To add user1 as a friend of user2, we need to construct another Friendship object:

>>> friendship3 = Friendship(from_friend=user2, to_friend=user1)
>>> friendship3.save()
>>> user2.friend_set.all()
[<Friendship: user2, user1>]

By reversing the arguments passed to the Friendship constructor, we built a relationship in the other direction. Now user1 is a friend of user2 and vice versa. Experiment more with the model to make sure that you understand how it works. Once you feel comfortable with it, move to the next section, where we will write views to utilize the data model. Things will only get more exciting from now on!

Creating Your Own Node Module

Soham Kamani
18 Apr 2016
6 min read
Node.js has a great community and one of the best package managers I have ever seen. One of the reasons npm is so great is that it encourages you to make small, composable modules, which usually have just one responsibility. Many of the larger, more complex node modules are built by composing smaller node modules. As of this writing, npm has over 219,897 packages. One of the reasons this community is so vibrant is that it is ridiculously easy to make your own node module. This post will go through the steps to create your own node module, as well as some of the best practices to follow while doing so.

Prerequisites and Installation

node and npm are a given. Additionally, you should also configure your npm author details:

npm set init.author.name "My Name"
npm set init.author.email "your@email.com"
npm set init.author.url "http://your-website.com"
npm adduser

These are the details that will show up on npmjs.org once you publish.

Hello World

The reason I say creating a node module is ridiculously easy is that you only need two files to create the most basic version of a node module.

First up, create a package.json file inside a new folder by running the npm init command. This will ask you to choose a name. Of course, the name you are thinking of might already exist in the npm registry, so to check for this run the command npm owner ls module_name, where module_name is replaced by the namespace you want to check. If it exists, you will get information about the authors:

$ npm owner ls forever
indexzero <charlie.robbins@gmail.com>
bradleymeck <bradley.meck@gmail.com>
julianduque <julianduquej@gmail.com>
jeffsu <me@jeffsu.com>
jcrugzz <jcrugzz@gmail.com>

If your namespace is free, you will get an error message, something similar to:

$ npm owner ls does_not_exist
npm ERR! owner ls Couldnt get owner data does_not_exist
npm ERR! Darwin 14.5.0
npm ERR! argv "node" "/usr/local/bin/npm" "owner" "ls" "does_not_exist"
npm ERR! node v0.12.4
npm ERR! npm v2.10.1
npm ERR! code E404
npm ERR! 404 Registry returned 404 GET on https://registry.npmjs.org/does_not_exist
npm ERR! 404
npm ERR! 404 'does_not_exist' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.
npm ERR! Please include the following file with any support request:
npm ERR! /Users/sohamchetan/Documents/jekyll-blog/npm-debug.log

After setting up package.json, add a JavaScript file:

module.exports = function(){
  return 'Hello World!';
}

And that's it! Now execute npm publish . and your node module will be published to npmjs.org. Also, anyone can now install your node module by running npm install --save module_name, where module_name is the "name" property contained in package.json.

Now anyone can use your module like this:

var someModule = require('module_name');
console.log(someModule()); // This will output "Hello World!"

Dependencies

As stated before, rarely will you find large-scale node modules that do not depend on other smaller modules. This is because npm encourages modularity and composability. To add dependencies to your own module, simply install them. For example, one of the most depended-upon packages is lodash, a utility library. To add this, run the command:

npm install --save lodash

Now you can use lodash everywhere in your module by "requiring" it, and when someone else downloads your module, they get lodash bundled along with it as well.
Additionally, you may want some modules purely for development and not for distribution. These are dev dependencies, and they can be installed with the npm install --save-dev command. Dev dependencies will not be installed when someone else installs your node module.

Configuring package.json

The package.json file is what contains all the metadata for your node module. A few fields are filled out automatically (like dependencies or devDependencies during npm installs). There are a few more fields in package.json that you should consider filling out so that your node module is best fitted to its purpose:

"main": The relative path of the entry point of your module. Whatever is assigned to module.exports in this file is exported when someone "requires" your module. By default this is the index.js file.

"keywords": An array of keywords describing your module. Quite helpful when others from the community are searching for something that your module happens to solve.

"license": I normally publish all my packages with an "MIT" license because of its openness and popularity in the open source community.

"version": This is pretty crucial because you cannot publish a node module with the same version twice. Normally, semver versioning should be followed.

If you want to know more about the different properties you can set in package.json, there's a great interactive guide you can check out.

Using Yeoman Generators

Although it's really simple to make a basic node module, it can be quite a task to make something substantial using just an index.js and package.json file. In these cases, there's a lot more to do, such as:

Writing and running tests.
Setting up a CI tool like Travis.
Measuring code coverage.
Installing standard dev dependencies for testing.

Fortunately, there are many Yeoman generators to help you bootstrap your project. Check out generator-nm for setting up a basic project structure for a simple node module. If writing in ES6 is more your style, you can take a look at generator-nm-es6. These generators get your project structure, complete with a testing framework and CI integration, so that you don't have to spend all your time writing boilerplate code.

About the Author

Soham Kamani is a full-stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT.

Languages and Language Settings in Moodle

Packt
22 Oct 2009
7 min read
Language

The default Moodle installation includes many language packs. A language pack is a set of translations for the Moodle interface. Language packs translate the Moodle interface, not the course content. Here's the Front Page of a site when the user selects Spanish from the language menu:

Note that every aspect of the interface is presented in Spanish: menu names, menu items, section names, buttons, and system messages. Now, let's take a look at the same Front Page when the user selects Romanian from the language menu:

Note that much of the interface has not been translated. For example, the Site Administration menu and the section name for Site news are still in English. When a part of Moodle's interface is not translated into the selected language, Moodle uses the English version.

Language Files

When you install an additional language, Moodle places the language pack in its data directory under the subdirectory /lang. It creates a subdirectory for each language's files. The following screenshot shows the results of installing the International Spanish and Romanian languages:

For example, the subdirectory /lang/en_us holds the files for the U.S. English translation, and /lang/es_utf8 holds the files for International Spanish (Español / España). The name of the subdirectory is the 'language code'. Knowing this code will come in handy later. In the previous example, es_utf8 tells us that the language code for International Spanish is es.

Inside a language pack's directory, we see a list of files that contain the translations:

For example, the /lang/es_utf8/forum.php file holds text used on the forum pages. Let us suppose that we are creating a course for students. This file would include the text that is displayed to the course creator while creating the forum, and the text that is displayed to the students when they use the forum. Here are the first few lines from the English version of that file:

$string['addanewdiscussion'] = 'Add a new discussion topic';
$string['addanewtopic'] = 'Add a new topic';
$string['advancedsearch'] = 'Advanced search';

And here are the same first three lines from the Spanish version of that file:

$string['addanewdiscussion'] = 'Colocar un nuevo tema de discusión aquí';
$string['addanewtopic'] = 'Agregar un nuevo tema';
$string['advancedsearch'] = 'Búsqueda avanzada';

The biggest task in localizing Moodle consists of translating these language files into the appropriate languages. Some translations are surprisingly complete. For example, most of the interface has been translated into Irish Gaelic, even though this language is used by only about 350,000 people every day. The Romanian interface remains mostly untranslated, although Romania has a population of over 23 million. This means that if a Moodle user chooses the Romanian language (ro), most of the interface will still default to English.

Language Settings

You access the Language settings page from the Site Administration menu.

Default Language and Display Language Menu

The Default language setting specifies the language that users will see when they first encounter your site. If you also select Display language menu, users can change the language. Selecting this displays a language menu on your Front Page.

Languages on Language Menu and Cache Language Menu

The setting Languages on language menu enables you to specify the languages that users can pick from the language menu. There are directions for you to enter the 'language codes'. These codes are the names of the directories which hold the language packs.
In the Language Files subsection earlier in this article, you saw that the directory es_utf8 holds the language files for International Spanish. If you wanted to enter that language in the list, it would look like this:

Leaving this field blank will enable your students to pick from all available languages. Entering the names of languages in this field limits the list to only those entered.

Sitewide Locale

Enter a language code into this field, and the system displays dates in the format appropriate to that language.

Excel Encoding

Most of the reports that Moodle generates can be downloaded as Excel files; user logs and grades are two examples. This setting lets you choose the encoding for those Excel files. Your choices are Unicode and Latin. The default is Unicode, because this character set includes many more characters than Latin. In many cases, Latin encoding doesn't offer enough characters to completely represent a non-English language.

Offering Courses in Multiple Languages

The settings on the Language settings page apply to translating the Moodle interface; they do not translate course content. If you want to offer course content in multiple languages, you have several choices.

First, you could put all the different languages into each course. That is, each document would appear in a course in several languages. For example, if you offered a botany course in English and Spanish, you might have a document defining the different types of plants in both English and Spanish, side by side in the same course: Types of Plants or Tipos de Plantas. While taking the course, students would select the documents in their language. Course names would appear in only one language.

Second, you could create separate courses for each language, and offer them on the same site. Course names would appear in each language. In this case, students would select the course in English or Spanish: Basic Botany or Botánica Básica.

Third, you could create a separate Moodle site for each language, for example, http://moodle.williamrice.com/english and http://moodle.williamrice.com/spanish. At the Home Page of your site, students would select their language and would be directed to the correct Moodle installation. In this case, the entire Moodle site would appear in the student's language: the site name, the menus, the course names, and the course content.

These are things you should consider before installing Moodle.

Installing Additional Languages

To install additional languages, you must be connected to the Internet. Then, from the Site Administration menu, select Language | Language packs. The page displays a list of all available language packs:

This list is taken from Moodle's /install/lang directory. In that directory, you will find a folder for each language pack that can be installed. The folder contains a file called install.php. That file retrieves the most recent version of the language pack from the Web and installs it. This is why Moodle must be connected to the Web to use this feature. If Moodle is not connected, you will need to download the language pack and copy it into the /lang directory yourself.

If you don't see the language you want on the list of Available language packs, either it's not available from the official Moodle site, or your list of available languages is out of date. Click the button to update this list. If the language still doesn't appear, it's not available from official sources.
Summary

In this article, we have seen how a Moodle website supports different languages and how to configure them. This feature can be particularly useful while designing courses for students who come from different ethnic backgrounds. Language support not only makes the website friendlier but also makes it easier to browse.