
How-To Tutorials - Web Development


An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js

Packt
23 Jul 2015
21 min read
In this article, Muzzamil Hussain, the author of the book Mastering JavaScript Promises, introduces us to promises in JavaScript and their implementation in Angular.js. (For more resources related to this topic, see here.)

Most of us who work with JavaScript know that working with JavaScript means mastering asynchronous coding, and that this skill doesn't come easily. You have to understand callbacks, and once you do, the realization starts to sink in that managing callbacks is not an easy task, and that it is not a very effective way of doing asynchronous programming. For those of you who have already been through this experience, promises are not that new; even if you haven't used them in a recent project, you will want to. For those of you who have used neither callbacks nor promises, understanding promises, or seeing the difference between callbacks and promises, will be harder. Some of you have used promises in JavaScript through popular and mature JavaScript libraries and platforms such as Node.js, jQuery, or WinRT, and are already aware of the advantages of promises and of how they make your work more efficient and your code easier to read.

For all three groups of professionals, gathering information on promises and their implementation in different libraries is quite a task, and much of the time you spend goes into collecting the right information about how to attach an error handler to a promise, what a deferred object is, and how it can be passed on to different functions. Having the right information at the time you need it is the best virtue one could ask for. Keeping all of this in mind, we have written a book named Mastering JavaScript Promises.

This book is all about JavaScript and how promises are implemented in some of the most renowned libraries in the world. It provides a foundation in JavaScript and gradually takes you through the journey of learning promises in JavaScript. The chapters are arranged to take you from the novice level to an advanced level, and the book covers a wide range of topics with both theoretical and practical content in place. You will learn about the evolution of JavaScript, different programming models, the asynchronous model, and how JavaScript uses it. The book then takes you right into implementation mode, with chapters on the promise implementations of WinRT, Node.js, Angular.js, and jQuery. With easy-to-follow example code and simple language, you will absorb a huge amount of information on this topic. Needless to say, books on such topics are themselves an evolutionary process, so your suggestions are more than welcome.

Here are a few extracts from the book to give you a glimpse of what we have in store for you, though most of this section will focus on Angular.js and how promises are implemented in it. Let's start our journey with programming models.

Models
Models are templates upon which the logic is designed and fabricated within a compiler/interpreter of a programming language, so that software engineers can use this logic to write their software. Every programming language we use is designed on a particular programming model. Since software engineers are asked to solve a particular problem or to automate a particular service, they adopt programming languages as per the need. There is no set rule that assigns a particular language to a particular kind of product; engineers adopt whatever language fits the need.

The asynchronous programming model
Within the asynchronous programming model, tasks are interleaved with one another in a single thread of control. This single thread may have multiple embedded threads, and each thread may contain several tasks linked one after another. This model is simpler than the threaded case, as the programmer always knows the priority of the task executing at a given slot of time in memory. Consider a task in which an OS (or an application within the OS) uses some scenario to decide how much time is to be allotted to a task before giving the same chance to others. The behavior of the OS in taking control from one task and passing it on to another task is called preempting.

Promise
The beauty of working with JavaScript's asynchronous events is that the program continues its execution even when it doesn't yet have a value it needs to work with, because that value is still being produced. Such values are described as yet-unknown values from unfinished work. This can make working with asynchronous events in JavaScript challenging. Promises are a programming construct that represents a value that is still unknown. Promises in JavaScript enable us to write asynchronous code in a manner parallel to synchronous code.
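To make the idea concrete, here is a minimal sketch using the standard Promise constructor available in modern JavaScript engines (this snippet is an illustration, not an extract from the book; fetchSerialTitle and the one-second delay are assumptions):

function fetchSerialTitle() {
  // Start some asynchronous work and immediately return a promise
  // that stands in for the yet-unknown result.
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve('Sherlock Holmes'); // the value becomes known later
    }, 1000);
  });
}

// Execution continues past this call; the handlers run once the value is known.
fetchSerialTitle().then(
  function(title) {
    console.log('Resolved with: ' + title);
  },
  function(error) {
    console.log('Rejected with: ' + error);
  });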
How to implement promises
So far, we have learned the concept of a promise, its basic ingredients, and some of the basic functions it offers in nearly all of its implementations, but how do these implementations use it? It's quite simple. Every implementation, either in a language or in the form of a library, maps the basic concept of promises onto a compiler/interpreter or onto code. This allows the written code or functions to behave in the promise paradigm, which ultimately constitutes its implementation. Promises are now part of the standard package of many languages and, naturally, each has implemented them in its own way as per its needs.

Implementing promises in Angular.js
Promise is all about how asynchronous behavior can be applied to a certain part of an application or to the whole of it. There are many other JavaScript libraries in which the concept of promises exists, but in Angular.js it is present in a much more efficient way than in many other client-side libraries. Promises come in two flavors in Angular.js: one is $q and the other is Q. What is the difference between them? We will explore that in detail in the following sections. For now, we will look at what promise means to Angular.js.

There are many possible ways to implement promises in Angular.js. The most common one is to use the $q service, which is inspired by Chris Kowal's Q library. Angular.js mainly uses it to provide implementations of asynchronous methods. With Angular.js, the sequence of services runs from top to bottom, starting with $q, which is considered the top class; within it, many other methods are embedded, for example $q.reject() or $q.resolve(). Everything related to promises in Angular.js goes through the $q service.

Starting with the $q.when() method: it may look as though it creates a promise immediately, but in fact it only normalizes a value that may or may not be a promise. What $q.when() does depends on the value supplied to it. If the value provided is a promise, $q.when() simply passes it along; if it is not a promise, $q.when() wraps it in one.
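As a hedged illustration of this normalizing behavior (not taken from the book; it assumes a controller in which $q and $http have been injected):

// A plain value is wrapped in a promise...
$q.when('Sherlock Holmes').then(function(title) {
  console.log('Resolved with: ' + title);
});

// ...while a value that is already a promise is simply passed through.
var request = $http.get('/api/tv/serials/sherlockHolmes');
$q.when(request).then(function(response) {
  console.log('Resolved with status: ' + response.status);
});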
The schematics of using promises in Angular.js
Since Chris Kowal's Q library is the inspiration for Angular's promise implementation, Angular.js also follows it for its promise callbacks and return values. Many Angular.js services are promise oriented in their return type by default; this includes $interval, $http, and $timeout. However, there is a proper mechanism for using promises in Angular.js. Look at the following code and see how a promise maps itself within Angular.js:

var promise = AngularjsBackground();
promise.then(
  function(response) {
    // promise process
  },
  function(error) {
    // error reporting
  },
  function(progress) {
    // send progress
  });

All of the mentioned services in Angular.js return a single promise object. They may differ in the parameters they take, but in return all of them respond with a single promise object carrying multiple keys. For example, the success callback of $http.get receives four parameters named data, status, headers, and config:

$http.get('/api/tv/serials/sherlockHolmes')
  .success(function(data, status, headers, config) {
    $scope.movieContent = data;
  });

If we employ the promise concept here, the same code can be rewritten as:

var promise = $http.get('/api/tv/serials/sherlockHolmes');
promise.then(
  function(payload) {
    $scope.serialContent = payload.data;
  });

The preceding code is more concise and easier to maintain than the previous version, which makes Angular.js more approachable for the engineers using it.

Promise as a handle for callback
The implementation of promises in Angular.js defines the use of a promise as a handle for callbacks. The implementation not only defines how to use promises in Angular.js, but also what steps to take to make your services "promise-returning". This means that you do something asynchronously, and once that job is completed, you trigger the then() method either to conclude your task or to pass it on to another then() method: asynchronous_task.then().then().done(). In its simpler form, you can do the following to use a promise as a handle for callbacks:

angular.module('TVSerialApp', [])
  .controller('GetSerialsCtrl',
    function($log, $scope, TeleService) {
      $scope.getserialListing = function(serial) {
        var promise =
          TeleService.getserial('SherlockHolmes');
        promise.then(
          function(payload) {
            $scope.listingData = payload.data;
          },
          function(errorPayload) {
            $log.error('failure loading serial', errorPayload);
          });
      };
    })
  .factory('TeleService', function($http) {
    return {
      getserial: function(id) {
        return $http.get('/api/tv/serials/sherlockHolmes' + id);
      }
    };
  });

Blindly passing arguments and nested promises
Whichever promise service you use, you must be very sure of what you are passing and how it can affect the overall working of your promise function. Blindly passing arguments can cause confusion for the controller, as it has to deal with its own results while also handling other requests. Say we are dealing with the $http.get service and you blindly pass too much load to it; since it has to deal with its own results in parallel, it may get confused, which can result in callback hell. If you want to post-process the result instead, you have to deal with an additional error handler on $http. That way, the controller doesn't have to deal with its own result, and cases such as 404s and redirects are handled for you.
You can also redo the preceding scenario by building your own promise and bringing back a result of your choice, with the payload that you want, using the following code:

factory('TVSerialApp', function($http, $log, $q) {
  return {
    getSerial: function(serial) {
      var deferred = $q.defer();
      $http.get('/api/tv/serials/sherlockHolmes' + serial)
        .success(function(data) {
          deferred.resolve({
            title: data.title,
            cost: data.price});
        }).error(function(msg, code) {
          deferred.reject(msg);
          $log.error(msg, code);
        });
      return deferred.promise;
    }
  };
});

Building a custom promise has many advantages. You can control the input and output calls, log error messages, transform the inputs into the desired outputs, and share status by using the deferred.notify(msg) method.

Deferred objects or composed promises
Since a custom promise in Angular.js can be hard to handle at times and can malfunction in the worst case, promises provide another way to implement this. You transform your response within a then() method and return the transformed result to the calling method in an autonomous way. Consider the same code we used in the previous section:

this.getSerial = function(serial) {
  return $http.get('/api/tv/serials/sherlockHolmes' + serial)
    .then(
      function (response) {
        return {
          title: response.data.title,
          cost: response.data.price
        };
      });
};

The output we get from the preceding method is chained, promised, and transformed. You can reuse the output for another output, chain it to another promise, or simply display the result. The controller can then be reduced to the following lines of code:

$scope.getSerial = function(serial) {
  service.getSerial(serial)
    .then(function(serialData) {
      $scope.serialData = serialData;
    });
};

This significantly reduces the lines of code. It also helps us maintain the service level, since the automatic fail-safe mechanism in then() transforms a failure into a failed promise and keeps the rest of the code intact.

Dealing with nested calls
While using internal return values in the success function, the promise code above is missing one obvious thing: the error handler. A missing error handler can cause your code to stall or get into a catastrophic state from which it might not recover. To overcome this, simply throw the errors. How? See the following code:

this.getserial = function(serial) {
  return $http.get('/api/tv/serials/sherlockHolmes' + serial)
    .then(
      function (response) {
        return {
          title: response.data.title,
          cost: response.data.price
        };
      },
      function (httpError) {
        // translate the error
        throw httpError.status + " : " +
          httpError.data;
      });
};

Now, whenever the code runs into an error-like situation, it will return a single string, not a bunch of $http statuses or config details. This can also save your entire code from coming to a standstill and help you in debugging. Furthermore, if you attach log services, you can pinpoint the location that caused the error.
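As a small, hedged sketch of the calling side (assuming the service above is injected as service and that $log is available), the thrown string arrives as the rejection value and can be picked up with a rejection handler:

$scope.getSerial = function(serial) {
  service.getserial(serial)
    .then(function(serialData) {
      $scope.serialData = serialData;
    })
    .catch(function(message) {
      // message is the single "status : data" string thrown above
      $log.error('failure loading serial', message);
    });
};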
Concurrency in Angular.js
We all want to achieve maximum output in a single slot of time by invoking multiple services and getting results back from them. Angular.js provides this functionality via its $q.all service; you can invoke many services at a time, and if you want to join all or any of them, you just need then() to bring them together in the sequence you want. Let's take the following array as the payload:

[
  { url: 'myUr1.html' },
  { url: 'myUr2.html' },
  { url: 'myUr3.html' }
]

Now this array will be used by the following code:

service('asyncService', function($http, $q) {
  return {
    getDataFrmUrls: function(urls) {
      var deferred = $q.defer();
      var collectCalls = [];
      angular.forEach(urls, function(url) {
        collectCalls.push($http.get(url.url));
      });
      $q.all(collectCalls)
        .then(
          function(results) {
            deferred.resolve(
              JSON.stringify(results));
          },
          function(errors) {
            deferred.reject(errors);
          },
          function(updates) {
            deferred.notify(updates);
          });
      return deferred.promise;
    }
  };
});

A promise is created by executing $http.get for each URL and is added to an array. The $q.all function takes this array of promises as input and processes all the results into a single promise containing an object with each answer. This gets converted to JSON and passed on to the calling function. The result might look like this:

[
  promiseOneResultPayload,
  promiseTwoResultPayload,
  promiseThreeResultPayload
]

The combination of success and error
$http returns a promise, and you can define success and error handlers on it. Many think that these functions are a standard part of a promise, but in reality they are not what they seem to be. Using a promise means calling then(), which takes two parameters: a callback function for success and a callback function for failure. Imagine this code:

$http.get("/api/tv/serials/sherlockHolmes")
  .success(function(name) {
    console.log("The tele serial name is : " + name);
  })
  .error(function(response, status) {
    console.log("Request failed " + response + " status code: " + status);
  });

With then(), this can be rewritten as:

$http.get("/api/tv/serials/sherlockHolmes")
  .then(function(response) {
    console.log("The tele serial name is :" + response.data);
  }, function(result) {
    console.log("Request failed : " + result);
  });

You can use either the success/error functions or then(), depending on the situation, but there is a benefit to the $http helpers: they are convenient. The error function provides the response and status, and the success function provides the response data. However, they are not considered a standard part of a promise.
Anyone can add their own versions of these functions to a promise, as shown in the following code:

// my own created promise of success function
promise.success = function(fn) {
  promise.then(function(res) {
    fn(res.data, res.status, res.headers, config);
  });
  return promise;
};

// my own created promise of error function
promise.error = function(fn) {
  promise.then(null, function(res) {
    fn(res.data, res.status, res.headers, config);
  });
  return promise;
};

The safe approach
So the real question is what to use with $http: success or error? Keep in mind that there is no standard way of writing a promise; we have to consider many possibilities. If you change your code so that your promise is not returned from $http (for instance, when we load data from a cache), your code will break if it expects success or error to be there. So, the best approach is to use then() whenever possible. This not only generalizes the overall approach to writing promises, but also removes guesswork from your code.

Route your promise
Angular.js has a great feature for routing your promises. This feature is helpful when you are dealing with more than one promise at a time. Here is how you can achieve routing, through the following code:

$routeProvider
  .when('/api/', {
    templateUrl: 'index.php',
    controller: 'IndexController'
  })
  .when('/video/', {
    templateUrl: 'movies.php',
    controller: 'moviesController'
  })

As you can observe, we have two routes: the api route takes us to the index page with IndexController, and the video route takes us to the movies page.

app.controller('moviesController', function($scope, MovieService) {
  $scope.name = null;
  MovieService.getName().then(function(name) {
    $scope.name = name;
  });
});

There is a problem here: until the MovieService class gets the name from the backend, the name is null. This means that if our view binds to the name, it is first empty and then set. This is where the router comes in. The router resolves the problem of the name being set to null. Here's how we can do it:

var getName = function(MovieService) {
  return MovieService.getName();
};

$routeProvider
  .when('/api/', {
    templateUrl: 'index.php',
    controller: 'IndexController'
  })
  .when('/video/', {
    templateUrl: 'movies.php',
    controller: 'moviesController',
    resolve: {
      name: getName
    }
  })

After adding the resolve, we can revisit our code for the controller, injecting the resolved value:

app.controller('moviesController', function($scope, name) {
  $scope.name = name;
});

You can also define multiple resolves for a route to get the best possible output:

$routeProvider
  .when('/video', {
    templateUrl: '/MovieService.php',
    controller: 'MovieServiceController',
    // adding the resolves here
    resolve: {
      name: getName,
      MovieService: getMovieService,
      anythingElse: getSomeThing,
      someThing: getMoreSomeThing
    }
  })

An introduction to WinRT
Our first stop on the technology tour is WinRT. What is WinRT? It is short for Windows Runtime. This is a platform provided by Microsoft to build applications for the Windows 8+ operating system. It supports application development in C++/CX, C# (C sharp), VB.NET, TypeScript, and JavaScript. Microsoft adopted JavaScript as one of its prime, first-class tools for developing cross-browser apps and for development on other related devices.
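As a brief, hedged taste of what this looks like in practice, WinJS ships with its own promise implementation; the following sketch (an illustration, not an extract from the book) uses WinJS.xhr, which returns a promise for an XMLHttpRequest:

WinJS.xhr({ url: "/api/tv/serials/sherlockHolmes" }).then(
  function(result) {
    // result is the completed request object
    console.log("The tele serial name is : " + result.responseText);
  },
  function(error) {
    console.log("Request failed : " + error.statusText);
  });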
We are now fully aware of the pros and cons of using JavaScript, which is what has brought us here to implement promises.

Summary
This article is intended to give you a sense of what we have in the book for you. Focusing on Angular.js here doesn't mean it is the only technology covered for the implementation of promises; it is simply meant to show how the flow of information goes from a simple to an advanced level, and how easy it is to follow the context of the chapters. The book also covers Node.js, jQuery, and WinRT, so that readers of different experience levels can read, understand, learn quickly, and become experts in promises.

Resources for Article:
Further resources on this subject: Optimizing JavaScript for iOS Hybrid Apps [article], Installing jQuery [article], Cordova Plugins [article]


Role Management

Packt
23 Jul 2015
15 min read
In this article by Gavin Henrick and Karen Holland, authors of the book Moodle Administration Essentials, we look at roles, which play a key part in what users are able to do on a Moodle site. Roles restrict users' access to only the data they should have access to, and determine whether or not they are able to alter it or add to it. In each course, every user is assigned a role when they are enrolled, such as teacher, student, or a customized role. In this article, we deal with the essential areas of role management that every administrator may have to deal with:

- Cloning a role
- Creating a new role
- Creating a course requester role
- Overriding a permission in a role in a course
- Testing a role
- Manually adding a role to a user in a course
- Enabling self-enrolment for a course

(For more resources related to this topic, see here.)

Understanding terminologies
There are some key terms used to describe users' abilities in Moodle and how they are defined, as follows:

- Role: A role is a set or collection of permissions on different capabilities. There are default roles, such as teacher and student, which have predefined sets of permissions.
- Capability: A capability is a specific behavior in Moodle, such as Start new discussions (mod/forum:startdiscussion), which can have a permission set within a role, such as Allow or Not set/Inherit.
- Permission: A permission is associated with a capability. There are four possible values: Not set, Allow, Prevent, or Prohibit.
  - Not set: There is no specific setting for this role, and Moodle will determine whether it is allowed from any setting in a higher context.
  - Allow: The permission is explicitly granted for the capability.
  - Prevent: The permission is removed for the capability, even if it is allowed in a higher context. However, it can be overridden at a specific context.
  - Prohibit: The permission is completely denied and cannot be overridden at any lower context.

By default, the only configuration option displayed is Allow. To show the full list of options on the role edit page, click on the Show advanced button, just above the Filter option.

- Context: A context is an area of Moodle, such as the whole system, a category, a course, an activity, a block, or a user. A role has permissions for capabilities in a specific context. An example of this is a student being able to start a discussion in a specific forum; this is set up by setting the permission for the capability Start new discussions to Allow for the Student role on that specific forum.

Standard roles
There are a number of different roles configured in Moodle; by default, these are:

- Site administrator: The site administrator can do everything on the site, including creating the site structure, courses, activities, and resources, and managing user accounts.
- Manager: The manager can access courses and modify them. Managers usually do not participate in teaching courses.
- Course creator: The course creator can create courses when assigned rights in a category.
- Teacher: The teacher can do anything within a course, including adding and removing resources and activities, communicating with students, and grading them.
- Non-editing teacher: The non-editing teacher can teach and communicate in courses and grade students, but cannot alter or add activities, nor change the course layout or settings.
- Student: The student can access and participate in courses, but cannot create or edit resources or activities within a course.
- Guest: The guest can view courses if allowed, but cannot participate.
Guests have minimal privileges and usually cannot enter text anywhere.
- Authenticated user: The role that all logged-in users get.
- Authenticated user on the front page: A logged-in user role for the front page only.

Managing role permissions
Let's learn how to manage permissions for existing roles in Moodle.

Cloning a role
It is possible to duplicate an existing role in Moodle. The main reason for doing this is so that you can have a variation of an existing role, such as a teacher role with reduced capabilities. For instance, to stop a teacher from being able to add or remove students in a course, you can create a course editing role, which is a clone of the standard editingteacher role with the enrolment aspects removed. This is typically done when students are added to courses centrally by a student management system. To duplicate a role, in this case editing teacher:

1. Log in with an administrator-level user account.
2. In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
3. Click on the Add a new role button.
4. Select an existing role from the Use role or archetype dropdown.
5. Click on Continue.
6. Enter the short role name in the Short name field. This must be unique.
7. Enter the full role name in the Custom full name field. This is what appears in the user interface in Moodle.
8. Enter an explanation for the role in the Description field. This should explain why the role was created and what changes from the default were planned.
9. Scroll to the bottom of the page.
10. Click on Create this role.

This will create a duplicate version of the teacher role with all the same permissions and capabilities.

Creating a new role
It is also possible to create a new role. The main reason for doing this would be to have a specific role for a specific task and nothing else, such as a user that can manage users only. This is the alternative to cloning one of the existing roles and then disabling everything except the one set of capabilities required. To create a new role:

1. Log in with an administrator-level user account.
2. In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
3. Click on the Add a new role button.
4. Select No role from the Use role or archetype dropdown.
5. Click on Continue.
6. Enter the short role name in the Short name field. This must be unique.
7. Enter the full role name in the Custom full name field. This is what appears in the user interface in Moodle.
8. Enter an explanation for the role in the Description field. This should explain why the role was created.
9. Select the appropriate Role archetype, in this case None. The role archetype determines the permissions when a role is reset to default, and any new permissions for the role when the site is upgraded.
10. Select the Context types where this role may be assigned.
11. Set the permissions as required by searching for the appropriate capability and clicking on Allow.
12. Scroll to the bottom of the page.
13. Click on Create this role.

This will create the new role with the settings as defined. If you want the new role to appear in the course listing, you must enable it by navigating to Administration block | Site administration | Appearance | Courses | Course Contacts.

Creating a course requester role
There is a core Moodle feature that enables users to request that a course be created. This is not normally used, especially as students and most teachers only have responsibility within their own course context.
So, it can be useful to create a role with just this ability, so that a faculty or department administrator can request a new course space when needed, without giving the ability to all users. There are a few steps in this process:

1. Remove the capability from other roles.
2. Set up the new role.
3. Assign the role to a user at the correct context.

Firstly, we remove the capability from other roles by altering the Authenticated user role as shown:

1. In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
2. Click on edit for the Authenticated user role.
3. Enter the request text into Filter.
4. Select the Not set radio button under moodle/course:request to change the Allow permission.
5. Scroll to the bottom of the page.
6. Click on Save changes.

Next, we create the new role with the specific capability set to Allow:

1. In the Administration block, navigate to Site administration | Users | Permissions | Define roles.
2. Click on the Add a new role button.
3. Select No role from the Use role or archetype dropdown.
4. Click on Continue.
5. Enter courserequester in the Short name field.
6. Enter Course Requester in the Custom full name field. This is what appears in the user interface in Moodle.
7. Enter the explanation for the role in the Description field.
8. Select system under Context types where this role may be assigned.
9. Change moodle/course:request to Allow.
10. Scroll to the bottom of the page.
11. Click on Create this role.

Lastly, you assign the role to a user at the system level. This is different from giving a role to a user in a course:

1. In the Administration block, navigate to Site administration | Users | Permissions | Assign system roles.
2. Click on Course Requester.
3. Search for the specific user in the Potential users list.
4. Select the user from the list, using the Search filter if required.
5. Click on the Add button.

Any roles you assign from this page will apply to the assigned users throughout the entire system, including the front page and all the courses.

Applying a role override for a specific context
You can change how a specific role behaves in a certain context by applying an override, thereby granting or removing the permission in that context. As an example: in general, students cannot rate a forum post in a forum in their course. When ratings are enabled, only the manager, teacher, and non-editing teacher roles have permission to rate posts. So, to enable students to rate posts, you need to change the permissions for the Student role on that specific forum:

1. Browse to the forum where you want to allow students to rate forum posts. This process assumes that rating has already been enabled in the forum.
2. From the forum page, go to the Administration block, then to Forum administration, and click on the Permissions link.
3. Scroll down the page to locate the Rate posts permission. This is the mod/forum:rate capability. By default, you should not see the Student role listed to the right of the permission.
4. Click on the plus sign (+) that appears below the roles already listed for the Rate posts permission.
5. Select Student from the Select role menu, and click on the Allow button.

Student should now appear in the list next to the Rate posts permission, and participants will be able to rate each other's posts in this forum. Making the change in this forum does not impact other forums.

Testing a role
It is possible to use the Switch role to feature to see how another role behaves in different contexts.
However, the best way to test a role is to create a new user account and then assign that user the role in the correct context, as follows:

1. Create the new user by navigating to Site administration | Users | Accounts | Add a new user.
2. Assign your new user the role in the correct context, either as a system role or in a course, as required.
3. Log in with this user in a different browser to check what they can do and see.

Having two different roles logged in at the same time, each in a different browser, means that you can test the new role in one browser while still logged in as the administrator in your main browser. This saves a lot of time, especially when building courses.

Manually adding a user to a course
Depending on what your role is in a course, you can add other users to the course by manually enrolling them. In this example, we are logged in as the administrator, who can add a number of roles, including:

- Manager
- Teacher
- Non-editing teacher
- Student

To enrol a user in your course:

1. Go to the Course administration menu in the Administration block.
2. Expand the Users section.
3. Click on the Enrolled users link. This brings up the enrolled users page, which lists all enrolled users; it can be filtered by role and by default shows the enrolled participants only.
4. Click on the Enrol users button.
5. From the Assign roles dropdown, select which role you want to assign to the user. This is limited to the roles which you can assign.
6. Search for the user that you want to add to the course.
7. Click on the Enrol button to enrol the user with the assigned role.
8. Click on Finish enrolling users.

The page will now reload with the new enrolments. To see the users that you added with the given role, you may need to change the filter to the specific role type. This is how you manually add someone to a Moodle course.

User upload CSV files allow you to include optional enrolment fields, which enable you to enrol existing or new users. For example, a user upload CSV file can enrol each user as a student in two specified courses, identified by their course shortnames: Teaching with Moodle and Induction.

Enabling self-enrolment for a course
In addition to manually adding users to a course, you can configure a course so that students can self-enrol onto it, either with or without an enrolment key or password. There are two dependencies required for this to work.

Firstly, the self-enrolment plugin needs to be enabled at the site level. This is found in the Administration block by navigating to Site Administration | Plugins | Enrolments | Manage enrol plugins. If it is not enabled, you need to click on the eye icon to enable it. It is enabled by default in Moodle.

Secondly, you need to enable the self-enrolment method in the course itself and configure it accordingly. In the course in which you want to enable self-enrolment, the following are the essential steps:

1. In the Administration block, navigate to Administration | Course administration | Users | Enrolment methods.
2. Click on the eye icon to turn on the Self enrolment method.
3. Click on the cogwheel icon to access the configuration for the Self enrolment method.
4. You can optionally enter a name for the method in the Custom instance name field; however, this is not required. Typically, you would do this if you are enabling multiple self-enrolment options and want to identify them separately.
5. Enter an enrolment key or password into the Enrolment key field if you want to restrict self-enrolment to those who have been issued the password.
Once a user knows the password, they will be able to enrol. If you are using groups in your course and configure them with different passwords for each group, you can use the Use group enrolment keys option, so that the passwords from the different groups automatically place the self-enrolling users into those groups when they enrol with the correct key/password.

If you want self-enrolment to enrol users as students, leave the Default assigned role as Student, or change it to whichever role you intend it to operate for. Some organizations give one password for students to enrol with and another for teachers, so that the organization does not need to manage the enrolment centrally; having two self-enrolment methods set up, one pointing at the student role and one at the teacher role, makes this possible.

If you want to control the length of enrolment, you can do this by setting the Enrolment duration option. In this case, you can also issue a warning to the user before it expires by using the Notify before enrolment expires and Notification threshold options. If you want to specify the period of enrolment, you can set this with Start date and End date. You can un-enrol users who are inactive for a period of time by setting Unenrol inactive after to a specific number of days.

Set Max enrolled users if you want to limit the number of users using this specific password to enrol in the course. This is useful if you are selling a specific number of seats on the course. The self-enrolment method may also be restricted to members of a specified cohort only; you can enable this by selecting a cohort from the dropdown for the Only cohort members setting.

Leave Send course welcome message ticked if you want to send a message to those who self-enrol. This is recommended. Enter a welcome message in the Custom welcome message field. This will be sent to all users who self-enrol using this method, and can be used to remind them of key information about the course, such as a starting date, or to ask them to do something like complete the icebreaker activity in the course. Finally, click on Save changes.

Once enabled and configured, users will be able to self-enrol and will be added with whatever role you selected. This is dependent on the course itself being visible to users browsing the site.

Other custom roles
Moodle docs has a list of potential custom roles with instructions on how to create them, including:

- Parent
- Demo teacher
- Forum moderator
- Forum poster role
- Calendar editor
- Blogger
- Quiz user with unlimited time
- Question creator
- Question sharer
- Course requester role
- Feedback template creator
- Grading forms publisher
- Grading forms manager
- Grade view
- Gallery owner role

For more information, check https://docs.moodle.org/29/en/Creating_custom_roles.

Summary
In this article, we looked at the core administrator tasks in role management, and the different aspects to consider when deciding which approach to take in either extending or reducing role permissions.

Resources for Article:
Further resources on this subject: Moodle for Online Communities [article], Configuring your Moodle Course [article], Moodle Plugins [article]


Eloquent… without Laravel!

Packt
23 Jul 2015
9 min read
In this article by Francesco Malatesta, author of the book Learning Laravel's Eloquent, we will learn everything about Eloquent, starting from the very basics and going through models, relationships, and other topics. You have probably started to like Eloquent and are thinking about using it in your next project. In fact, creating an application without a single SQL query is tempting. Maybe you have also shown it to your boss and convinced him/her to use it in your next production project. However, there is a little problem. Yes, the next project isn't so new. It already exists, and, despite everything, it doesn't use Laravel! You start to shiver. This is sad, because you spent the last week studying this new ORM, a really cool one, and were ready to move forward.

There is always a solution! You are a developer! Also, the solution is not hard to find. If you want, you can use Eloquent without Laravel. Actually, Laravel is not a monolithic framework. It is made up of several separate parts, which are combined together to build something greater. However, nothing prevents you from using only selected packages in another application. (For more resources related to this topic, see here.)

So, what are we going to see in this article? First of all, we will explore the structure of the database package and see what is inside it. Then, you will learn how to install the illuminate/database package separately for your project and how to configure it for first use. Then, you will encounter some examples: first, the Eloquent ORM, where you will learn how to define models and use them. Having done this, as a little extra, I will show you how to use the Query Builder (remember that the illuminate/database package isn't just Eloquent). Maybe you would also enjoy the Schema Builder class; I will cover it too, don't worry! We will cover the following:

- Exploring the directory structure
- Installing and configuring the database package
- Using the ORM
- Using the Query and Schema Builders
- Summary

Exploring the directory structure
As I mentioned before, the key step in order to use Eloquent in your application without Laravel is to use the illuminate/database package. So, before we install it, let's examine it a little. You can see the package contents here: https://github.com/illuminate/database. This is what you will probably see:

- Capsule: The capsule manager is a fundamental component. It instantiates the service container and loads some dependencies.
- Connectors: The database package can communicate with various DB systems, for instance SQLite, MySQL, or PostgreSQL. Every type of database has its own connector, and this is the folder in which you will find them.
- Console: The database package isn't just Eloquent with a bunch of connectors. In this specific folder, you will find everything related to console commands, such as artisan db:seed or artisan migrate.
- Eloquent: Every single Eloquent class is placed here.
- Migrations: Don't confuse this with the Console folder. Every class related to migrations is stored here. When you type artisan migrate in your terminal, you are calling a class that is placed here.
- Query: The Query Builder is placed here.
- Schema: Everything related to the Schema Builder is placed here.

In the main folder, you will also find some other files. However, don't worry, as you don't need to know what they are.
If you open the composer.json file, take a look at the following "require" section:

"require": {
    "php": ">=5.4.0",
    "illuminate/container": "5.1.*",
    "illuminate/contracts": "5.1.*",
    "illuminate/support": "5.1.*",
    "nesbot/carbon": "~1.0"
},

As you can see, the database package has some prerequisites that you can't avoid. However, the container is quite small, and the same goes for contracts (just some interfaces) and illuminate/support. Eloquent uses Carbon (https://github.com/briannesbitt/Carbon) to deal with dates in a smarter way. So, if you are seeing this for the first time and you are confused, don't worry! Everything is all right. Now that you know what you can find in this package, let's see how to install it and configure it for the first time.

Installing and configuring the database package
Let's start with the setup. First of all, we will install the package using Composer as usual. After that, we will configure the capsule manager in order to get started.

Installing the package
Installing the illuminate/database package is really easy. All you have to do is add "illuminate/database" to the "require" section of your composer.json file, like this:

"require": {
    "illuminate/database": "5.0.*"
},

Then type composer update into your terminal, and wait for a few seconds. Another way is to include it with the shortcut in your project folder, obviously from the terminal:

composer require illuminate/database

No matter which method you chose, you have just installed the package.

Configuring the package
Time to use the capsule manager! In your project, you will use something like this to get started:

use Illuminate\Database\Capsule\Manager as Capsule;

$capsule = new Capsule;

$capsule->addConnection([
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'database',
    'username'  => 'root',
    'password'  => 'password',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
]);

// Set the event dispatcher used by Eloquent models... (optional)
use Illuminate\Events\Dispatcher;
use Illuminate\Container\Container;
$capsule->setEventDispatcher(new Dispatcher(new Container));

The config syntax I used is exactly the same as you can find in the config/database.php configuration file. The only difference is that this time you are explicitly using an instance of the capsule manager in order to do everything. In the second part of the code, I am setting up the event dispatcher. You must do this if your project requires events. However, events are not included by default in this package, so you will have to manually add the illuminate/events dependency to your composer.json file.

Now, the final step! Add this code to your setup file:

// Make this Capsule instance available globally via static methods... (optional)
$capsule->setAsGlobal();

// Setup the Eloquent ORM... (optional; unless you've used setEventDispatcher())
$capsule->bootEloquent();

With setAsGlobal() called on the capsule manager, you can set it as a global component in order to use it with static methods. You may like this or not; the choice is yours. The final line boots Eloquent, so you will need it. However, this is also an optional instruction; in some situations you may need the Query Builder only. Then there is nothing else to do! Your application is now configured with the database package (and Eloquent)!

Using the ORM
Using the Eloquent ORM in a non-Laravel application is not a big change. All you have to do is declare your model as you are used to doing.
Then, you need to call it and use it as you are used to. Here is a perfect example of what I am talking about:

use Illuminate\Database\Eloquent\Model;

class Book extends Model {

  // some attributes here...
  protected $table = 'my_books_table';

  // some scopes here...
  public function scopeNewest()
  {
    // query here...
  }

}

Exactly as you did with Laravel, because the package you are using is the same. So, no worries! If you want to use the model you just created, then use the following:

$books = Book::newest()->take(5)->get();

This also applies to relationships, observers, and so on. Everything is the same. In order to use the database package and the ORM exactly as you would in Laravel, remember to set up the project structure in a way that follows the PSR-4 autoloading convention.

Using the Query and Schema Builders
It's not just about the ORM; with the database package, you can also use the Query and Schema Builders. Let's discover how!

The Query Builder
The Query Builder is also very easy to use. The only difference, this time, is that you go through the capsule manager object, like this:

$books = Capsule::table('books')
             ->where('title', '=', "Michael Strogoff")
             ->first();

However, the result is still the same. Also, if you like the DB facade in Laravel, you can use the capsule manager class in the same way:

$book = Capsule::select('select title, pages_count from books where id = ?', array(12));

The Schema Builder
Now, without Laravel, you don't have migrations. However, you can still use the Schema Builder without Laravel, like this:

Capsule::schema()->create('books', function($table)
{
    $table->increments('id');
    $table->string('title', 30);
    $table->integer('pages_count');
    $table->decimal('price', 5, 2);
    $table->text('description');
    $table->timestamps();
});

Previously, you would have called the create() method of the Schema facade. This time it is a little different: you use the create() method chained to the schema() method of the Capsule class. Obviously, you can use any Schema class method in this way. For instance, you could call something like the following:

Capsule::schema()->table('books', function($table)
{
    $table->string('title', 50)->change();
    $table->decimal('special_price', 5, 2);
});

And you are good to go! Remember that if you want to unlock some Schema Builder-specific features, you will need to install other dependencies. For example, do you want to rename a column? You will need the doctrine/dbal dependency package.

Summary
I decided to add this article because many people ask me how to use Eloquent without Laravel, mostly because they like the framework but can't migrate an already started project in its entirety. Also, I think that it's cool to know, in a certain sense, what you can find under the hood. It is always about curiosity. Curiosity opens new paths, and gives you a choice to solve a problem in a new and more elegant way. In these few pages, I have just scratched the surface. I want to give you some advice: explore the code. The best way to write good code is to read good code.

Resources for Article:
Further resources on this subject: Your First Application [article], Building a To-do List with Ajax [article], Laravel 4 - Creating a Simple CRUD Application in Hours [article]


Deployment and Maintenance

Packt
20 Jul 2015
21 min read
In this article by Sandro Pasquali, author of Deploying Node.js, we will learn about the following:

- Automating the deployment of applications, including a look at the differences between continuous integration, delivery, and deployment
- Using Git to track local changes and triggering deployment actions via webhooks when appropriate
- Using Vagrant to synchronize your local development environment with a deployed production server
- Provisioning a server with Ansible

Note that application deployment is a complex topic with many dimensions that are often considered within unique sets of needs. This article is intended as an introduction to some of the technologies and themes you will encounter. Also, note that scaling issues are part and parcel of deployment. (For more resources related to this topic, see here.)

Using GitHub webhooks
At the most basic level, deployment involves automatically validating, preparing, and releasing new code into production environments. One of the simplest ways to set up a deployment strategy is to trigger releases whenever changes are committed to a Git repository, through the use of webhooks. Paraphrasing the GitHub documentation, webhooks provide a way for notifications to be delivered to an external web server whenever certain actions occur on a repository.

In this section, we'll use GitHub webhooks to create a simple continuous deployment workflow, adding more realistic checks and balances. We'll build a local development environment that lets developers work with a clone of the production server code, make changes, and see the results of those changes immediately. As this local development build uses the same repository as the production build, the build process for a chosen environment is simple to configure, and multiple production and/or development boxes can be created with no special effort.

The first step is to create a GitHub (www.github.com) account if you don't already have one. Basic accounts are free and easy to set up. Now, let's look at how GitHub webhooks work.

Enabling webhooks
Create a new folder and insert the following package.json file:

{
  "name": "express-webhook",
  "main": "server.js",
  "dependencies": {
    "express": "~4.0.0",
    "body-parser": "^1.12.3"
  }
}

This ensures that Express 4.x is installed and includes the body-parser package, which is used to handle POST data. Next, create a basic server called server.js:

var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var port = process.env.PORT || 8082;

app.use(bodyParser.json());

app.get('/', function(req, res) {
  res.send('Hello World!');
});

app.post('/webhook', function(req, res) {
  // We'll add this next
});

app.listen(port);
console.log('Express server listening on port ' + port);

Enter the folder you've created, and build and run the server with npm install; npm start. Visit localhost:8082/ and you should see "Hello World!" in your browser.

Whenever any file changes in a given repository, we want GitHub to push information about the change to /webhook. So, the first step is to create a GitHub repository for the Express server mentioned in the code. Go to your GitHub account and create a new repository with the name 'express-webhook'. Once the repository is created, enter your local repository folder and run the following commands:

git init
git add .
git commit -m "first commit"
git remote add origin git@github.com:<your username>/express-webhook

You should now have a new GitHub repository and a local linked version.
The next step is to configure this repository to broadcast the push event. Navigate to the following URL:

https://github.com/<your_username>/express-webhook/settings

From here, navigate to Webhooks & Services | Add webhook (you may need to enter your password again). This is where you set up webhooks. Note that the push event is already set as the default, and, if asked, you'll want to disable SSL verification for now.

GitHub needs a target URL to POST to on change events. If you have your local repository in a location that is already web accessible, enter that now, remembering to append the /webhook route, as in http://www.example.com/webhook. If you are building on a local machine or on another limited network, you'll need to create a secure tunnel that GitHub can use. A free service to do this can be found at http://localtunnel.me/. Follow the instructions on that page, and use the custom URL provided to configure your webhook. Other good forwarding services can be found at https://forwardhq.com/ and https://meetfinch.com/.

Now that webhooks are enabled, the next step is to test the system by triggering a push event. Create a new file called readme.md (add whatever you'd like to it), save it, and then run the following commands:

git add readme.md
git commit -m "testing webhooks"
git push origin master

This will push changes to your GitHub repository. Return to the Webhooks & Services section for the express-webhook repository on GitHub, and you should see a recent delivery listed. This is a good thing! GitHub noticed your push and attempted to deliver information about the changes to the webhook endpoint you set, but the delivery failed, as we haven't configured the /webhook route yet; that's to be expected. Inspect the failed delivery payload by clicking on the last attempt, and you should see a large JSON document. In that payload, you'll find something like this:

"committer": {
  "name": "Sandro Pasquali",
  "email": "spasquali@gmail.com",
  "username": "sandro-pasquali"
},
"added": ["readme.md"],
"removed": [],
"modified": []

It should now be clear what sort of information GitHub will pass along whenever a push event happens. You can now configure the /webhook route in the demonstration Express server to parse this data and do something with that information, such as sending an e-mail to an administrator. For example, use the following code:

app.post('/webhook', function(req, res) {
  console.log(req.body);
});

The next time your webhook fires, the entire JSON payload will be displayed.
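As a hedged sketch of a slightly more useful handler (the commits, added, removed, and modified fields come from the payload shown above; everything else is an assumption for illustration), you could log exactly which files changed and acknowledge the delivery:

app.post('/webhook', function(req, res) {
  var payload = req.body;
  // A push payload carries one or more commit objects describing the change.
  (payload.commits || []).forEach(function(commit) {
    console.log('Added:    ', commit.added);
    console.log('Removed:  ', commit.removed);
    console.log('Modified: ', commit.modified);
  });
  // Respond quickly so GitHub records the delivery as successful.
  res.status(200).end();
});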
Here's what we will examine: How to create webhooks on GitHub programmatically How to catch and read webhook payloads How to use payload data to clone, test, and integrate changes How to use PM2 to safely manage and restart servers when code changes If you haven't already used fork on the autopilot repository, do that now. Clone the autopilot repository onto a server or someplace else where it is web-accessible. Follow the instructions on how to connect and push to the fork you've created on GitHub, and get familiar with how to pull and push changes, commit changes, and so on. PM2 delivers a basic deploy system that you might consider for your project (https://github.com/Unitech/PM2/blob/master/ADVANCED_README.md#deployment). Install the cloned autopilot repository with npm install; npm start. Once npm has installed dependencies, an interactive CLI application will lead you through the configuration process. Just hit the Enter key for all the questions, which will set defaults for a local development build (we'll build in production later). Once the configuration is complete, a new development server process controlled by PM2 will have been spawned. You'll see it listed in the PM2 manifest under autopilot-dev in the following screenshot: You will make changes in the /source directory of this development build. When you eventually have a production server in place, you will use git push on the local changes to push them to the autopilot repository on GitHub, triggering a webhook. GitHub will use POST on the information about the change to an Express route that we will define on our server, which will trigger the build process. The build runner will pull your changes from GitHub into a temporary directory, install, build, and test the changes, and if all is well, it will replace the relevant files in your deployed repository. At this point, PM2 will restart, and your changes will be immediately available. Schematically, the flow looks like this: To create webhooks on GitHub programmatically, you will need to create an access token. The following diagram explains the steps from A to B to C: We're going to use the Node library at https://github.com/mikedeboer/node-github to access GitHub. We'll use this package to create hooks on Github using the access token you've just created. Once you have an access token, creating a webhook is easy: var GitHubApi = require("github");github.authenticate({type: "oauth",token: <your token>});github.repos.createHook({"user": <your github username>,"repo": <github repo name>,"name": "web","secret": <any secret string>,"active": true,"events": ["push"],"config": {"url": "http://yourserver.com/git-webhook","content_type": "json"}}, function(err, resp) {...}); Autopilot performs this on startup, removing the need for you to manually create a hook. Now, we are listening for changes. As we saw previously, GitHub will deliver a payload indicating what has been added, what has been deleted, and what has changed. The next step for the autopilot system is to integrate these changes. It is important to remember that, when you use webhooks, you do not have control over how often GitHub will send changesets—if more than one person on your team can push, there is no predicting when those pushes will happen. The autopilot system uses Redis to manage a queue of requests, executing them in order. You will need to manage multiple changes in a way. For now, let's look at a straightforward way to build, test, and integrate changes. 
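To illustrate that queueing point before we dig into the code bundle: the following is not autopilot's Redis-backed implementation, only a minimal in-memory sketch showing how incoming webhook jobs can be serialized so that a single build runs at a time (runBuild is a hypothetical stand-in for the clone/install/test/integrate step):

// Hypothetical stand-in for the real clone/install/test/integrate step.
function runBuild(changeset, done) {
  console.log('Building changeset', changeset);
  setTimeout(done, 1000); // simulate an asynchronous build
}

var buildQueue = [];
var building = false;

function enqueueBuild(changeset) {
  buildQueue.push(changeset);
  next();
}

function next() {
  if (building || buildQueue.length === 0) {
    return;
  }
  building = true;
  var changeset = buildQueue.shift();
  runBuild(changeset, function(err) {
    if (err) {
      console.error('Build failed:', err);
    }
    building = false;
    next(); // pick up the next queued changeset, if any
  });
}

// For example, call enqueueBuild(req.body) from the /webhook route.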
In your code bundle, visit autopilot/swanson/push.js. This is a process runner on which fork has been used by buildQueue.js in that same folder. The following information is passed to it: The URL of the GitHub repository that we will clone The directory to clone that repository into (<temp directory>/<commit hash>) The changeset The location of the production repository that will be changed Go ahead and read through the code. Using a few shell scripts, we will clone the changed repository and build it using the same commands you're used to—npm install, npm test, and so on. If the application builds without errors, we need only run through the changeset and replace the old files with the changed files. The final step is to restart our production server so that the changes reach our users. Here is where the real power of PM2 comes into play. When the autopilot system is run in production, PM2 creates a cluster of servers (similar to the Node cluster module). This is important as it allows us to restart the production server incrementally. As we restart one server node in the cluster with the newly pushed content, the other clusters continue to serve old content. This is essential to keeping a zero-downtime production running. Hopefully, the autopilot implementation will give you a few ideas on how to improve this process and customize it to your own needs. Synchronizing local and deployed builds One of the most important (and often difficult) parts of the deployment process is ensuring that the environment an application is being developed, built, and tested within perfectly simulates the environment that application will be deployed into. In this section, you'll learn how to emulate, or virtualize, the environment your deployed application will run within using Vagrant. After demonstrating how this setup can simplify your local development process, we'll use Ansible to provision a remote instance on DigitalOcean. Developing locally with Vagrant For a long while, developers would work directly on running servers or cobble together their own version of the production environment locally, often writing ad hoc scripts and tools to smoothen their development process. This is no longer necessary in a world of virtual machines. In this section, we will learn how to use Vagrant to emulate a production environment within your development environment, advantageously giving you a realistic box to work on testing code for production and isolating your development process from your local machine processes. By definition, Vagrant is used to create a virtual box emulating a production environment. So, we need to install Vagrant, a virtual machine, and a machine image. Finally, we'll need to write the configuration and provisioning scripts for our environment. Go to http://www.vagrantup.com/downloads and install the right Vagrant version for your box. Do the same with VirtualBox here at https://www.virtualbox.org/wiki/Downloads. You now need to add a box to run. For this example, we're going to use Centos 7.0, but you can choose whichever you'd prefer. Create a new folder for this project, enter it, and run the following command: vagrant box add chef/centos-7.0 Usefully, the creators of Vagrant, HashiCorp, provide a search service for Vagrant boxes at https://atlas.hashicorp.com/boxes/search. You will be prompted to choose your virtual environment provider—select virtualbox. All relevant files and machines will now be downloaded. Note that these boxes are very large and may take time to download. 
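If you want to confirm that the box was downloaded and cached correctly (or see which boxes you already have), you can list them:

vagrant box list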
You'll now create a configuration file for Vagrant called Vagrantfile. As with npm, the init command quickly sets up a base file. Additionally, we'll need to inform Vagrant of the box we'll be using: vagrant init chef/centos-7.0 Vagrantfile is written in Ruby and defines the Vagrant environment. Open it up now and scan it. There is a lot of commentary, and it makes a useful read. Note the config.vm.box = "chef/centos-7.0" line, which was inserted during the initialization process. Now you can start Vagrant: vagrant up If everything went as expected, your box has been booted within Virtualbox. To confirm that your box is running, use the following code: vagrant ssh If you see a prompt, you've just set up a virtual machine. You'll see that you are in the typical home directory of a CentOS environment. To destroy your box, run vagrant destroy. This deletes the virtual machine by cleaning up captured resources. However, the next vagrant up command will need to do a lot of work to rebuild. If you simply want to shut down your machine, use vagrant halt. Vagrant is useful as a virtualized, production-like environment for developers to work within. To that end, it must be configured to emulate a production environment. In other words, your box must be provisioned by telling Vagrant how it should be configured and what software should be installed whenever vagrant up is run. One strategy for provisioning is to create a shell script that configures our server directly and point the Vagrant provisioning process to that script. Add the following line to Vagrantfile: config.vm.provision "shell", path: "provision.sh" Now, create that file with the following contents in the folder hosting Vagrantfile: # install nvmcurl https://raw.githubusercontent.com/creationix/nvm/v0.24.1/install.sh | bash# restart your shell with nvm enabledsource ~/.bashrc# install the latest Node.jsnvm install 0.12# ensure server default versionnvm alias default 0.12 Destroy any running Vagrant boxes. Run Vagrant again, and you will notice in the output the execution of the commands in our provisioning shell script. When this has been completed, enter your Vagrant box as the root (Vagrant boxes are automatically assigned the root password "vagrant"): vagrant sshsu You will see that Node v0.12.x is installed: node -v It's standard to allow password-less sudo for the Vagrant user. Run visudo and add the following line to the sudoers configuration file: vagrant ALL=(ALL) NOPASSWD: ALL Typically, when you are developing applications, you'll be modifying files in a project directory. You might bind a directory in your Vagrant box to a local code editor and develop in that way. Vagrant offers a simpler solution. Within your VM, there is a /vagrant folder that maps to the folder that Vagrantfile exists within, and these two folders are automatically synced. So, if you add the server.js file to the right folder on your local machine, that file will also show up in your VM's /vagrant folder. Go ahead and create a new test file either in your local folder or in your VM's /vagrant folder. You'll see that file synchronized to both locations regardless of where it was originally created. Let's clone our express-webhook repository from earlier in this article into our Vagrant box. 
Add the following lines to provision.sh: # install various packages, particularly for gityum groupinstall "Development Tools" -yyum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel-yyum install git -y# Move to shared folder, clone and start servercd /vagrantgit clone https://github.com/sandro-pasquali/express-webhookcd express-webhooknpm i; npm start Add the following to Vagrantfile, which will map port 8082 on the Vagrant box (a guest port representing the port our hosted application listens on) to port 8000 on our host machine: config.vm.network "forwarded_port", guest: 8082, host: 8000 Now, we need to restart the Vagrant box (loading this new configuration) and re-provision it: vagrant reloadvagrant provision This will take a while as yum installs various dependencies. When provisioning is complete, you should see this as the last line: ==> default: Express server listening on port 8082 Remembering that we bound the guest port 8082 to the host port 8000, go to your browser and navigate to localhost:8000. You should see "Hello World!" displayed. Also note that in our provisioning script, we cloned to the (shared) /vagrant folder. This means the clone of express-webhook should be visible in the current folder, which will allow you to work on the more easily accessible codebase, knowing it will be automatically synchronized with the version on your Vagrant box. Provisioning with Ansible Configuring your machines by hand, as we've done previously, doesn't scale well. For one, it can be overly difficult to set and manage environment variables. Also, writing your own provisioning scripts is error-prone and no longer necessary given the existence of provisioning tools, such as Ansible. With Ansible, we can define server environments using an organized syntax rather than ad hoc scripts, making it easier to distribute and modify configurations. Let's recreate the provision.sh script developed earlier using Ansible playbooks: Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce or a set of steps in a general IT process. Playbooks are expressed in the YAML format (a human-readable data serialization language). To start with, we're going to change Vagrantfile's provisioner to Ansible. First, create the following subdirectories in your Vagrant folder: provisioningcommontasks These will be explained as we proceed through the Ansible setup. Next, create the following configuration file and name it ansible.cfg: [defaults]roles_path = provisioninglog_path = ./ansible.log This indicates that Ansible roles can be found in the /provisioning folder, and that we want to keep a provisioning log in ansible.log. Roles are used to organize tasks and other functions into reusable files. These will be explained shortly. Modify the config.vm.provision definition to the following: config.vm.provision "ansible" do |ansible|ansible.playbook = "provisioning/server.yml"ansible.verbose = "vvvv"end This tells Vagrant to defer to Ansible for provisioning instructions, and that we want the provisioning process to be verbose—we want to get feedback when the provisioning step is running. Also, we can see that the playbook definition, provisioning/server.yml, is expected to exist. 
Create that file now: ---- hosts: allsudo: yesroles:- commonvars:env:user: 'vagrant'nvm:version: '0.24.1'node_version: '0.12'build:repo_path: 'https://github.com/sandro-pasquali'repo_name: 'express-webhook' Playbooks can contain very complex rules. This simple file indicates that we are going to provision all available hosts using a single role called common. In more complex deployments, an inventory of IP addresses could be set under hosts, but, here, we just want to use a general setting for our one server. Additionally, the provisioning step will be provided with certain environment variables following the forms env.user, nvm.node_version, and so on. These variables will come into play when we define the common role, which will be to provision our Vagrant server with the programs necessary to build, clone, and deploy express-webhook. Finally, we assert that Ansible should run as an administrator (sudo) by default—this is necessary for the yum package manager on CentOS. We're now ready to define the common role. With Ansible, folder structures are important and are implied by the playbook. In our case, Ansible expects the role location (./provisioning, as defined in ansible.cfg) to contain the common folder (reflecting the common role given in the playbook), which itself must contain a tasks folder containing a main.yml file. These last two naming conventions are specific and required. The final step is creating the main.yml file in provisioning/common/tasks. First, we replicate the yum package loaders (see the file in your code bundle for the full list): ---- name: Install necessary OS programsyum: name={{ item }} state=installedwith_items:- autoconf- automake...- git Here, we see a few benefits of Ansible. A human-readable description of yum tasks is provided to a looping structure that will install every item in the list. Next, we run the nvm installer, which simply executes the auto-installer for nvm: - name: Install nvmsudo: noshell: "curl https://raw.githubusercontent.com/creationix/nvm/v{{ nvm.version }}/install.sh | bash" Note that, here, we're overriding the playbook's sudo setting. This can be done on a per-task basis, which gives us the freedom to move between different permission levels while provisioning. We are also able to execute shell commands while at the same time interpolating variables: - name: Update .bashrcsudo: nolineinfile: >dest="/home/{{ env.user }}/.bashrc"line="source /home/{{ env.user }}/.nvm/nvm.sh" Ansible provides extremely useful tools for file manipulation, and we will see here a very common one—updating the .bashrc file for a user. The lineinfile directive makes the addition of aliases, among other things, straightforward. The remainder of the commands follow a similar pattern to implement, in a structured way, the provisioning directives we need for our server. All the files you will need are in your code bundle in the vagrant/with_ansible folder. Once you have them installed, run vagrant up to see Ansible in action. One of the strengths of Ansible is the way it handles contexts. When you start your Vagrant build, you will notice that Ansible gathers facts, as shown in the following screenshot: Simply put, Ansible analyzes the context it is working in and only executes what is necessary to execute. If one of your tasks has already been run, the next time you try vagrant provision, that task will not run again. This is not true for shell scripts! 
In this way, editing playbooks and reprovisioning does not consume time redundantly changing what has already been changed. Ansible is a powerful tool that can be used for provisioning and much more complex deployment tasks. One of its great strengths is that it can run remotely—unlike most other tools, Ansible uses SSH to connect to remote servers and run operations. There is no need to install it on your production boxes. You are encouraged to browse the Ansible documentation at http://docs.ansible.com/index.html to learn more. Summary In this article, you learned how to deploy a local build into a production-ready environment and the powerful Git webhook tool was demonstrated as a way of creating a continuous integration environment. Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] API with MongoDB and Node.js [Article] So, what is Node.js? [Article]

REST APIs for social network data using py2neo

Packt
14 Jul 2015
20 min read
In this article wirtten by Sumit Gupta, author of the book Building Web Applications with Python and Neo4j we will discuss and develop RESTful APIs for performing CRUD and search operations over our social network data, using Flask-RESTful extension and py2neo extension—Object-Graph Model (OGM). Let's move forward to first quickly talk about the OGM and then develop full-fledged REST APIs over our social network data. (For more resources related to this topic, see here.) ORM for graph databases py2neo – OGM We discussed about the py2neo in Chapter 4, Getting Python and Neo4j to Talk Py2neo. In this section, we will talk about one of the py2neo extensions that provides high-level APIs for dealing with the underlying graph database as objects and its relationships. Object-Graph Mapping (http://py2neo.org/2.0/ext/ogm.html) is one of the popular extensions of py2neo and provides the mapping of Neo4j graphs in the form of objects and relationships. It provides similar functionality and features as Object Relational Model (ORM) available for relational databases py2neo.ext.ogm.Store(graph) is the base class which exposes all operations with respect to graph data models. Following are important methods of Store which we will be using in the upcoming section for mutating our social network data: Store.delete(subj): It deletes a node from the underlying graph along with its associated relationships. subj is the entity that needs to be deleted. It raises an exception in case the provided entity is not linked to the server. Store.load(cls, node): It loads the data from the database node into cls, which is the entity defined by the data model. Store.load_related(subj, rel_type, cls): It loads all the nodes related to subj of relationship as defined by rel_type into cls and then further returns the cls object. Store.load_indexed(index_name, key,value, cls): It queries the legacy index, loads all the nodes that are mapped by key-value, and returns the associated object. Store.relate(subj, rel_type, obj, properties=None): It defines the relationship between two nodes, where subj and cls are two nodes connected by rel_type. By default, all relationships point towards the right node. Store.save(subj, node=None): It save and creates a given entity/node—subj into the graph database. The second argument is of type Node, which if given will not create a new node and will change the already existing node. Store.save_indexed(index_name,key,value,subj): It saves the given entity into the graph and also creates an entry into the given index for future reference. Refer to http://py2neo.org/2.0/ext/ogm.html#py2neo.ext.ogm.Store for the complete list of methods exposed by Store class. Let's move on to the next section where we will use the OGM for mutating our social network data model. OGM supports Neo4j version 1.9, so all features of Neo4j 2.0 and above are not supported such as labels. Social network application with Flask-RESTful and OGM In this section, we will develop a full-fledged application for mutating our social network data and will also talk about the basics of Flask-RESTful and OGM. Creating object model Perform the following steps to create the object model and CRUD/search functions for our social network data: Our social network data contains two kind of entities—Person and Movies. 
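To see how those Store methods hang together before we build the real model, here is a minimal, hypothetical sketch; the connection details are placeholders, and the Person class and personIndex name anticipate the model and index we define in the steps below:

import py2neo
from py2neo import Graph
from py2neo.ext.ogm import Store

# Placeholder connection details -- substitute your own host and credentials.
py2neo.authenticate('localhost:7474', 'neo4j', 'password')
graph = Graph('http://localhost:7474/db/data/')
store = Store(graph)

# Person refers to the model class we are about to define.
alice = Person(name='Alice')
bob = Person(name='Bob')

# Persist both entities into the graph and a legacy index.
store.save_indexed('personIndex', 'name', alice.name, alice)
store.save_indexed('personIndex', 'name', bob.name, bob)

# Load one back by its indexed key, relate it to the other, and save again.
found = store.load_indexed('personIndex', 'name', 'Alice', Person)
store.relate(found[0], 'FRIEND', bob)
store.save(found[0])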
So as a first step let's create a package model and within the model package let's define a module SocialDataModel.py with two classes—Person and Movie: class Person(object):    def __init__(self, name=None,surname=None,age=None,country=None):        self.name=name        self.surname=surname        self.age=age        self.country=country   class Movie(object):    def __init__(self, movieName=None):        self.movieName=movieName Next, let's define another package operations and two python modules ExecuteCRUDOperations.py and ExecuteSearchOperations.py. The ExecuteCRUDOperations module will contain the following three classes: DeleteNodesRelationships: It will contain one method each for deleting People nodes and Movie nodes and in the __init__ method, we will establish the connection to the graph database. class DeleteNodesRelationships(object):    '''    Define the Delete Operation on Nodes    '''    def __init__(self,host,port,username,password):        #Authenticate and Connect to the Neo4j Graph Database        py2neo.authenticate(host+':'+port, username, password)        graph = Graph('http://'+host+':'+port+'/db/data/')        store = Store(graph)        #Store the reference of Graph and Store.        self.graph=graph        self.store=store      def deletePersonNode(self,node):        #Load the node from the Neo4j Legacy Index cls = self.store.load_indexed('personIndex', 'name', node.name, Person)          #Invoke delete method of store class        self.store.delete(cls[0])      def deleteMovieNode(self,node):        #Load the node from the Neo4j Legacy Index cls = self.store.load_indexed('movieIndex',   'name',node.movieName, Movie)        #Invoke delete method of store class            self.store.delete(cls[0]) Deleting nodes will also delete the associated relationships, so there is no need to have functions for deleting relationships. Nodes without any relationship do not make much sense for many business use cases, especially in a social network, unless there is a specific need or an exceptional scenario. UpdateNodesRelationships: It will contain one method each for updating People nodes and Movie nodes and, in the __init__ method, we will establish the connection to the graph database. 
class UpdateNodesRelationships(object):    '''      Define the Update Operation on Nodes    '''      def __init__(self,host,port,username,password):        #Write code for connecting to server      def updatePersonNode(self,oldNode,newNode):        #Get the old node from the Index        cls = self.store.load_indexed('personIndex', 'name', oldNode.name, Person)        #Copy the new values to the Old Node        cls[0].name=newNode.name        cls[0].surname=newNode.surname        cls[0].age=newNode.age        cls[0].country=newNode.country        #Delete the Old Node form Index        self.store.delete(cls[0])       #Persist the updated values again in the Index        self.store.save_unique('personIndex', 'name', newNode.name, cls[0])      def updateMovieNode(self,oldNode,newNode):          #Get the old node from the Index        cls = self.store.load_indexed('movieIndex', 'name', oldNode.movieName, Movie)        #Copy the new values to the Old Node        cls[0].movieName=newNode.movieName        #Delete the Old Node form Index        self.store.delete(cls[0])        #Persist the updated values again in the Index        self.store.save_ unique('personIndex', 'name', newNode.name, cls[0]) CreateNodesRelationships: This class will contain methods for creating People and Movies nodes and relationships and will then further persist them to the database. As with the other classes/ module, it will establish the connection to the graph database in the __init__ method: class CreateNodesRelationships(object):    '''    Define the Create Operation on Nodes    '''    def __init__(self,host,port,username,password):        #Write code for connecting to server    '''    Create a person and store it in the Person Dictionary.    Node is not saved unless save() method is invoked. Helpful in bulk creation    '''    def createPerson(self,name,surName=None,age=None,country=None):        person = Person(name,surName,age,country)        return person      '''    Create a movie and store it in the Movie Dictionary.    Node is not saved unless save() method is invoked. Helpful in bulk creation    '''    def createMovie(self,movieName):        movie = Movie(movieName)        return movie      '''    Create a relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createFriendRelationship(self,startPerson,endPerson):        self.store.relate(startPerson, 'FRIEND', endPerson)      '''    Create a TEACHES relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createTeachesRelationship(self,startPerson,endPerson):        self.store.relate(startPerson, 'TEACHES', endPerson)    '''    Create a HAS_RATED relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createHasRatedRelationship(self,startPerson,movie,ratings):      self.store.relate(startPerson, 'HAS_RATED', movie,{'ratings':ratings})    '''    Based on type of Entity Save it into the Server/ database    '''    def save(self,entity,node):        if(entity=='person'):            self.store.save_unique('personIndex', 'name', node.name, node)        else:            self.store.save_unique('movieIndex','name',node.movieName,node) Next we will define other Python module operations, ExecuteSearchOperations.py. 
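Before we turn to ExecuteSearchOperations.py, here is a short, hypothetical driver showing how the create operations above might be combined; the import path assumes the operations package described earlier, and the host, port, and credentials are placeholders:

from operations.ExecuteCRUDOperations import CreateNodesRelationships

# Placeholder connection details.
create = CreateNodesRelationships('localhost', '7474', 'neo4j', 'password')

# Build a few unsaved entities first.
bradley = create.createPerson('Bradley', 'Green', 24, 'US')
matthew = create.createPerson('Matthew', 'Cooper', 36, 'US')
avengers = create.createMovie('Avengers')

# Wire up relationships; nothing is persisted until save() is invoked.
create.createFriendRelationship(bradley, matthew)
create.createHasRatedRelationship(bradley, avengers, '4')

# Persist the nodes (and, with them, their relationships).
create.save('person', bradley)
create.save('person', matthew)
create.save('movie', avengers)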
This module will define two classes, each containing one method for searching Person and Movie node and of-course the __init__ method for establishing a connection with the server: class SearchPerson(object):    '''    Class for Searching and retrieving the the People Node from server    '''      def __init__(self,host,port,username,password):        #Write code for connecting to server      def searchPerson(self,personName):        cls = self.store.load_indexed('personIndex', 'name', personName, Person)        return cls;   class SearchMovie(object):    '''    Class for Searching and retrieving the the Movie Node from server    '''    def __init__(self,host,port,username,password):        #Write code for connecting to server      def searchMovie(self,movieName):        cls = self.store.load_indexed('movieIndex', 'name', movieName, Movie)        return cls; We are done with our data model and the utility classes that will perform the CRUD and search operation over our social network data using py2neo OGM. Now let's move on to the next section and develop some REST services over our data model. Creating REST APIs over data models In this section, we will create and expose REST services for mutating and searching our social network data using the data model created in the previous section. In our social network data model, there will be operations on either the Person or Movie nodes, and there will be one more operation which will define the relationship between Person and Person or Person and Movie. So let's create another package service and define another module MutateSocialNetworkDataService.py. In this module, apart from regular imports from flask and flask_restful, we will also import classes from our custom packages created in the previous section and create objects of model classes for performing CRUD and search operations. Next we will define the different classes or services which will define the structure of our REST Services. The PersonService class will define the GET, POST, PUT, and DELETE operations for searching, creating, updating, and deleting the Person nodes. 
class PersonService(Resource):    '''    Defines operations with respect to Entity - Person    '''    #example - GET http://localhost:5000/person/Bradley    def get(self, name):        node = searchPerson.searchPerson(name)        #Convert into JSON and return it back        return jsonify(name=node[0].name,surName=node[0].surname,age=node[0].age,country=node[0].country)      #POST http://localhost:5000/person    #{"name": "Bradley","surname": "Green","age": "24","country": "US"}    def post(self):          jsonData = request.get_json(cache=False)        attr={}        for key in jsonData:            attr[key]=jsonData[key]            print(key,' = ',jsonData[key] )        person = createOperation.createPerson(attr['name'],attr['surname'],attr['age'],attr['country'])        createOperation.save('person',person)          return jsonify(result='success')    #POST http://localhost:5000/person/Bradley    #{"name": "Bradley1","surname": "Green","age": "24","country": "US"}    def put(self,name):        oldNode = searchPerson.searchPerson(name)        jsonData = request.get_json(cache=False)        attr={}        for key in jsonData:            attr[key] = jsonData[key]            print(key,' = ',jsonData[key] )        newNode = Person(attr['name'],attr['surname'],attr['age'],attr['country'])          updateOperation.updatePersonNode(oldNode[0],newNode)          return jsonify(result='success')      #DELETE http://localhost:5000/person/Bradley1    def delete(self,name):        node = searchPerson.searchPerson(name)        deleteOperation.deletePersonNode(node[0])        return jsonify(result='success') The MovieService class will define the GET, POST, and DELETE operations for searching, creating, and deleting the Movie nodes. This service will not support the modification of Movie nodes because, once the Movie node is defined, it does not change in our data model. Movie service is similar to our Person service and leverages our data model for performing various operations. The RelationshipService class only defines POST which will create the relationship between the person and other given entity and can either be another Person or Movie. Following is the structure of the POST method: '''    Assuming that the given nodes are already created this operation    will associate Person Node either with another Person or Movie Node.      
Request for Defining relationship between 2 persons: -        POST http://localhost:5000/relationship/person/Bradley        {"entity_type":"person","person.name":"Matthew","relationship": "FRIEND"}    Request for Defining relationship between Person and Movie        POST http://localhost:5000/relationship/person/Bradley        {"entity_type":"Movie","movie.movieName":"Avengers","relationship": "HAS_RATED"          "relationship.ratings":"4"}    '''    def post(self, entity,name):        jsonData = request.get_json(cache=False)        attr={}        for key in jsonData:            attr[key]=jsonData[key]            print(key,' = ',jsonData[key] )          if(entity == 'person'):            startNode = searchPerson.searchPerson(name)            if(attr['entity_type']=='movie'):                endNode = searchMovie.searchMovie(attr['movie.movieName'])                createOperation.createHasRatedRelationship(startNode[0], endNode[0], attr['relationship.ratings'])                createOperation.save('person', startNode[0])            elif (attr['entity_type']=='person' and attr['relationship']=='FRIEND'):                endNode = searchPerson.searchPerson(attr['person.name'])                createOperation.createFriendRelationship(startNode[0], endNode[0])                createOperation.save('person', startNode[0])            elif (attr['entity_type']=='person' and attr['relationship']=='TEACHES'):                endNode = searchPerson.searchPerson(attr['person.name'])                createOperation.createTeachesRelationship(startNode[0], endNode[0])                createOperation.save('person', startNode[0])        else:            raise HTTPException("Value is not Valid")          return jsonify(result='success') At the end, we will define our __main__ method, which will bind our services with the specific URLs and bring up our application: if __name__ == '__main__':    api.add_resource(PersonService,'/person','/person/<string:name>')    api.add_resource(MovieService,'/movie','/movie/<string:movieName>')    api.add_resource(RelationshipService,'/relationship','/relationship/<string:entity>/<string:name>')    webapp.run(debug=True) And we are done!!! Execute our MutateSocialNetworkDataService.py as a regular Python module and your REST-based services are up and running. Users of this app can use any REST-based clients such as SOAP-UI and can execute the various REST services for performing CRUD and search operations. Follow the comments provided in the code samples for the format of the request/response. In this section, we created and exposed REST-based services using Flask, Flask-RESTful, and OGM and performed CRUD and search operations over our social network data model. Using Neomodel in a Django app In this section, we will talk about the integration of Django and Neomodel. Django is a Python-based, powerful, robust, and scalable web-based application development framework. It is developed upon the Model-View-Controller (MVC) design pattern where developers can design and develop a scalable enterprise-grade application within no time. We will not go into the details of Django as a web-based framework but will assume that the readers have a basic understanding of Django and some hands-on experience in developing web-based and database-driven applications. Visit https://docs.djangoproject.com/en/1.7/ if you do not have any prior knowledge of Django. 
Django provides various signals or triggers that are activated and used to invoke or execute some user-defined functions on a particular event. The framework invokes various signals or triggers if there are any modifications requested to the underlying application data model such as pre_save(), post_save(), pre_delete, post_delete, and a few more. All the functions starting with pre_ are executed before the requested modifications are applied to the data model, and functions starting with post_ are triggered after the modifications are applied to the data model. And that's where we will hook our Neomodel framework, where we will capture these events and invoke our custom methods to make similar changes to our Neo4j database. We can reuse our social data model and the functions defined in ExploreSocialDataModel.CreateDataModel. We only need to register our event and things will be automatically handled by the Django framework. For example, you can register for the event in your Django model (models.py) by defining the following statement: signals.pre_save.connect(preSave, sender=Male) In the previous statement, preSave is the custom or user-defined method, declared in models.py. It will be invoked before any changes are committed to entity Male, which is controlled by the Django framework and is different from our Neomodel entity. Next, in preSave you need to define the invocations to the Neomodel entities and save them. Refer to the documentation at https://docs.djangoproject.com/en/1.7/topics/signals/ for more information on implementing signals in Django. Signals in Neomodel Neomodel also provides signals that are similar to Django signals and have the same behavior. Neomodel provides the following signals: pre_save, post_save, pre_delete, post_delete, and post_create. Neomodel exposes the following two different approaches for implementing signals: Define the pre..() and post..() methods in your model itself and Neomodel will automatically invoke it. For example, in our social data model, we can define def pre_save(self) in our Model.Male class to receive all events before entities are persisted in the database or server. Another approach is using Django-style signals, where we can define the connect() method in our Neomodel Model.py and it will produce the same results as in Django-based models: signals.pre_save.connect(preSave, sender=Male) Refer to http://neomodel.readthedocs.org/en/latest/hooks.html for more information on signals in Neomodel. In this section, we discussed about the integration of Django with Neomodel using Django signals. We also talked about the signals provided by Neomodel and their implementation approach. Summary Here we learned about creating web-based applications using Flask. We also used Flasks extensions such as Flask-RESTful for creating/exposing REST APIs for data manipulation. Finally, we created a full blown REST-based application over our social network data using Flask, Flask-RESTful, and py2neo OGM. We also learned about Neomodel and its various features and APIs provided to work with Neo4j. We also discussed about the integration of Neomodel with the Django framework. Resources for Article: Further resources on this subject: Firebase [article] Developing Location-based Services with Neo4j [article] Learning BeagleBone Python Programming [article]

Creating subtle UI details using Midnight.js, Wow.js, and Animate.css

Roberto González
10 Jul 2015
9 min read
Creating animations in CSS or JavaScript is often annoying and/or time-consuming, so most people tend to pay a lot of attention to the content that’s below "the fold" ("the fold" is quickly becoming an outdated concept, but you know what I mean). I’ll be covering a few techniques to help you add some nice touches to your landing pages that only take a few minutes to implement and require pretty much no development work at all. To create a base for this project, I put together a bunch of photographs from https://unsplash.com/ with some text on top so we have something to work with. Download the files from http://aerolab.github.io/subtle-animations/assets/basics.zip and put them in a new folder. You can also check out the final result at http://aerolab.github.io/subtle-animations. Dynamically change your fixed headers using Midnight.js If you took a look at the demo site, you probably noticed that the minimalistic header we are using for "A How To Guide" becomes illegible in very light backgrounds. When this happens in most sites, we typically end up putting a background on the header, which usually improves legibility at the cost of making the design worse. Midnight.js is a jQuery plugin that changes your headers as you scroll, so the header always has a design that matches the content below it. This is particularly useful for minimalistic websites as they often use transparent headers. Implementation is quite simple as the setup is pretty much automatic. Start by adding a fixed header into the site. The example has one ready to go: <nav class="fixed"> <div class="container"> <span class="logo">A How To Guide</span> </div> </nav> Most of the setting up comes in specifying which header corresponds to which section. This is done by adding data-midnight="your-class" to any section or piece of content that requires a different design for the header. For the first section, we’ll be using a white header, so we’ll add data-midnight="white" to this section (it doesn’t have to be only a section, any large element works well). <section class="fjords" data-midnight="white"> <article> <h1>Adding Subtle UI Details</h1> <p>Using Midnight.js, Wow.js and Animate.css</p> </article> </section> In the next section, which is a photo of ships in very thick white fog, we’ll be using a darker header to help improve contrast. Let’s use data-midnight="gray" for the second one and data-midgnight="pink" for the last one, so it feels more in line with the content: <section class="ships" data-midnight="gray"> <article> <h1>Be quiet</h1> <p>I'm hunting wabbits</p> </article> </section> <section class="puppy" data-midnight="pink"> <article> <h1>OMG A PUPPY &lt;3</h1> </article> </section> Now we just need to add some css rules to change the look of the header in those cases. We’ll just be changing the color of the text for the moment, so open up css/styles.css and add the following rules: /* Styles for White, Gray and Pink headers */.midnightHeader.white { color: #fff; } .midnightHeader.gray { color: #999; } .midnightHeader.pink { color: #ffc0cb; } Last but not least, we need to include the necessary libraries. 
We’ll add two libraries right before the end of the body: jQuery and Midnight.js (they are included in the project files inside the js folder): <script src="js/jquery-1.11.1.min.js"></script> <script src="js/midnight.jquery.min.js"></script> Right after that, we start Midnight.js on document.ready, using $('nav.fixed').midnight() (you can change the selector to whatever you are using on your site): <script> $(document).ready(function(){ $('nav.fixed').midnight(); }); </script> If you check the site now, you’ll notice that the fixed header gracefully changes color when you start scrolling into the ships section. It’s a very subtle effect, but it helps keep your designs clean. Bonus Feature! It’s possible to completely change the markup of your header just for a specific section. It’s mostly used to add some visual details that require extra markup, but it can be used to completely alter your headers as necessary. In this case, we’ll be changing the “logo" from "A How To Guide" to "Shhhhhhhhh" on the ships section, and a bunch of hearts for the part of the puppy for additional bad comedy. To do this, we need to alter our fixed header a bit. First we need to identify the “default" header (all headers that don't have custom markup will be based on this one), and then add the markup we need for any custom headers, like the gray one. This is done by creating multiple copies of the header and wrapping them in .midnightHeader.default,.midnightHeader.gray and .midnightHeader.pink respectively: <nav class="fixed"> <div class="midnightHeader default"> <div class="container"> <span class="logo">A How To Guide</span> </div> </div> <div class="midnightHeader gray"> <div class="container"> <span class="logo">Shhhhhhhhh</span> </div> </div> <div class="midnightHeader pink"> <div class="container"> <span class="logo">❤❤❤ OMG PUPPIES ❤❤❤</span> </div> </div> </nav> If you test the site now, you’ll notice that the header not only changes color, but it also changes the "name" of the site to match the section, which gives you more freedom in terms of navigation and design. Simple animations with Wow.js and Animate.css Wow.js looks more like a toy than a serious plugin, but it’s actually a very powerful library that’s extremely easy to implement. Wow.js lets you animate things as they come into view. For instance, you can fade something in when you scroll to that section, letting users enjoy some extra UI candy. You can choose from a large set of animations from Animate.css so you don’t even have to touch the CSS (but you can still do that if you want). To get Wow.JS to work, we have to include just two things: Animate.css, which contains all the animations we need. Of course, you can create your own, or even tweak those to match your tastes. Just add a link to animate.css in the head of the document: <linkrel="stylesheet"href="css/animate.css"/> Wow.JS. This is simply just including the script and initializing it, which is done by adding the following just before the end of the document: <script src="js/wow.min.js"></script> <script>new WOW().init()</script> That’s it! To animate an element as soon as it gets into view, you just need to add the .wow class to that element, and then any animation from Animate.css (like .fadeInUp, .slideInLeft, or one of the many options available at http://daneden.github.io/animate.css/). For example, to make something fade in from the bottom of the screen, you just have to add wow fadeInUp. 
Let’s try this on the h1 our first section: <section class="fjords" data-midnight="white"> <article> <h1 class="wow fadeInUp">Adding Subtle UI Details</h1> <p>Using Midnight.js, Wow.js and Animate.css</p> </article> </section> If you feel like altering the animation slightly, you have quite a bit of control over how it behaves. For instance, let’s fade in the subtitle but do it a few milliseconds after the title, so it follows a sequence. We can use data-wow-delay="0.5s" to make the subtitle wait for half a second before making its appearance: <section class="fjords" data-midnight="white"> <article> <h1 class="wow fadeInUp">Adding Subtle UI Details</h1> <p class="wow fadeInUp" data-wow-delay="0.5s">Using Midnight.js, Wow.js and Animate.css</p> </article> </section> We can even tweak how long the animation takes by using data-wow-duration="1.5s" so it lasts a second and a half. This is particularly useful in the second section, combined with another delay: <section class="ships" data-midnight="gray"> <article> <h1 class="wow fadeIn" data-wow-duration="1.5s">Be quiet</h1> <p class="wow fadeIn" data-wow-delay="0.5s" data-wow-duration="1.5s">I'm hunting wabbits</p> </article> </section> We can even repeat an animation a few times. Let’s make the last title shake a few times as soon as it gets into view with data-wow-iteration="5". We'll take this opportunity to use all the properties, like data-wow-duration="0.5s" to make each shake last half a second, and we'll also add a large delay for the last piece so it appears after the main animation has finished: <section class="puppy"> <article> <h1 class="wow shake" data-wow-iteration="5" data-wow-duration="0.5s">OMG A PUPPY &lt;3</h1> <p class="wow fadeIn" data-wow-delay="2.5s">Ok, this one wasn't subtle at all</p> </article> </section> Summary That’s pretty much all there is to know about using Midnight.js, Wow.js and Animate.css! All you need to do now is find a project and experiment a bit with different animations. It’s a great tool to add some last-minute eye candy and - as long as you don’t overdo it - looks fantastic on most sites. I hope you enjoyed the article! About the author Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well coded design for the best digital products."He can be reached at @robertcode. From the 11th to 17th April, save 50% on top web development eBooks and 70% on our specially selected video courses. From Angular 2 to React and much more, find them all here.

Responsive Web Design with WordPress

Packt
09 Jul 2015
13 min read
Welcome to the world of the Responsive Web Design! This article is written by Dejan Markovic, author of the book WordPress Responsive Theme Design, and it will introduce you to the Responsive Web Design and its concepts and techniques. It will also present crisp notes from WordPress Responsive Theme Design. (For more resources related to this topic, see here.) Responsive web design (RWD) is a web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from mobile phones to desktop computer monitors). Reference: http://en.wikipedia.org/wiki/Responsive_web_design. To say it simply, responsive web design (RWD) means that the responsive website should adapt to the screen size of the device it is being viewed on. When I began my web development journey in 2002, we didn't have to consider as many factors as we do today. We just had to create the website for a 17-inch screen (which was the standard at that time), and that was it. Yes, we also had to consider 15, 19, and 21-inch monitors, but since the 17-inch screen was the standard, that was the target screen size for us. In pixels, these sizes were usually 800 or 1024. We also had to consider a fewer number of browsers (Internet Explorer, Netscape, and Opera) and the styling for the print, and that was it. Since then, a lot of things have changed, and today, in 2015, for a website design, we have to consider multiple factors, such as: A lot of different web browsers (Internet Explorer, Firefox, Opera, Chrome, and Safari) A number of different operating systems (Windows (XP, 7, and 8), Mac OS X, Linux, Unix, iOS, Android, and Windows phones) Device screen sizes (desktop, mobile, and tablet) Is content accessible and readable with screen readers? How the content will look like when it's printed? Today, creating different design for all these listed factors & devices would take years. This is where a responsive web design comes to the rescue. The concepts of RWD I have to point out that the mobile environment is becoming more important factor than the desktop environment. Mobile browsing is becoming bigger than the desktop-based access, which makes the mobile environment very important factor to consider when developing a website. Simply put, the main point of RWD is that the layout changes based on the size and capabilities of the device its being viewed on. The concepts of RWD, that we will learn next, are: Viewport, scaling and screen density. Controlling Viewport On the desktop, Viewport is the screen size of the window in a browser. For example, when we resize the browser window, we are actually changing the Viewport size. On mobile devices, the Viewport size is also independent of the device screen size. For example, Viewport is 850 px for mobile Opera and 980 px for mobile Safari, and the screen size for iPhone is 320 px. If we compare the Viewport size of 980 px and the screen size of an iPhone of 320 px, we can see that Viewport is bigger than the screen size. This is because mobile browsers function differently. They first load the page into Viewport, and then they resize it to the device's screen size. This is why we are able to see the whole page on the mobile device. If the mobile browsers had Viewport the same as the screen size (320 px), we would be able to see only a part of the page on the mobile device. 
In the following screenshot, we can see the table with the list of Viewport sizes for some iPhone models: We can control Viewport with CSS: @viewport {width: device-width;} Or, we can control it with the meta tag: <meta name="viewport" content="width=device-width"> In the preceding code, we are matching the Viewport width with the device width. Because the Viewport meta tag approach is more widely adopted, as it was first used on iOS and the @viewport approach was not supported by some browsers, we will use the meta tag approach. We are setting the Viewport width in order to match our web content with our mobile content, as we want to make sure that our web content looks good on a mobile device as well. We can set Viewports in the code for each device separately, for example, 320 px for the iPhone. The better approach will be to use content="width=device-width". Scaling Scaling is extremely important, as the initial scale controls the zoom aspect of the content for the initial look of the page. For example, if the initial scale is set to 3, the content will be loaded in the size of 3 times of the Viewport size, which means 3 times zoom. Here is the look of the screenshot for initial-scale=1 and initial-scale=3: As we can see from the preceding screenshots, on the initial scale 3 (three times zoom), the logo image takes the bigger part of the screen. It is important to note that this is just the initial scale, which means that the user can zoom in and zoom out later, if they want to. Here is the example of the code with the initial scale: <meta name="viewport" content="width=device-width, initial- scale=1, maximum-scale=1"> In this example, we have used the maximum-scale=1 option, which means that the user will not be able to use the zoom here. We should avoid using the maximum-scale property because of accessibility issues. If we forbid zooming on our pages, users with visual problems will not be able to see the content properly. The screen density As the screen technology is going forward every year or even faster than that, we have to consider the screen density aspect as well. Screen density is the number of pixels that are contained within a screen area. This means that if the screen density is higher, we can have more details, in this case, pixels in the same area. There are two measurements that are usually used for this, dots per inch (DPI) and pixels per inch (PPI). DPI means how many drops a printer can place in an inch of a space. PPI is the number of pixels we can have in one inch of the screen. If we go back to the preceding screenshot with the table where we are showing Viewports and densities and compare the values of iPhone 3G and iPhone 4S, we will see that the screen size stayed the same at 3.5 inch, Viewport stayed the same at 320 px, but the screen density has doubled, from 163 dpi to 326 dpi, which means that the screen resolution also has doubled from 320x480 to 640x960. The screen density is very relevant to RWD, as newer devices have bigger densities and we should do our best to cover as many densities as we can in order to provide a better experience for end users. Pixels' density matters more than the resolution or screen size, because more pixels is equal to sharper display: There are topics that need to be taken into consideration, such as hardware, reference pixels, and the device-pixel-ratio, too. Problems and solutions with the screen density Scalable vector graphics and CSS graphics will scale to the resolution. 
This is why I recommend using Font Awesome icons in your project. Font Awesome icons are available for download at: http://fortawesome.github.io/Font-Awesome/icons/. Font Icons is a font that is made up of symbols, icons, or pictograms (whatever you prefer to call them) that you can use in a webpage just like a font. They can be instantly customized with properties like: size, drop shadow, or anything you want can be done with the power of CSS. The real problem triggered by the change in the screen density is images, as for high-density screens, we should provide higher resolution images. There are several ways through which we can approach this problem: By targeting high-density screens (providing high-resolution images to all screens) By providing high-resolution images where appropriate (loading high-resolution images only on devices with high-resolution screens) By not using high-resolution images For the beginner developers I will recommend using second approach, providing high-resolution images where appropriate. Techniques in RWD RWD consists of three coding techniques: Media queries (adapt content to specific screen sizes) Fluid grids (for flexible layouts) Flexible images and media (that respond to changes to screen sizes) More detailed information about RWD techniques by Ethan Marcote, who coined the term Reponsive Web Design, is available at http://alistapart.com/article/responsive-web-design. Media queries Media queries are CSS modules, or as some people like to say, just a conditional statements, which are telling tells the browsers to use a specific type of style, depending on the size of the screen and other factors, such as print (specific styles for print). They are here for a long time already, as I was using different styles for print in 2002. If you wish to know more about media queries, refer to W3C Candidate Recommendation 8 July 2002 at http://www.w3.org/TR/2002/CR-css3-mediaqueries-20020708/. Here is an example of media query declaration: @media only screen and (min-width:500px) { font-family: sans-serif; } Let's explain the preceding code: The @media code means that it is a media type declaration. The screen and part of the query is an expression or condition (in this case, it means only screen and no print). The following conditional statement means that everything above 500 px will have the font family of sans serif: (min-width:500px) { font-family: sans-serif; } Here is another example of a media query declaration: @media only screen and (min-width: 500px), screen and (orientation: portrait) { font-family: sans-serif; } In this case, if we have two statements and if one of the statements is true, the entire declaration is applied (either everything above 50 px or the portrait orientation will be applied to the screen). The only keyword hides the styles from older browsers. As some older browsers don't support media queries, I recommend using a respond.js script, which will "patch" support for them. Polyfill (or polyfiller) is code that provides features that are not built or supported by some web browsers. For example, a number of HTML5 features are not supported by older versions of IE (older than 8 or 9), but these features can be used if polyfill is installed on the web page. This means that if the developer wants to use these features, he/she can just include that polyfill library and these features will work in older browsers. 
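One common way to load such a polyfill only for the browsers that need it is an Internet Explorer conditional comment in the page head; the path to respond.min.js below is just an assumed location in your project:

<!-- Load the media query polyfill only in IE 8 and older -->
<!--[if lt IE 9]>
  <script src="js/respond.min.js"></script>
<![endif]-->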
Breakpoints

A breakpoint is the point at which the layout switches from one layout to another when some condition is fulfilled, for example, when the screen is resized. Almost all responsive designs cover the changes between desktop, tablet, and smartphone screens. Here is an example with comments inside:

@media only screen and (max-width: 480px) {
  /* mobile styles */
  /* up to 480px size */
}

The media query in the preceding code will only be used if the width of the screen is 480 px or less.

@media only screen and (min-width: 481px) and (max-width: 768px) {
  /* tablet styles */
  /* between 481 and 768px */
}

The media query in the preceding code will only be used if the width of the screen is between 481 px and 768 px.

@media only screen and (min-width: 769px) {
  /* desktop styles */
  /* from 769px and up */
}

The media query in the preceding code will only be used when the width of the screen is 769 px or more.

The minimum width value in the desktop styles is 1 pixel over the maximum width value in the tablet styles, and the same difference exists between the tablet and mobile values. We do this in order to avoid overlapping, as that could cause problems with our styles.

There is also an approach that sets the maximum and minimum widths with em values. Using em values means that the breakpoint widths are set relative to the device's font size. If the font size for the device is 16 px (which is the usual default), the maximum width for the mobile styles would be 480/16 = 30 em. Why do we use em values? With pixel sizes, everything is fixed; for example, h1 is 19 px (roughly 1.2 em of the default size of 16 px), and that's it. With em sizes, everything is relative, so if we change the default value in the browser from, for example, 16 px to 18 px, everything relative to that will change. Therefore, all h1 values will change from 19 px to 22 px, making our layout "zoomable". Here is the example with the sizes changed to em:

@media only screen and (max-width: 30em) {
  /* mobile styles */
  /* up to 480px size */
}

@media only screen and (min-width: 30em) and (max-width: 48em) {
  /* tablet styles */
  /* between 481 and 768px */
}

@media only screen and (min-width: 48em) {
  /* desktop styles */
  /* from 769px and up */
}

Fluid grids

The major point of RWD is that the content should adapt to any screen it's viewed on. One of the best ways to do this is to use fluid layouts, where our content can be resized at each breakpoint. In fluid grids, we define a maximum layout size for the design. The grid is divided into a specific number of columns to keep the layout clean and easy to handle. Then we design each element with proportional widths and heights instead of pixel-based dimensions, so whenever the device or screen size changes, elements adjust their widths and heights by the specified proportions of their parent container. Reference: http://www.1stwebdesigner.com/tutorials/fluid-grids-in-responsive-design/.

To make the grid flexible (or elastic), we can use percentage values, or we can use em values, whichever suits us better. We can make our own fluid grids, or we can use grid frameworks. As there are so many frameworks available, I would recommend that you use an existing framework rather than building your own. Grid frameworks can use a single grid that covers various screen sizes, or we can have multiple grids for each of the breakpoints or screen-size categories, such as mobiles, tablets, and desktops. Some of the notable frameworks are Twitter's Bootstrap, Foundation, and SemanticUI.
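Before reaching for a framework, it is worth seeing how little is needed for a basic fluid layout. The following sketch is illustrative only (the class names and column proportions are made up); it combines percentage widths with one of the breakpoints defined above:

.row {
  width: 100%;
  max-width: 960px;   /* the maximum layout size for the design */
  margin: 0 auto;
}

.col-main    { float: left; width: 65%; }
.col-sidebar { float: left; width: 35%; }

/* Below the tablet breakpoint, stack the columns full width */
@media only screen and (max-width: 48em) {
  .col-main,
  .col-sidebar { float: none; width: 100%; }
}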
Of these frameworks, I prefer Twitter's Bootstrap, as it really helps me speed up the process and it is currently the most widely used framework.

Flexible images and media

Last but not least are images and media (videos). The problem with them is that they are elements that come with fixed sizes. There are several approaches to fix this:

Replacing dimensions with percentage values
Using maximum widths
Using background images, but only in some cases, as these are not good for accessibility
Using libraries such as Scott Jehl's picturefill (https://github.com/scottjehl/picturefill)
Taking the width and height attributes out of the image tag and dealing with dimensions in CSS

Summary

In this article, you learned about RWD concepts such as the Viewport, scaling, and screen density. We also covered the RWD techniques: media queries, fluid grids, and flexible media.

Resources for Article: Further resources on this subject: Deployment Preparations [article] Why Meteor Rocks! [article] Clustering and Other Unsupervised Learning Methods [article]

How to build a Remote-controlled TV with Node-Webkit

Roberto González
08 Jul 2015
14 min read
Node-webkit is one of the most promising technologies to come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux just using HTML, CSS, and some JavaScript. These are the exact same languages you use to build any web app. You basically get your very own Frameless Webkit to build your app, which is then supercharged with NodeJS, giving you access to some powerful libraries that are not available in a typical browser. As a demo, we are going to build a remote-controlled Youtube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app. Getting started First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project. Since we are going to be building two apps (a desktop app and a mobile app), it’s better if we get the boring HTML+CSS part out of the way, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project’s folder youtube-tv  or whatever you want. The folder should look like this: - index.html // This is the starting point for our desktop app - css // Our desktop app styles - js // This is where the magic happens - remote // This is where the magic happens (Part 2) - libraries // FFMPEG libraries, which give you H.264 video support in Node-Webkit - player // Our youtube player - Gruntfile.js // Build scripts - run.bat // run.bat runs the app on Windows - run.sh // sh run.sh runs the app on Mac Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. Now we’ll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install: On Mac or Linux: sudo npm install node-gyp -g sudo npm install grunt-cli -g  On Windows: npm install node-gyp -g npm install grunt-cli -g Leave the Terminal open. We’ll be using it again in a bit. All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format: { "//": "The // keys in package.json are comments.", "//": "Your project’s name. Go ahead and change it!", "name": "Remote", "//": "A simple description of what the app does.", "description": "An example of node-webkit", "//": "This is the first html the app will load. Just leave this this way", "main": "app://host/index.html", "//": "The version number. 
0.0.1 is a good start :D", "version": "0.0.1", "//": "This is used by Node-Webkit to set up your app.", "window": { "//": "The Window Title for the app", "title": "Remote", "//": "The Icon for the app", "icon": "css/images/icon.png", "//": "Do you want the File/Edit/Whatever toolbar?", "toolbar": false, "//": "Do you want a standard window around your app (a title bar and some borders)?", "frame": true, "//": "Can you resize the window?", "resizable": true}, "webkit": { "plugin": false, "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36" }, "//": "These are the libraries we’ll be using:", "//": "Express is a web server, which will handle the files for the remote", "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.", "dependencies": { "express": "^4.9.5", "socket.io": "^1.1.0" }, "//": "And these are just task handlers to make things easier", "devDependencies": { "grunt": "^0.4.5", "grunt-contrib-copy": "^0.6.0", "grunt-node-webkit-builder": "^0.1.21" } } You’ll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look into it, but it’s mostly boilerplate code. Once you’ve set everything up, go back to the Terminal and install everything you need by typing: npm install grunt nodewebkitbuild You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild. npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-nodewebkitbuild, which downloads the Windows and Mac version of node-webkit, setting them up so they can play videos, and building the app. Wait a bit for everything to install properly and we’re ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it. Building the desktop app All web apps (or websites for that matter) start with an index.html file. We are going to be creating just that to get our app to run: <!DOCTYPE html><html> <head> <metacharset="utf-8"/> <title>Youtube TV</title> <linkhref='http://fonts.googleapis.com/css?family=Roboto:500,400'rel='stylesheet'type='text/css'/> <linkhref="css/normalize.css"rel="stylesheet"type="text/css"/> <linkhref="css/styles.css"rel="stylesheet"type="text/css"/> </head> <body> <divid="serverInfo"> <h1>Youtube TV</h1> </div> <divid="videoPlayer"> </div> <script src="js/jquery-1.11.1.min.js"></script> <script src="js/youtube.js"></script> <script src="js/app.js"></script> </body> </html> As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a Youtube video player, and finally app.js, which contains our app's logic. Let’s dive into that! First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search Youtube, select a video, and have some play/pause controls so we don’t have any good reasons to get up from the couch. Open js/app.js and type the following: // Show the Developer Tools. And yes, Node-Webkit has developer tools built in! 
Uncomment it to open it automatically//require('nw.gui').Window.get().showDevTools(); // Express is a web server, will will allow us to create a small web app with which to control the playervar express = require('express'); var app = express(); var server = require('http').Server(app); var io = require('socket.io')(server); // We'll be opening up our web server on Port 8080 (which doesn't require root privileges)// You can access this server at http://127.0.0.1:8080var serverPort =8080; server.listen(serverPort); // All the static files (css, js, html) for the remote will be served using Express.// These assets are in the /remote folderapp.use('/', express.static('remote')); With those 7 lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let’s set that up next in app.js: // Socket.io handles the communication between the remote and our app in real time, // so we can instantly send commands from a computer to our remote and backio.on('connection', function (socket) { // When a remote connects to the app, let it know immediately the current status of the video (play/pause)socket.emit('statusChange', Youtube.status); // This is what happens when we receive the watchVideo command (picking a video from the list)socket.on('watchVideo', function (video) { // video contains a bit of info about our video (id, title, thumbnail)// Order our Youtube Player to watch that video Youtube.watchVideo(video); }); // These are playback controls. They receive the “play” and “pause” events from the remotesocket.on('play', function () { Youtube.playVideo(); }); socket.on('pause', function () { Youtube.pauseVideo(); }); }); // Notify all the remotes when the playback status changes (play/pause)// This is done with io.emit, which sends the same message to all the remotesYoutube.onStatusChange =function(status) { io.emit('statusChange', status); }; That’s the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handling some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect so they can update their UI with the correct buttons (if it’s playing, show the pause button and vice versa). Now we just need to build the remote. Building the remote control The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it’s able to communicate with our app. 
In remote/index.html, add the following HTML: <!DOCTYPE html><html> <head> <metacharset=“utf-8”/> <title>TV Remote</title> <metaname="viewport"content="width=device-width, initial-scale=1, maximum-scale=1"/> <linkrel="stylesheet"href="/css/normalize.css"/> <linkrel="stylesheet"href="/css/styles.css"/> </head> <body> <divclass="controls"> <divclass="search"> <inputid="searchQuery"type="search"value=""placeholder="Search on Youtube..."/> </div> <divclass="playback"> <buttonclass="play">&gt;</button> <buttonclass="pause">||</button> </div> </div> <divid="results"class="video-list"> </div> <divclass="__templates"style="display:none;"> <articleclass="video"> <figure><imgsrc=""alt=""/></figure> <divclass="info"> <h2></h2> </div> </article> </div> <script src="/socket.io/socket.io.js"></script> <script src="/js/jquery-1.11.1.min.js"></script> <script src="/js/search.js"></script> <script src="/js/remote.js"></script> </body> </html> Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the Youtube API (you can take a look if you want), and remote.js handles the logic for the remote. The remote itself is pretty simple. It can look for videos on Youtube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let’s dive into remote/js/remote.js to make this thing work: // First of all, connect to the server (our desktop app)var socket = io.connect(); // Search youtube when the user stops typing. This gives us an automatic search.var searchTimeout =null; $('#searchQuery').on('keyup', function(event){ clearTimeout(searchTimeout); searchTimeout = setTimeout(function(){ searchYoutube($('#searchQuery').val()); }, 500); }); // When we click on a video, watch it on the App$('#results').on('click', '.video', function(event){ // Send an event to notify the server we want to watch this videosocket.emit('watchVideo', $(this).data()); }); // When the server tells us that the player changed status (play/pause), alter the playback controlssocket.on('statusChange', function(status){ if( status ==='play' ) { $('.playback .pause').show(); $('.playback .play').hide(); } elseif( status ==='pause'|| status ==='stop' ) { $('.playback .pause').hide(); $('.playback .play').show(); } }); // Notify the app when we hit the play button$('.playback .play').on('click', function(event){ socket.emit('play'); }); // Notify the app when we hit the pause button$('.playback .pause').on('click', function(event){ socket.emit('pause'); }); This is very similar to our server, except we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handle our basic play/pause controls. The only thing left to do is make the app run. Ready? Go to the terminal again and type: If you are on a Mac: sh run.sh If you are on Windows: run.bat If everything worked properly, you should be both seeing the app and if you open a web browser to http://127.0.0.1:8080 the remote client will open up. Search for a video, pick anything you like, and it’ll play in the app. This also works if you point any other device on the same network to your computer’s IP, which brings me to the next (and last) point. 
Finishing touches There is one small improvement we can make: print out the computer’s IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone). On js/app.js add the following code to find out the IP and update our UI so it’s the first thing we see when we open the app: // Find the local IPfunction getLocalIP(callback) { require('dns').lookup( require('os').hostname(), function (err, add, fam) { typeof callback =='function'? callback(add) :null; }); } // To make things easier, find out the machine's ip and communicate itgetLocalIP(function(ip){ $('#serverInfo h1').html('Go to<br/><strong>http://'+ip+':'+serverPort+'</strong><br/>to open the remote'); }); The next time you run the app, the first thing you’ll see is the IP for your computer, so you just need to type that URL in your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are in the same Wi-Fi network). That's it! You can start expanding on this to improve the app: Why not open the app on a fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar. Summary While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post! About the author Roberto González is the co-founder of Aerolab, “an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products”. He can be reached at @robertcode.

Why Meteor Rocks!

Packt
08 Jul 2015
23 min read
In this article by Isaac Strack, the author of the book, Getting Started with Meteor.js JavaScript Framework - Second Edition, has discussed some really amazing features of Meteor that has contributed a lot to the success of Meteor. Meteor is a disruptive (in a good way!) technology. It enables a new type of web application that is faster, easier to build, and takes advantage of modern techniques, such as Full Stack Reactivity, Latency Compensation, and Data On The Wire. (For more resources related to this topic, see here.) This article explains how web applications have changed over time, why that matters, and how Meteor specifically enables modern web apps through the above-mentioned techniques. By the end of this article, you will have learned: What a modern web application is What Data On The Wire means and how it's different How Latency Compensation can improve your app experience Templates and Reactivity—programming the reactive way! Modern web applications Our world is changing. With continual advancements in displays, computing, and storage capacities, things that weren't even possible a few years ago are now not only possible but are critical to the success of a good application. The Web in particular has undergone significant change. The origin of the web app (client/server) From the beginning, web servers and clients have mimicked the dumb terminal approach to computing where a server with significantly more processing power than a client will perform operations on data (writing records to a database, math calculations, text searches, and so on), transform the data and render it (turn a database record into HTML and so on), and then serve the result to the client, where it is displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or a dumb terminal. This design pattern for this is called…wait for it…the client/server design pattern. The diagrammatic representation of the client-server architecture is shown in the following diagram: This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it and has continued to be the design pattern that we think of when we think of the Internet. The rise of the machines (MVC) Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got even more beefy. At the same time, the Web was coming alive, and people started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app—quite the opposite of a dumb terminal—was called a smart app. Many business-oriented smart apps were created, but the easiest examples can be found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (or acts as a controller) and transforms the data into something to be displayed (the view). This design pattern is simple but very effective. It's called the Model View Controller (MVC) pattern. 
The model is essentially the data for an application. In the context of a smart app, the model is provided by a server. The client makes requests to the server for data and stores that data as the model. Once the client has a model, it performs actions/logic on that data and then prepares it to be displayed on the screen. This part of the application (talking to the server, modifying the data model, and preparing data for display) is called the controller. The controller sends commands to the view, which displays the information. The view also reports back to the controller when something happens on the screen (a button click, for example). The controller receives the feedback, performs the logic, and updates the model. Lather, rinse, repeat! Since web browsers were built to be "dumb clients", the idea of using a browser as a smart app back then was out of question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app. Sometimes, you could run the app inside the browser, and sometimes, you would download it first, but either way, you were running a new type of web app where the client application could talk to the server and share the processing workload. The browser grows up Beginning in the early 2000s, a new twist on the MVC pattern started to emerge. Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server code (controller) was performing business logic against the database (model) through the use of business objects and then sending processed/rendered data to the client application (a "view"). The client was receiving this data from the server and treating it as its own personal "model". The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen. So, the "view" for the server MVC was the "model" for the client MVC. As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that used the Nested MVC design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application using only JavaScript. There is no longer any need to download multiple frameworks or separate apps. You can now get the same functionality from visiting a URL as you could previously by buying a packaged product. A giant Meteor appears! Meteor takes modern web apps to the next level. It enhances and builds upon the nested MVC design pattern by implementing three key features: Data On The Wire through the Distributed Data Protocol (DDP) Latency Compensation with Mini Databases Full Stack Reactivity with Blaze and Tracker Let's walk through these concepts to see why they're valuable, and then, we'll apply them to our Lending Library application. Data On The Wire The concept of Data On The Wire is very simple and in tune with the nested MVC pattern; instead of having a server process everything, render content, and then send HTML across the wire, why not just send the data across the wire and let the client decide what to do with it? This concept is implemented in Meteor using the Distributed Data Protocol, or DDP. DDP has a JSON-based syntax and sends messages similar to the REST protocol. Additions, deletions, and changes are all sent across the wire and handled by the receiving service/client/device. 
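To make that a little more tangible, here is roughly what a few DDP messages look like on the wire when a client subscribes to a collection and documents change; the collection name, document ID, and field values below are made up for illustration:

{"msg": "sub", "id": "1", "name": "lists", "params": []}

{"msg": "added", "collection": "lists", "id": "XhNdek5sKFjKHiuGk",
 "fields": {"Category": "Games"}}

{"msg": "changed", "collection": "lists", "id": "XhNdek5sKFjKHiuGk",
 "fields": {"Category": "Board Games"}}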
Since DDP uses WebSockets rather than HTTP, the data can be pushed whenever changes occur. But the true beauty of DDP lies in the generic nature of the communication. It doesn't matter what kind of system sends or receives data over DDP—it can be a server, a web service, or a client app—they all use the same protocol to communicate. This means that none of the systems know (or care) whether the other systems are clients or servers. With the exception of the browser, any system can be a server, and without exception, any server can act as a client. All the traffic looks the same and can be treated in a similar manner. In other words, the traditional concept of having a single server for a single client goes away. You can hook multiple servers together, each serving a discreet purpose, or you can have a client connect to multiple servers, interacting with each one differently. Think about what you can do with a system like that: Imagine multiple systems all coming together to create, for example, a health monitoring system. Some systems are built with C++, some with Arduino, some with…well, we don't really care. They all speak DDP. They send and receive data on the wire and decide individually what to do with that data. Suddenly, very difficult and complex problems become much easier to solve. DDP has been implemented in pretty much every major programming language, allowing you true freedom to architect an enterprise application. Latency Compensation Meteor employs a very clever technique called Mini Databases. A mini database is a "lite" version of a normal database that lives in the memory on the client side. Instead of the client sending requests to a server, it can make changes directly to the mini database on the client. This mini database then automatically syncs with the server (using DDP of course), which has the actual database. Out of the box, Meteor uses MongoDB and Minimongo: When the client notices a change, it first executes that change against the client-side Minimongo instance. The client then goes on its merry way and lets the Minimongo handlers communicate with the server over DDP. If the server accepts the change, it then sends out a "changed" message to all connected clients, including the one that made the change. If the server rejects the change, or if a newer change has come in from a different client, the Minimongo instance on the client is corrected, and any affected UI elements are updated as a result. All of this doesn't seem very groundbreaking, but here's the thing—it's all asynchronous, and it's done using DDP. This means that the client doesn't have to wait until it gets a response back from the server. It can immediately update the UI based on what is in the Minimongo instance. What if the change was illegal or other changes have come in from the server? This is not a problem as the client is updated as soon as it gets word from the server. Now, what if you have a slow internet connection or your connection goes down temporarily? In a normal client/server environment, you couldn't make any changes, or the screen would take a while to refresh while the client waits for permission from the server. However, Meteor compensates for this. Since the changes are immediately sent to Minimongo, the UI gets updated immediately. So, if your connection is down, it won't cause a problem: All the changes you make are reflected in your UI, based on the data in Minimongo. 
When your connection comes back, all the queued changes are sent to the server, and the server will send authorized changes to the client. Basically, Meteor lets the client take things on faith. If there's a problem, the data coming in from the server will fix it, but for the most part, the changes you make will be ratified and broadcast by the server immediately. Coding this type of behavior in Meteor is crazy easy (although you can make it more complex and therefore more controlled if you like): lists = new Mongo.Collection("lists"); This one line declares that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on these change requests. Wow, one line of code that does all that! Of course, there is more to it, but that's beyond the scope of this article, so we'll move on. To better understand Meteor data synchronization, see the Publish and subscribe section of the meteor documentation at http://docs.meteor.com/#/full/meteor_publish. Full Stack Reactivity Reactivity is integral to every part of Meteor. On the client side, Meteor has the Blaze library, which uses HTML templates and JavaScript helpers to detect changes and render the data in your UI. Whenever there is a change, the helpers re-run themselves and add, delete, and change UI elements, as appropriate, based on the structure found in the templates. These functions that re-run themselves are called reactive computations. On both the client and the server, Meteor also offers reactive computations without having to use a UI. Called the Tracker library, these helpers also detect any data changes and rerun themselves accordingly. Because both the client and the server are JavaScript-based, you can use the Tracker library anywhere. This is defined as isomorphic or full stack reactivity because you're using the same language (and in some cases the same code!) on both the client and the server. Re-running functions on data changes has a really amazing benefit for you, the programmer: you get to write code declaratively, and Meteor takes care of the reactive part automatically. Just tell Meteor how you want the data displayed, and Meteor will manage any and all data changes. This declarative style is usually accomplished through the use of templates. Templates work their magic through the use of view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. Let's look at a very simple data binding—one for which you don't technically need Meteor—to illustrate the point. Let's perform the following set of steps to understand the concept in detail: In LendLib.html, you will see an HTML-based template expression: <div id="categories-container">      {{> categories}}   </div> This expression is a placeholder for an HTML template that is found just below it: <template name="categories">    <h2 class="title">my stuff</h2>.. So, {{> categories}} is basically saying, "put whatever is in the template categories right here." And the HTML template with the matching name is providing that. 
If you want to see how data changes will affect the display, change the h2 tag to an h4 tag and save the change: <template name="categories">    <h4 class="title">my stuff</h4> You'll see the effect in your browser. (my stuff will become itsy bitsy.) That's view data binding at work. Change the h4 tag back to an h2 tag and save the change, unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you! Alright, now that we know what a view data binding is, let's see how Meteor uses it. Inside the categories template in LendLib.html, you'll find even more templates: <template name="categories"> <h4 class="title">my stuff</h4> <div id="categories" class="btn-group">    {{#each lists}}      <div class="category btn btn-primary">        {{Category}}      </div>    {{/each}} </div> </template> Meteor uses a template language called Spacebars to provide instructions inside templates. These instructions are called expressions, and they let us do things like add HTML for every record in a collection, insert the values of properties, and control layouts with conditional statements. The first Spacebars expression is part of a pair and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, it tells it to make a new div element) for each item in the lists collection. lists is the piece of data, and {{#each lists}} is the placeholder. Now, inside the {{#each lists}} expression, there is one more Spacebars expression: {{Category}} Since the expression is found inside the #each expression, it is considered a property. That is to say that {{Category}} is the same as saying this.Category, where this is the current item in the for-each loop. So, the placeholder is saying, "add the value of the Category property for the current record." Now, if we look in LendLib.js, we will see the reactive values (called reactive contexts) behind the templates: lists : function () { return lists.find(... Here, Meteor is declaring a template helper named lists. The helper, lists, is found inside the template helpers belonging to categories. The lists helper happens to be a function that returns all the data in the lists collection, which we defined previously. Remember this line? lists = new Mongo.Collection("lists"); This lists collection is returned by the above-mentioned helper. When there is a change to the lists collection, the helper gets updated and the template's placeholder is changed as well. Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line: > lists.insert({Category:"Games"}); This will update the lists data collection. The template will see this change and update the HTML code/placeholder. Each of the placeholders will run one additional time for the new entry in lists, and you'll see the following screen: When the lists collection was updated, the Template.categories.lists helper detected the change and reran itself (recomputed). This changed the contents of the code meant to be displayed in the {{> categories}} placeholder. Since the contents were changed, the affected part of the template was re-run. Now, take a minute here and think about how little we had to do to get this reactive computation to run: we simply created a template, instructing Blaze how we want the lists data collection to be displayed, and we put in a placeholder. 
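To see the whole reactive chain in one place, here is a consolidated sketch of the pieces discussed above, trimmed to the essentials. First, the template in LendLib.html that Blaze renders reactively:

<template name="categories">
  <h2 class="title">my stuff</h2>
  <div id="categories" class="btn-group">
    {{#each lists}}
      <div class="category btn btn-primary">{{Category}}</div>
    {{/each}}
  </div>
</template>

And the collection plus the helper behind it in LendLib.js:

lists = new Mongo.Collection("lists");

if (Meteor.isClient) {
  Template.categories.helpers({
    lists: function () {
      return lists.find(); // reactive: the helper reruns whenever `lists` changes
    }
  });
}

Inserting a document from the browser console, as we did above, is all it takes to make the helper recompute and the new category appear.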
This is simple, declarative programming at its finest! Let's create some templates We'll now see a real-life example of reactive computations and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so that we can do that on the page instead as follows: Open LendLib.html and add a new button just before the {{#each lists}} expression: <div id="categories" class="btn-group"> <div class="category btn btn-primary" id="btnNewCat">    <span class="glyphicon glyphicon-plus"></span> </div> {{#each lists}} This will add a plus button on the page, as follows: Now, we want to change the button into a text field when we click on it. So let's build that functionality by using the reactive pattern. We will make it based on the value of a variable in the template. Add the following {{#if…else}} conditionals around our new button: <div id="categories" class="btn-group"> {{#if new_cat}} {{else}}    <div class="category btn btn-primary" id="btnNewCat">      <span class="glyphicon glyphicon-plus"></span>    </div> {{/if}} {{#each lists}} The first line, {{#if new_cat}}, checks to see whether new_cat is true or false. If it's false, the {{else}} section is triggered, and it means that we haven't yet indicated that we want to add a new category, so we should be displaying the button with the plus sign. In this case, since we haven't defined it yet, new_cat will always be false, and so the display won't change. Now, let's add the HTML code to display when we want to add a new category: {{#if new_cat}} <div class="category form-group" id="newCat">      <input type="text" id="add-category" class="form-control" value="" />    </div> {{else}} ... {{/if}} There's the smallest bit of CSS we need to take care of as well. Open ~/Documents/Meteor/LendLib/LendLib.css and add the following declaration: #newCat { max-width: 250px; } Okay, so now we've added an input field, which will show up when new_cat is true. The input field won't show up unless it is set to true; so, for now, it's hidden. So, how do we make new_cat equal to true? Save your changes if you haven't already done so, and open LendLib.js. First, we'll declare a Session variable, just below our Meteor.isClient check function, at the top of the file: if (Meteor.isClient) { // We are declaring the 'adding_category' flag Session.set('adding_category', false); Now, we'll declare the new template helper new_cat, which will be a function returning the value of adding_category. We need to place the new helper in the Template.categories.helpers() method, just below the declaration for lists: Template.categories.helpers({ lists: function () {    ... }, new_cat: function(){    //returns true if adding_category has been assigned    //a value of true    return Session.equals('adding_category',true); } }); Note the comma (,) on the line above new_cat. It's important that you add that comma, or your code will not execute. Save these changes, and you'll see that nothing has changed. Ta-da! In reality, this is exactly as it should be because we haven't done anything to change the value of adding_category yet. Let's do this now: First, we'll declare our click event handler, which will change the value in our Session variable. To do this, add the following highlighted code just below the Template.categories.helpers() block: Template.categories.helpers({ ... 
}); Template.categories.events({ 'click #btnNewCat': function (e, t) {    Session.set('adding_category', true);    Tracker.flush();    focusText(t.find("#add-category")); } }); Now, let's take a look at the following line of code: Template.categories.events({ This line declares that events will be found in the category template. Now, let's take a look at the next line: 'click #btnNewCat': function (e, t) { This tells us that we're looking for a click event on the HTML element with an id="btnNewCat" statement (which we already created in LendLib.html). Session.set('adding_category', true); Tracker.flush(); focusText(t.find("#add-category")); Next, we set the Session variable, adding_category = true, flush the DOM (to clear up anything wonky), and then set the focus onto the input box with the id="add-category" expression. There is one last thing to do, and that is to quickly add the focusText(). helper function. To do this, just before the closing tag for the if (Meteor.isClient) function, add the following code: /////Generic Helper Functions///// //this function puts our cursor where it needs to be. function focusText(i) { i.focus(); i.select(); }; } //<------closing bracket for if(Meteor.isClient){} Now, when you save the changes and click on the plus button, you will see the input box: Fancy! However, it's still not useful, and we want to pause for a second and reflect on what just happened; we created a conditional template in the HTML page that will either show an input box or a plus button, depending on the value of a variable. This variable is a reactive variable, called a reactive context. This means that if we change the value of the variable (like we do with the click event handler), then the view automatically updates because the new_cat helpers function (a reactive computation) will rerun. Congratulations, you've just used Meteor's reactive programming model! To really bring this home, let's add a change to the lists collection (which is also a reactive context, remember?) and figure out a way to hide the input field when we're done. First, we need to add a listener for the keyup event. Or, to put it another way, we want to listen when the user types something in the box and hits Enter. When this happens, we want to add a category based on what the user typed. To do this, let's first declare the event handler. Just after the click handler for #btnNewCat, let's add another event handler: 'click #btnNewCat': function (e, t) {    ... }, 'keyup #add-category': function (e,t){    if (e.which === 13)    {      var catVal = String(e.target.value || "");      if (catVal)      {        lists.insert({Category:catVal});        Session.set('adding_category', false);      }    } } We add a "," character at the end of the first click handler, and then add the keyup event handler. Now, let's check each of the lines in the preceding code: This line checks to see whether we hit the Enter/Return key. if (e.which === 13) This line of code checks to see whether the input field has any value in it: var catVal = String(e.target.value || ""); if (catVal) If it does, we want to add an entry to the lists collection: lists.insert({Category:catVal}); Then, we want to hide the input box, which we can do by simply modifying the value of adding_category: Session.set('adding_category', false); There is one more thing to add and then we'll be done. When we click away from the input box, we want to hide it and bring back the plus button. 
We already know how to do this reactively, so let's add a quick function that changes the value of adding_category. To do this, add one more comma after the keyup event handler and insert the following event handler: 'keyup #add-category': function (e,t){ ... }, 'focusout #add-category': function(e,t){    Session.set('adding_category',false); } Save your changes, and let's see this in action! In your web browser on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter. Your screen should now resemble the following screenshot: Feel free to add more categories if you like. Also, experiment by clicking on the plus button, typing something in, and then clicking away from the input field. Summary In this article, you learned about the history of web applications and saw how we've moved from a traditional client/server model to a nested MVC design pattern. You learned what smart apps are, and you also saw how Meteor has taken smart apps to the next level with Data On The Wire, Latency Compensation, and Full Stack Reactivity. You saw how Meteor uses templates and helpers to automatically update content, using reactive variables and reactive computations. Lastly, you added more functionality to the Lending Library. You made a button and an input field to add categories, and you did it all using reactive programming rather than directly editing the HTML code. Resources for Article: Further resources on this subject: Building the next generation Web with Meteor [article] Quick start - creating your first application [article] Meteor.js JavaScript Framework: Why Meteor Rocks! [article]

Materials, and why they are essential

Packt
08 Jul 2015
11 min read
In this article by Ciro Cardoso, author of the book Lumion3D Best Practices, we will see materials, and why they are essential. In the 3D world, materials and textures are nearly as important as the 3D geometry that composes the scene. A material defines the optical properties of an object when hit by a ray of light. In other words, a material defines how the light interacts with the surface, and textures can help not only to control the color (diffuse), but also the reflections and glossiness. (For more resources related to this topic, see here.) It's not difficult to understand that textures are another essential part of a good material, and if your goal is to achieve believable results, you need textures or images of real elements like stone, wood, brick, and other natural elements. Textures can bring detail to your surface that otherwise would require geometry to look good. In that case, how can Lumion help you and, most importantly, what are the best practices to work with materials? Let's have a look at the following section which will provide the answer. A quick overview of Lumion's materials Lumion always had a good library of materials to assign to your 3D model, The reality is that Physically-Based Rendering (PBR) is more of a concept than a set of rules, and each render engine will implement slightly differently. The good news for us as users is that these materials follow realistic shading and lighting systems to accurately represent real-world materials. You can find excellent information regarding PBR on the following sites: http://www.marmoset.co/toolbag/learn/pbr-theory http://www.marmoset.co/toolbag/learn/pbr-practice https://www.allegorithmic.com/pbr-guide More than 600 materials are already prepared to be assigned directly to your 3D model and, by default, they should provide a realistic and appropriate material. The Lumion team has also made an effort to create a better and simpler interface, as you can see in the following screenshot: The interface was simplified, showing only the most common and essential settings. If you need more control over the material, click on the More… button to have access to extra functionalities. One word of caution: the material preview, which in this case is the sphere, will not reflect the changes you perform using the settings available. For example, if you change the main texture, the sphere will continue to show the previous material. A good practice to tweak materials is to assign the material to the surface, use the viewport to check how the settings are affecting the material, and then do a quick render. The viewport will try to show the final result, but there's nothing like a quick render to see how the material really looks when Lumion does all the lighting and shading calculations. Working with materials in Lumion – three options There are three options to work with materials in Lumion: Using Lumion's materials Using the imported materials that you can create on your favorite modeling application Creating materials using Lumion's Standard material Let's have a look at each one of these options and see how they can help you and when they best suit your project. Using Lumion's materials The first option is obvious; you are using Lumion and it makes sense using Lumion's materials, but you may feel constrained by what is available at Lumion's material library. 
However, instead of thinking, "I only have 600 materials and I cannot find what I need!", you need to look at the materials library also as a template to create other materials. For example, if none of the brick materials is similar to what you need, nothing stops you from using a brick material, changing the Gloss and Reflection values, and loading a new texture, creating an entirely new material. This is made possible by using the Choose Color Map button, as shown in the following screenshot: When you click on the Choose Color Map button, a new window appears where you can navigate to the folder where the texture is saved. What about the second square? The one with a purple color? Let's see the answer in the following section. Normal maps and their functionality The purple square you just saw is where you can load the normal map. And what is a normal map? Firstly, a normal map is not a bump map. A bump map uses a color range from black to white and in some ways is more limited than a normal map. The following screenshots show the clear difference between these two maps: The map on the left is a bump map and you can see that the level of detail is not the same that we can get with a normal map. A normal map consists of red, green, and blue colors that represent the x, y, and z coordinates. This allows a 2D image to represent depth and Lumion uses this depth information to fake lighting details based on the color associated with the 3D coordinate. The perks of using a normal map Why should you use normal maps? Keep in mind that Lumion is a real-time rendering engine and, as you saw previously, there is the need to keep a balance between detail and geometry. If you add too much detail, the 3D model will look gorgeous but Lumion's performance will suffer drastically. On the other hand, you can have a low-poly 3D model and fake detail with a normal map. Using a normal map for each material has a massive impact on the final quality you can get with Lumion. Since these maps are so important, how can you create one? Tips to create normal maps As you will understand, we cannot cover all the different techniques to create normal maps. However, you may find something to suit your workflow in the following list: Photoshop using an action script called nDo: Teddy Bergsman is the author of this fantastic script. It is a free script that creates a very accurate normal map of any texture you load in Photoshop in seconds. To download and see how to install this script, visit the following link: http://www.philipk.net/ndo.html Here you can find a more detailed tutorial on how to use this nDo script: http://www.philipk.net/tutorials/ndo/ndo.html This script has three options to create normal maps. The default option is Smooth, which gives you a blurry normal map. Then you have the Chisel Hard option to generate a very sharp and subtle normal map but you don't have much control over the final result. The Chisel Soft option is similar to the Chisel Hard except that you have full control over the intensity and bevel radius. This script also allows you to sculpt and combine several normal maps. Using the Quixel NDO application: From the same creator, we have a more capable and optimized application called Quixel NDO. With this application, you can sculpt normal maps in real-time, build your own normal maps without using textures, and preview everything with the 3DO material preview. This is quite useful because you don't have to save the normal map and see how it looks in Lumion. 
3DO (which comes free with NDO) has a physically based renderer and lets you load a 3D model to see how the texture looks. Find more information including a free trial here: http://quixel.se/dev/ndo GIMP with the normalmap plugin: If you want to use free software, a good alternative is GIMP. There is a great plugin called normalmap, which does good work not only by creating a normal map but also by providing a preview window to see the tweaks you are making. To download this plugin, visit the following link: https://code.google.com/p/gimp-normalmap/ Do it online with NormalMap-Online: In case you don't want to install another application, the best option is doing it online. In that case, you may want to have a look at NormalMap-Online, as shown in the following screenshot: The process is extremely simple as you can see from the preceding screenshot. You load the image and automatically get a normal map, and on the right-hand side there is a preview to show how the normal map and the texture work together. Christian Petry is the man behind this tool that will help to create sharp and accurate normal maps. He is a great guy and if you like this online application, please consider supporting an application that will save you time and money. Find this online tool here: http://cpetry.github.io/NormalMap-Online/ Don't forget to use a good combination of Strength and Blur/Sharp to create a well-balanced map. You need the correct amount of detail; otherwise your normal map will be too noisy in terms of detail. However, Lumion being a user-friendly application gives you a hand on this topic by providing a tool to create a normal map automatically from a texture you import. Creating a normal map with Lumion's relief feature By now, creating a normal map from a texture is not something too technical or even complex, but it can be time consuming if you need to create a normal map for each texture. This is a wise move because it will remove the need for extra detail for the model to look good. With this in mind, Lumion's team created a new feature that allows you to create a normal map for any texture you import. After loading the new texture, the next step is to click on the Create Normal Map button, as highlighted in the following screenshot: Lumion then creates a normal map based on the texture imported, and you have the ability to invert the map by clicking on the Flip Normal Map direction button, as highlighted in the preceding screenshot. Once Lumion creates the normal map, you need a way to control how the normal map affects the material and the light. For that, you need to use the Relief slider, as shown in the following screenshot: Using this slider is very intuitive; you only need to move the slider and see the adjustments on the viewport, since the material preview will not be updated. The previous screenshot is a good example of that, because even when we loaded a wood texture, the preview still shows a concrete material. Again, this means you can easily use the settings from one material and use that as a base to create something completely new. But how good is the normal map that Lumion creates for you? Have a look for yourself in the following screenshot: On the left hand side, we have a wood floor material with a normal map that Lumion created. The right-hand side image is the same material but the normal map was created using the free nDo script for Photoshop. 
There is a big difference between the image on the left and the image on the right, and that is related to the normal maps used in this case. You can see clearly that the normal map used for the image on the right achieves the goal of bringing more detail to the surface. The difference is that the normal map that Lumion creates in some situations is too blurry, and for that reason we end up losing detail. Before we explore a few more things regarding creating custom materials in Lumion, let's have a look at another useful feature in Lumion. Summary Physically based rendering materials aren't that scary, don't you agree? In reality, Lumion makes this feature almost unnoticeable by making it so simple. You learned what this feature involves and how you can take full advantage of materials that make your render more believable. You learned the importance of using normal maps and how to create them using a variety of tools for all flavors. You also saw how we can easily improve material reflections without compromising the speed and quality of the render. You also learned another key aspect of Lumion: flexibility to create your own materials using the Standard material. The Standard material, although slightly different from the other materials available in Lumion, lets you play with the reflections, glossiness, and other settings that are essential. On top of all of this, you learned how to create textures. Resources for Article: Further resources on this subject: Unleashing the powers of Lumion [article] Mastering Lumion 3D [article] What is Lumion? [article]

File Sharing

Packt
08 Jul 2015
14 min read
In this article by Dan Ristic, author of the book Learning WebRTC, we will cover the following topics: Getting a file with File API Setting up our page Getting a reference to a file The real power of a data channel comes when combining it with other powerful technologies from a browser. By opening up the power to send data peer-to-peer and combining it with a File API, we could open up all new possibilities in your browser. This means you could add file sharing functionalities that are available to any user with an Internet connection. The application that we will build will be a simple one with the ability to share files between two peers. The basics of our application will be real-time, meaning that the two users have to be on the page at the same time to share a file. There will be a finite number of steps that both users will go through to transfer an entire file between them: User A will open the page and type a unique ID. User B will open the same page and type the same unique ID. The two users can then connect to each other using RTCPeerConnection. Once the connection is established, one user can select a file to share. The other user will be notified of the file that is being shared, where it will be transferred to their computer over the connection and they will download the file. The main thing we will focus on throughout the article is how to work with the data channel in new and exciting ways. We will be able to take the file data from the browser, break it down into pieces, and send it to the other user using only the RTCPeerConnection API. The interactivity that the API promotes will stand out in this article and can be used in a simple project. Getting a file with the File API One of the first things that we will cover is how to use the File API to get a file from the user's computer. There is a good chance you have interacted with the File API on a web page and have not even realized it yet! The API is usually denoted by the Browse or Choose File text located on an input field in the HTML page and often looks something similar to this: Although the API has been around for quite a while, the one you are probably familiar with is the original specification, dating back as far as 1995. This was the Form-based File Upload in HTML specification that focused on allowing a user to upload a file to a server using an HTML form. Before the days of the file input, application developers had to rely on third-party tools to request files of data from the user. This specification was proposed in order to make a standard way to upload files for a server to download, save, and interact with. The original standard focused entirely on interacting with a file via an HTML form, however, and did not detail any way to interact with a file via JavaScript. This was the origin of the File API. Fast-forward to the groundbreaking days of HTML5 and we now have a fully-fledged File API. The goal of the new specification was to open the doors to file manipulation for web applications, allowing them to interact with files similar to how a native-installed application would. This means providing access to not only a way for the user to upload a file, but also ways to read the file in different formats, manipulate the data of the file, and then ultimately do something with this data. Although there are many great features of the API, we are going to only focus on one small aspect of this API. This is the ability to get binary file data from the user by asking them to upload a file. 
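Purely to illustrate the kind of binary access the File API gives us before we build anything, here is a small, hedged sketch (the element ID and variable names are ours, not part of the application built in this article) that reads a user-selected file into an ArrayBuffer with FileReader:

// Hypothetical illustration only: assumes a page with <input type="file" id="file-picker" />
var picker = document.querySelector('#file-picker');

picker.addEventListener('change', function () {
  var file = picker.files[0];
  if (!file) {
    return;
  }

  var reader = new FileReader();

  reader.onload = function (event) {
    // event.target.result is an ArrayBuffer holding the raw bytes of the file
    console.log('Read', event.target.result.byteLength, 'bytes from', file.name);
  };

  reader.onerror = function (error) {
    console.log('Could not read file', error);
  };

  reader.readAsArrayBuffer(file);
});

This is the same binary data a native application would see when opening the file, and it is exactly what we will eventually push across the data channel.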
A typical application that works with files, such as Notepad on Windows, will work with file data in pretty much the same way. It asks the user to open a file in which it will read the binary data from the file and display the characters on the screen. The File API gives us access to the same binary data that any other application would use in the browser. This is the great thing about working with the File API: it works in most browsers from a HTML page; similar to the ones we have been building for our WebRTC demos. To start building our application, we will put together another simple web page. This will look similar to the last ones, and should be hosted with a static file server as done in the previous examples. By the end of the article, you will be a professional single page application builder! Now let's take a look at the following HTML code that demonstrates file sharing: <!DOCTYPE html> <html lang="en"> <head>    <meta charset="utf-8" />      <title>Learning WebRTC - Article: File Sharing</title>      <style>      body {        background-color: #404040;        margin-top: 15px;        font-family: sans-serif;        color: white;      }        .thumb {        height: 75px;        border: 1px solid #000;        margin: 10px 5px 0 0;      }        .page {        position: relative;        display: block;        margin: 0 auto;        width: 500px;        height: 500px;      }        #byte_content {        margin: 5px 0;        max-height: 100px;        overflow-y: auto;        overflow-x: hidden;      }        #byte_range {        margin-top: 5px;      }    </style> </head> <body>    <div id="login-page" class="page">      <h2>Login As</h2>      <input type="text" id="username" />      <button id="login">Login</button>    </div>      <div id="share-page" class="page">      <h2>File Sharing</h2>        <input type="text" id="their-username" />      <button id="connect">Connect</button>      <div id="ready">Ready!</div>        <br />      <br />           <input type="file" id="files" name="file" /> Read bytes:      <button id="send">Send</button>    </div>      <script src="client.js"></script> </body> </html> The page should be fairly recognizable at this point. We will use the same page showing and hiding via CSS as done earlier. One of the main differences is the appearance of the file input, which we will utilize to have the user upload a file to the page. I even picked a different background color this time to spice things up. Setting up our page Create a new folder for our file sharing application and add the HTML code shown in the preceding section. You will also need all the steps from our JavaScript file to log in two users, create a WebRTC peer connection, and create a data channel between them. 
Copy the following code into your JavaScript file to get the page set up: var name, connectedUser;   var connection = new WebSocket('ws://localhost:8888');   connection.onopen = function () { console.log("Connected"); };   // Handle all messages through this callback connection.onmessage = function (message) { console.log("Got message", message.data);   var data = JSON.parse(message.data);   switch(data.type) {    case "login":      onLogin(data.success);      break;    case "offer":      onOffer(data.offer, data.name);      break;    case "answer":      onAnswer(data.answer);      break;    case "candidate":      onCandidate(data.candidate);      break;    case "leave":      onLeave();      break;    default:      break; } };   connection.onerror = function (err) { console.log("Got error", err); };   // Alias for sending messages in JSON format function send(message) { if (connectedUser) {    message.name = connectedUser; }   connection.send(JSON.stringify(message)); };   var loginPage = document.querySelector('#login-page'), usernameInput = document.querySelector('#username'), loginButton = document.querySelector('#login'), theirUsernameInput = document.querySelector('#their- username'), connectButton = document.querySelector('#connect'), sharePage = document.querySelector('#share-page'), sendButton = document.querySelector('#send'), readyText = document.querySelector('#ready'), statusText = document.querySelector('#status');   sharePage.style.display = "none"; readyText.style.display = "none";   // Login when the user clicks the button loginButton.addEventListener("click", function (event) { name = usernameInput.value;   if (name.length > 0) {    send({      type: "login",      name: name    }); } });   function onLogin(success) { if (success === false) {    alert("Login unsuccessful, please try a different name."); } else {    loginPage.style.display = "none";    sharePage.style.display = "block";      // Get the plumbing ready for a call    startConnection(); } };   var yourConnection, connectedUser, dataChannel, currentFile, currentFileSize, currentFileMeta;   function startConnection() { if (hasRTCPeerConnection()) {    setupPeerConnection(); } else {    alert("Sorry, your browser does not support WebRTC."); } }   function setupPeerConnection() { var configuration = {    "iceServers": [{ "url": "stun:stun.1.google.com:19302 " }] }; yourConnection = new RTCPeerConnection(configuration, {optional: []});   // Setup ice handling yourConnection.onicecandidate = function (event) {    if (event.candidate) {      send({        type: "candidate",       candidate: event.candidate      });    } };   openDataChannel(); }   function openDataChannel() { var dataChannelOptions = {    ordered: true,    reliable: true,    negotiated: true,    id: "myChannel" }; dataChannel = yourConnection.createDataChannel("myLabel", dataChannelOptions);   dataChannel.onerror = function (error) {    console.log("Data Channel Error:", error); };   dataChannel.onmessage = function (event) {    // File receive code will go here };   dataChannel.onopen = function () {    readyText.style.display = "inline-block"; };   dataChannel.onclose = function () {    readyText.style.display = "none"; }; }   function hasUserMedia() { navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia; return !!navigator.getUserMedia; }   function hasRTCPeerConnection() { window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || 
window.mozRTCPeerConnection; window.RTCSessionDescription = window.RTCSessionDescription || window.webkitRTCSessionDescription || window.mozRTCSessionDescription; window.RTCIceCandidate = window.RTCIceCandidate || window.webkitRTCIceCandidate || window.mozRTCIceCandidate; return !!window.RTCPeerConnection; }   function hasFileApi() { return window.File && window.FileReader && window.FileList && window.Blob; }   connectButton.addEventListener("click", function () { var theirUsername = theirUsernameInput.value;   if (theirUsername.length > 0) {    startPeerConnection(theirUsername); } });   function startPeerConnection(user) { connectedUser = user;   // Begin the offer yourConnection.createOffer(function (offer) {    send({      type: "offer",      offer: offer    });    yourConnection.setLocalDescription(offer); }, function (error) {    alert("An error has occurred."); }); };   function onOffer(offer, name) { connectedUser = name; yourConnection.setRemoteDescription(new RTCSessionDescription(offer));   yourConnection.createAnswer(function (answer) {    yourConnection.setLocalDescription(answer);      send({      type: "answer",      answer: answer    }); }, function (error) {    alert("An error has occurred"); }); };   function onAnswer(answer) { yourConnection.setRemoteDescription(new RTCSessionDescription(answer)); };   function onCandidate(candidate) { yourConnection.addIceCandidate(new RTCIceCandidate(candidate)); };   function onLeave() { connectedUser = null; yourConnection.close(); yourConnection.onicecandidate = null; setupPeerConnection(); }; We set up references to our elements on the screen as well as get the peer connection ready to be processed. When the user decides to log in, we send a login message to the server. The server will return with a success message telling the user they are logged in. From here, we allow the user to connect to another WebRTC user who is given their username. This sends offer and response, connecting the two users together through the peer connection. Once the peer connection is created, we connect the users through a data channel so that we can send arbitrary data across. Hopefully, this is pretty straightforward and you are able to get this code up and running in no time. It should all be familiar to you by now. This is the last time we are going to refer to this code, so get comfortable with it before moving on! Getting a reference to a file Now that we have a simple page up and running, we can start working on the file sharing part of the application. The first thing the user needs to do is select a file from their computer's filesystem. This is easily taken care of already by the input element on the page. The browser will allow the user to select a file from their computer and then save a reference to that file in the browser for later use. When the user presses the Send button, we want to get a reference to the file that the user has selected. To do this, you need to add an event listener, as shown in the following code: sendButton.addEventListener("click", function (event) { var files = document.querySelector('#files').files;   if (files.length > 0) {    dataChannelSend({      type: "start",      data: files[0]    });      sendFile(files[0]); } }); You might be surprised at how simple the code is to get this far! This is the amazing thing about working within a browser. Much of the hard work has already been done for you. Here, we will get a reference to our input element and the files that it has selected. 
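Before we unpack what that files[0] object actually contains, here is a minimal, hedged preview of the sendFile function the handler calls. The book builds the real implementation step by step later on, so treat this only as a sketch under our own assumptions: a fixed chunk size, and the dataChannel created in openDataChannel earlier.

// Hypothetical sketch only - not the book's actual implementation
var CHUNK_SIZE = 16384; // 16 KB per message is a conservative, widely supported size

function sendFile(file) {
  var reader = new FileReader();
  var offset = 0;

  function readSlice() {
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
  }

  reader.onload = function (event) {
    // Push the raw bytes of this slice through the RTCDataChannel
    dataChannel.send(event.target.result);
    offset += CHUNK_SIZE;

    if (offset < file.size) {
      readSlice(); // keep reading until the whole file has been sent
    } else {
      console.log('Finished sending', file.name);
    }
  };

  readSlice();
}

Slicing the file keeps memory usage flat and plays nicely with the ordered, reliable data channel we configured, since each chunk arrives in the order it was sent.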
The input element supports both multiple and single selection of files, but in this example we will only work with one file at a time. We then make sure we have a file to work with, tell the other user that we want to start sending data, and then call our sendFile function, which we will implement later in this article. Now, you might think that the object we get back will be the entire data inside our file. What we actually get back from the input element is an object representing metadata about the file itself. Let's take a look at this metadata:

{
  lastModified: 1364868324000,
  lastModifiedDate: "2013-04-02T02:05:24.000Z",
  name: "example.gif",
  size: 1745559,
  type: "image/gif"
}

This gives us the information we need to tell the other user that we want to start sending a file named example.gif. It also gives a few other important details, such as the type of file we are sending and when it was last modified. The next step is to read the file's data and send it through the data channel. This is no easy task, however, and it will require some special logic.

Summary

In this article, we covered the basics of using the File API and retrieving a file from a user's computer. The article also discussed the page setup for the application using JavaScript and getting a reference to a file.

Resources for Article:

Further resources on this subject: WebRTC with SIP and IMS [article] Using the WebRTC Data API [article] Applications of WebRTC [article]

Deployment Preparations

Packt
08 Jul 2015
23 min read
In this article by Jurie-Jan Botha, author of the book Grunt Cookbook, has covered the following recipes: Minifying HTML Minifying CSS Optimizing images Linting JavaScript code Uglifying JavaScript code Setting up RequireJS (For more resources related to this topic, see here.) Once our web application is built and its stability ensured, we can start preparing it for deployment to its intended market. This will mainly involve the optimization of the assets that make up the application. Optimization in this context mostly refers to compression of one kind or another, some of which might lead to performance increases too. The focus on compression is primarily due to the fact that the smaller the asset, the faster it can be transferred from where it is hosted to a user's web browser. This leads to a much better user experience, and can sometimes be essential to the functioning of an application. Minifying HTML In this recipe, we make use of the contrib-htmlmin (0.3.0) plugin to decrease the size of some HTML documents by minifying them. Getting ready In this example, we'll work with the a basic project structure. How to do it... The following steps take us through creating a sample HTML document and configuring a task that minifies it: We'll start by installing the package that contains the contrib-htmlmin plugin. Next, we'll create a simple HTML document called index.html in the src directory, which we'd like to minify, and add the following content in it: <html> <head>    <title>Test Page</title> </head> <body>    <!-- This is a comment! -->    <h1>This is a test page.</h1> </body> </html> Now, we'll add the following htmlmin task to our configuration, which indicates that we'd like to have the white space and comments removed from the src/index.html file, and that we'd like the result to be saved in the dist/index.html file: htmlmin: { dist: {    src: 'src/index.html',    dest: 'dist/index.html',    options: {      removeComments: true,      collapseWhitespace: true    } } } The removeComments and collapseWhitespace options are used as examples here, as using the default htmlmin task will have no effect. Other minification options can be found at the following URL: https://github.com/kangax/html-minifier#options-quick-reference We can now run the task using the grunt htmlmin command, which should produce output similar to the following: Running "htmlmin:dist" (htmlmin) task Minified dist/index.html 147 B ? 92 B If we now take a look at the dist/index.html file, we will see that all white space and comments have been removed: <html> <head>    <title>Test Page</title> </head> <body>    <h1>This is a test page.</h1> </body> </html> Minifying CSS In this recipe, we'll make use of the contrib-cssmin (0.10.0) plugin to decrease the size of some CSS documents by minifying them. Getting ready In this example, we'll work with a basic project structure. How to do it... The following steps take us through creating a sample CSS document and configuring a task that minifies it. We'll start by installing the package that contains the contrib-cssmin plugin. Then, we'll create a simple CSS document called style.css in the src directory, which we'd like to minify, and provide it with the following contents: body { /* Average body style */ background-color: #ffffff; color: #000000; /*! 
Black (Special) */ } Now, we'll add the following cssmin task to our configuration, which indicates that we'd like to have the src/style.css file compressed, and have the result saved to the dist/style.min.css file: cssmin: { dist: {    src: 'src/style.css',    dest: 'dist/style.min.css' } } We can now run the task using the grunt cssmin command, which should produce the following output: Running "cssmin:dist" (cssmin) taskFile dist/style.css created: 55 B ? 38 B If we take a look at the dist/style.min.css file that was produced, we will see that it has the compressed contents of the original src/style.css file: body{background-color:#fff;color:#000;/*! Black (Special) */} There's more... The cssmin task provides us with several useful options that can be used in conjunction with its basic compression feature. We'll look at prefixing a banner, removing special comments, and reporting gzipped results. Prefixing a banner In the case that we'd like to automatically include some information about the compressed result in the resulting CSS file, we can do so in a banner. A banner can be prepended to the result by supplying the desired banner content to the banner option, as shown in the following example: cssmin: { dist: {    src: 'src/style.css',    dest: 'dist/style.min.css',    options: {      banner: '/* Minified version of style.css */'    } } } Removing special comments Comments that should not be removed by the minification process are called special comments and can be indicated using the "/*! comment */" markers. By default, the cssmin task will leave all special comments untouched, but we can alter this behavior by making use of the keepSpecialComments option. The keepSpecialComments option can be set to either the *, 1, or 0 value. The * value is the default and indicates that all special comments should be kept, 1 indicates that only the first comment that is found should be kept, and 0 indicates that none of them should be kept. The following configuration will ensure that all comments are removed from our minified result: cssmin: { dist: {    src: 'src/style.css',    dest: 'dist/style.min.css',    options: {      keepSpecialComments: 0    } } } Reporting on gzipped results Reporting is useful to see exactly how well the cssmin task has compressed our CSS files. By default, the size of the targeted file and minified result will be displayed, but if we'd also like to see the gzipped size of the result, we can set the report option to gzip, as shown in the following example: cssmin: { dist: {    src: 'src/main.css',    dest: 'dist/main.css',    options: {      report: 'gzip'    } } } Optimizing images In this recipe, we'll make use of the contrib-imagemin (0.9.4) plugin to decrease the size of images by compressing them as much as possible without compromising on their quality. This plugin also provides a plugin framework of its own, which is discussed at the end of this recipe. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through configuring a task that will compress an image for our project. We'll start by installing the package that contains the contrib-imagemin plugin. Next, we can ensure that we have an image called image.jpg in the src directory on which we'd like to perform optimizations. 
Now, we'll add the following imagemin task to our configuration and indicate that we'd like to have the src/image.jpg file optimized, and have the result saved to the dist/image.jpg file: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg' } } We can then run the task using the grunt imagemin command, which should produce the following output: Running "imagemin:dist" (imagemin) task Minified 1 image (saved 13.36 kB) If we now take a look at the dist/image.jpg file, we will see that its size has decreased without any impact on the quality. There's more... The imagemin task provides us with several options that allow us to tweak its optimization features. We'll look at how to adjust the PNG compression level, disable the progressive JPEG generation, disable the interlaced GIF generation, specify SVGO plugins to be used, and use the imagemin plugin framework. Adjusting the PNG compression level The compression of a PNG image can be increased by running the compression algorithm on it multiple times. By default, the compression algorithm is run 16 times. This number can be changed by providing a number from 0 to 7 to the optimizationLevel option. The 0 value means that the compression is effectively disabled and 7 indicates that the algorithm should run 240 times. In the following configuration we set the compression level to its maximum: imagemin: { dist: {    src: 'src/image.png',    dest: 'dist/image.png',    options: {      optimizationLevel: 7    } } } Disabling the progressive JPEG generation Progressive JPEGs are compressed in multiple passes, which allows a low-quality version of them to quickly become visible and increase in quality as the rest of the image is received. This is especially helpful when displaying images over a slower connection. By default, the imagemin plugin will generate JPEG images in the progressive format, but this behavior can be disabled by setting the progressive option to false, as shown in the following example: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg',    options: {      progressive: false    } } } Disabling the interlaced GIF generation An interlaced GIF is the equivalent of a progressive JPEG in that it allows the contained image to be displayed at a lower resolution before it has been fully downloaded, and increases in quality as the rest of the image is received. By default, the imagemin plugin will generate GIF images in the interlaced format, but this behavior can be disabled by setting the interlaced option to false, as shown in the following example: imagemin: { dist: {    src: 'src/image.gif',    dest: 'dist/image.gif',    options: {      interlaced: false    } } } Specifying SVGO plugins to be used When optimizing SVG images, the SVGO library is used by default. This allows us to specify the use of various plugins provided by the SVGO library that each performs a specific function on the targeted files. Refer to the following URL for more detailed instructions on how to use the svgo plugins options and the SVGO library: https://github.com/sindresorhus/grunt-svgmin#available-optionsplugins Most of the plugins in the library are enabled by default, but if we'd like to specifically indicate which of these should be used, we can do so using the svgoPlugins option. Here, we can provide an array of objects, where each contain a property with the name of the plugin to be affected, followed by a true or false value to indicate whether it should be activated. 
The following configuration disables three of the default plugins: imagemin: { dist: {    src: 'src/image.svg',    dest: 'dist/image.svg',    options: {      svgoPlugins: [        {removeViewBox:false},        {removeUselessStrokeAndFill:false},        {removeEmptyAttrs:false}      ]    } } } Using the 'imagemin' plugin framework In order to provide support for the various image optimization projects, the imagemin plugin has a plugin framework of its own that allows developers to easily create an extension that makes use of the tool they require. You can get a list of the available plugin modules for the imagemin plugin's framework at the following URL: https://www.npmjs.com/browse/keyword/imageminplugin The following steps will take us through installing and making use of the mozjpeg plugin to compress an image in our project. These steps start where the main recipe takes off. We'll start by installing the imagemin-mozjpeg package using the npm install imagemin-mozjpeg command, which should produce the following output: imagemin-mozjpeg@4.0.0 node_modules/imagemin-mozjpeg With the package installed, we need to import it into our configuration file, so that we can make use of it in our task configuration. We do this by adding the following line at the top of our Gruntfile.js file: var mozjpeg = require('imagemin-mozjpeg'); With the plugin installed and imported, we can now change the configuration of our imagemin task by adding the use option and providing it with the initialized plugin: imagemin: { dist: {    src: 'src/image.jpg',    dest: 'dist/image.jpg',    options: {      use: [mozjpeg()]    } } } Finally, we can test our setup by running the task using the grunt imagemin command. This should produce an output similar to the following: Running "imagemin:dist" (imagemin) task Minified 1 image (saved 9.88 kB) Linting JavaScript code In this recipe, we'll make use of the contrib-jshint (0.11.1) plugin to detect errors and potential problems in our JavaScript code. It is also commonly used to enforce code conventions within a team or project. As can be derived from its name, it's basically a Grunt adaptation for the JSHint tool. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating a sample JavaScript file and configuring a task that will scan and analyze it using the JSHint tool. We'll start by installing the package that contains the contrib-jshint plugin. Next, we'll create a sample JavaScript file called main.js in the src directory, and add the following content in it: sample = 'abc'; console.log(sample); With our sample file ready, we can now add the following jshint task to our configuration. We'll configure this task to target the sample file and also add a basic option that we require for this example: jshint: { main: {    options: {      undef: true    },    src: ['src/main.js'] } } The undef option is a standard JSHint option used specifically for this example and is not required for this plugin to function. Specifying this option indicates that we'd like to have errors raised for variables that are used without being explicitly defined. We can now run the task using the grunt jshint command, which should produce output informing us of the problems found in our sample file: Running "jshint:main" (jshint) task      src/main.js      1 |sample = 'abc';          ^ 'sample' is not defined.      2 |console.log(sample);          ^ 'console' is not defined.      
2 |console.log(sample);                      ^ 'sample' is not defined.   >> 3 errors in 1 file There's more... The jshint task provides us with several options that allow us to change its general behavior, in addition to how it analyzes the targeted code. We'll look at how to specify standard JSHint options, specify globally defined variables, send reported output to a file, and prevent task failure on JSHint errors. Specifying standard JSHint options The contrib-jshint plugin provides a simple way to pass all the standard JSHint options from the task's options object to the underlying JSHint tool. A list of all the options provided by the JSHint tool can be found at the following URL: http://jshint.com/docs/options/ The following example adds the curly option to the task we created in our main recipe to enforce the use of curly braces wherever they are appropriate: jshint: { main: {    options: {      undef: true,      curly: true    },    src: ['src/main.js'] } } Specifying globally defined variables Making use of globally defined variables is quite common when working with JavaScript, which is where the globals option comes in handy. Using this option, we can define a set of global values that we'll use in the targeted code, so that errors aren't raised when JSHint encounters them. In the following example, we indicate that the console variable should be treated as a global, and not raise errors when encountered: jshint: { main: {    options: {      undef: true,      globals: {        console: true      }    },    src: ['src/main.js'] } } Sending reported output to a file If we'd like to store the resulting output from our JSHint analysis, we can do so by specifying a path to a file that should receive it using the reporterOutput option, as shown in the following example: jshint: { main: {    options: {      undef: true,      reporterOutput: 'report.dat'    },    src: ['src/main.js'] } } Preventing task failure on JSHint errors The default behavior for the jshint task is to exit the running Grunt process once a JSHint error is encountered in any of the targeted files. This behavior becomes especially undesirable if you'd like to keep watching files for changes, even when an error has been raised. In the following example, we indicate that we'd like to keep the process running when errors are encountered by giving the force option a true value: jshint: { main: {    options: {      undef: true,      force: true    },    src: ['src/main.js'] } } Uglifying JavaScript Code In this recipe, we'll make use of the contrib-uglify (0.8.0) plugin to compress and mangle some files containing JavaScript code. For the most part, the process of uglifying just removes all the unnecessary characters and shortens variable names in a source code file. This has the potential to dramatically reduce the size of the file, slightly increase performance, and make the inner workings of your publicly available code a little more obscure. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating a sample JavaScript file and configuring a task that will uglify it. We'll start by installing the package that contains the contrib-uglify plugin. 
Then, we can create a sample JavaScript file called main.js in the src directory, which we'd like to uglify, and provide it with the following contents: var main = function () { var one = 'Hello' + ' '; var two = 'World';   var result = one + two;   console.log(result); }; With our sample file ready, we can now add the following uglify task to our configuration, indicating the sample file as the target and providing a destination output file: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js' } } We can now run the task using the grunt uglify command, which should produce output similar to the following: Running "uglify:main" (uglify) task >> 1 file created. If we now take a look at the resulting dist/main.js file, we should see that it contains the uglified contents of the original src/main.js file. There's more... The uglify task provides us with several options that allow us to change its general behavior and see how it uglifies the targeted code. We'll look at specifying standard UglifyJS options, generating source maps, and wrapping generated code in an enclosure. Specifying standard UglifyJS options The underlying UglifyJS tool can provide a set of options for each of its separate functional parts. These parts are the mangler, compressor, and beautifier. The contrib-plugin allows passing options to each of these parts using the mangle, compress, and beautify options. The available options for each of the mangler, compressor, and beautifier parts can be found at each of following URLs (listed in the order mentioned): https://github.com/mishoo/UglifyJS2#mangler-options https://github.com/mishoo/UglifyJS2#compressor-options https://github.com/mishoo/UglifyJS2#beautifier-options The following example alters the configuration of the main recipe to provide a single option to each of these parts: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      mangle: {        toplevel: true      },      compress: {        evaluate: false      },      beautify: {        semicolons: false      }    } } } Generating source maps As code gets mangled and compressed, it becomes effectively unreadable to humans, and therefore, nearly impossible to debug. For this reason, we are provided with the option of generating a source map when uglifying our code. The following example makes use of the sourceMap option to indicate that we'd like to have a source map generated along with our uglified code: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      sourceMap: true    } } } Running the altered task will now, in addition to the dist/main.js file with our uglified source, generate a source map file called main.js.map in the same directory as the uglified file. Wrapping generated code in an enclosure When building your own JavaScript code modules, it's usually a good idea to have them wrapped in a wrapper function to ensure that you don't pollute the global scope with variables that you won't be using outside of the module itself. For this purpose, we can use the wrap option to indicate that we'd like to have the resulting uglified code wrapped in a wrapper function, as shown in the following example: uglify: { main: {    src: 'src/main.js',    dest: 'dist/main.js',    options: {      wrap: true    } } } If we now take a look at the result dist/main.js file, we should see that all the uglified contents of the original file are now contained within a wrapper function. 
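Each recipe in this article registers its own task, so it is convenient to chain them into a single command once they are all configured. The following is only a hedged sketch of how such an aggregate task could be registered; the task and package names must match whatever you actually set up in your own Gruntfile:

// Hypothetical aggregate task - adjust the list to the tasks you actually use
module.exports = function (grunt) {
  grunt.initConfig({
    // ... the htmlmin, cssmin, imagemin, jshint and uglify
    // configurations from the previous recipes go here ...
  });

  grunt.loadNpmTasks('grunt-contrib-htmlmin');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-imagemin');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // "grunt dist" now lints first and then runs every optimization step in order
  grunt.registerTask('dist', ['jshint', 'htmlmin', 'cssmin', 'imagemin', 'uglify']);
};

Running the linter first means a broken build fails fast, before any time is spent compressing assets.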
Setting up RequireJS In this recipe, we'll make use of the contrib-requirejs (0.4.4) plugin to package the modularized source code of our web application into a single file. For the most part, this plugin just provides a wrapper for the RequireJS tool. RequireJS provides a framework to modularize JavaScript source code and consume those modules in an orderly fashion. It also allows packaging an entire application into one file and importing only the modules that are required while keeping the module structure intact. Getting ready In this example, we'll work with the basic project structure. How to do it... The following steps take us through creating some files for a sample application and setting up a task that bundles them into one file. We'll start by installing the package that contains the contrib-requirejs plugin. First, we'll need a file that will contain our RequireJS configuration. Let's create a file called config.js in the src directory and add the following content in it: require.config({ baseUrl: 'app' }); Secondly, we'll create a sample module that we'd like to use in our application. Let's create a file called sample.js in the src/app directory and add the following content in it: define(function (require) { return function () {    console.log('Sample Module'); } }); Lastly, we'll need a file that will contain the main entry point for our application, and also makes use of our sample module. Let's create a file called main.js in the src/app directory and add the following content in it: require(['sample'], function (sample) { sample(); }); Now that we've got all the necessary files required for our sample application, we can setup a requirejs task that will bundle it all into one file: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js'    } } } The mainConfigFile option points out the configuration file that will determine the behavior of RequireJS. The name option indicates the name of the module that contains the application entry point. In the case of this example, our application entry point is contained in the app/main.js file, and app is the base directory of our application in the src/config.js file. This translates the app/main.js filename into the main module name. The out option is used to indicate the file that should receive the result of the bundled application. We can now run the task using the grunt requirejs command, which should produce output similar to the following: Running "requirejs:app" (requirejs) task We should now have a file named app.js in the www/js directory that contains our entire sample application. There's more... The requirejs task provides us with all the underlying options provided by the RequireJS tool. We'll look at how to use these exposed options and generate a source map. Using RequireJS optimizer options The RequireJS optimizer is quite an intricate tool, and therefore, provides a large number of options to tweak its behavior. The contrib-requirejs plugin allows us to easily set any of these options by just specifying them as options of the plugin itself. 
A list of all the available configuration options for the RequireJS build system can be found in the example configuration file at the following URL: https://github.com/jrburke/r.js/blob/master/build/example.build.js The following example indicates that the UglifyJS2 optimizer should be used instead of the default UglifyJS optimizer by using the optimize option: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js',      optimize: 'uglify2'    } } } Generating a source map When the source code is bundled into one file, it becomes somewhat harder to debug, as you now have to trawl through miles of code to get to the point you're actually interested in. A source map can help us with this issue by relating the resulting bundled file to the modularized structure it is derived from. Simply put, with a source map, our debugger will display the separate files we had before, even though we're actually using the bundled file. The following example makes use of the generateSourceMap option to indicate that we'd like to generate a source map along with the resulting file: requirejs: { app: {    options: {      mainConfigFile: 'src/config.js',      name: 'main',      out: 'www/js/app.js',      optimize: 'uglify2',      preserveLicenseComments: false,      generateSourceMaps: true    } } } In order to use the generateSourceMap option, we have to indicate that UglifyJS2 is to be used for optimization, by setting the optimize option to uglify2, and that license comments should not be preserved, by setting the preserveLicenseComments option to false. Summary This article covers the optimization of images, minifying of CSS, ensuring the quality of our JavaScript code, compressing it, and packaging it all together into one source file. Resources for Article: Further resources on this subject: Grunt in Action [article] So, what is Node.js? [article] Exploring streams [article]

Man, Do I Like Templates!

Packt
07 Jul 2015
22 min read
In this article by Italo Maia, author of the book Building Web Applications with Flask, we will discuss what Jinja2 is, and how Flask uses Jinja2 to implement the View layer and awe you. Be prepared! (For more resources related to this topic, see here.) What is Jinja2 and how is it coupled with Flask? Jinja2 is a library found at http://jinja.pocoo.org/; you can use it to produce formatted text with bundled logic. Unlike the Python format function, which only allows you to replace markup with variable content, you can have a control structure, such as a for loop, inside a template string and use Jinja2 to parse it. Let's consider this example: from jinja2 import Template x = """ <p>Uncle Scrooge nephews</p> <ul> {% for i in my_list %} <li>{{ i }}</li> {% endfor %} </ul> """ template = Template(x) # output is an unicode string print template.render(my_list=['Huey', 'Dewey', 'Louie']) In the preceding code, we have a very simple example where we create a template string with a for loop control structure ("for tag", for short) that iterates over a list variable called my_list and prints the element inside a "li HTML tag" using curly braces {{ }} notation. Notice that you could call render in the template instance as many times as needed with different key-value arguments, also called the template context. A context variable may have any valid Python variable name—that is, anything in the format given by the regular expression [a-zA-Z_][a-zA-Z0-9_]*. For a full overview on regular expressions (Regex for short) with Python, visit https://docs.python.org/2/library/re.html. Also, take a look at this nice online tool for Regex testing http://pythex.org/. A more elaborate example would make use of an environment class instance, which is a central, configurable, extensible class that may be used to load templates in a more organized way. Do you follow where we are going here? This is the basic principle behind Jinja2 and Flask: it prepares an environment for you, with a few responsive defaults, and gets your wheels in motion. What can you do with Jinja2? Jinja2 is pretty slick. You can use it with template files or strings; you can use it to create formatted text, such as HTML, XML, Markdown, and e-mail content; you can put together templates, reuse templates, and extend templates; you can even use extensions with it. The possibilities are countless, and combined with nice debugging features, auto-escaping, and full unicode support. Auto-escaping is a Jinja2 configuration where everything you print in a template is interpreted as plain text, if not explicitly requested otherwise. Imagine a variable x has its value set to <b>b</b>. If auto-escaping is enabled, {{ x }} in a template would print the string as given. If auto-escaping is off, which is the Jinja2 default (Flask's default is on), the resulting text would be b. Let's understand a few concepts before covering how Jinja2 allows us to do our coding. First, we have the previously mentioned curly braces. 
Double curly braces are a delimiter that allows you to evaluate a variable or function from the provided context and print it into the template: from jinja2 import Template # create the template t = Template("{{ variable }}") # – Built-in Types – t.render(variable='hello you') >> u"hello you" t.render(variable=100) >> u"100" # you can evaluate custom classes instances class A(object): def __str__(self):    return "__str__" def __unicode__(self):    return u"__unicode__" def __repr__(self):    return u"__repr__" # – Custom Objects Evaluation – # __unicode__ has the highest precedence in evaluation # followed by __str__ and __repr__ t.render(variable=A()) >> u"__unicode__" In the preceding example, we see how to use curly braces to evaluate variables in your template. First, we evaluate a string and then an integer. Both result in a unicode string. If we evaluate a class of our own, we must make sure there is a __unicode__ method defined, as it is called during the evaluation. If a __unicode__ method is not defined, the evaluation falls back to __str__ and __repr__, sequentially. This is easy. Furthermore, what if we want to evaluate a function? Well, just call it: from jinja2 import Template # create the template t = Template("{{ fnc() }}") t.render(fnc=lambda: 10) >> u"10" # evaluating a function with argument t = Template("{{ fnc(x) }}") t.render(fnc=lambda v: v, x='20') >> u"20" t = Template("{{ fnc(v=30) }}") t.render(fnc=lambda v: v) >> u"30" To output the result of a function in a template, just call the function as any regular Python function. The function return value will be evaluated normally. If you're familiar with Django, you might notice a slight difference here. In Django, you do not need the parentheses to call a function, or even pass arguments to it. In Flask, the parentheses are always needed if you want the function return evaluated. The following two examples show the difference between Jinja2 and Django function call in a template: {# flask syntax #} {{ some_function() }}   {# django syntax #} {{ some_function }} You can also evaluate Python math operations. Take a look: from jinja2 import Template # no context provided / needed Template("{{ 3 + 3 }}").render() >> u"6" Template("{{ 3 - 3 }}").render() >> u"0" Template("{{ 3 * 3 }}").render() >> u"9" Template("{{ 3 / 3 }}").render() >> u"1" Other math operators will also work. You may use the curly braces delimiter to access and evaluate lists and dictionaries: from jinja2 import Template Template("{{ my_list[0] }}").render(my_list=[1, 2, 3]) >> u'1' Template("{{ my_list['foo'] }}").render(my_list={'foo': 'bar'}) >> u'bar' # and here's some magic Template("{{ my_list.foo }}").render(my_list={'foo': 'bar'}) >> u'bar' To access a list or dictionary value, just use normal plain Python notation. With dictionaries, you can also access a key value using variable access notation, which is pretty neat. Besides the curly braces delimiter, Jinja2 also has the curly braces/percentage delimiter, which uses the notation {% stmt %} and is used to execute statements, which may be a control statement or not. Its usage depends on the statement, where control statements have the following notation: {% stmt %} {% endstmt %} The first tag has the statement name, while the second is the closing tag, which has the name of the statement appended with end in the beginning. You must be aware that a non-control statement may not have a closing tag. 
Let's look at some examples: {% block content %} {% for i in items %} {{ i }} - {{ i.price }} {% endfor %} {% endblock %} The preceding example is a little more complex than what we have been seeing. It uses a control statement for loop inside a block statement (you can have a statement inside another), which is not a control statement, as it does not control execution flow in the template. Inside the for loop you see that the i variable is being printed together with the associated price (defined elsewhere). A last delimiter you should know is {# comments go here #}. It is a multi-line delimiter used to declare comments. Let's see two examples that have the same result: {# first example #} {# second example #} Both comment delimiters hide the content between {# and #}. As can been seen, this delimiter works for one-line comments and multi-line comments, what makes it very convenient. Control structures We have a nice set of built-in control structures defined by default in Jinja2. Let's begin our studies on it with the if statement. {% if true %}Too easy{% endif %} {% if true == true == True %}True and true are the same{% endif %} {% if false == false == False %}False and false also are the same{% endif %} {% if none == none == None %}There's also a lowercase None{% endif %} {% if 1 >= 1 %}Compare objects like in plain python{% endif %} {% if 1 == 2 %}This won't be printed{% else %}This will{% endif %} {% if "apples" != "oranges" %}All comparison operators work = ]{% endif %} {% if something %}elif is also supported{% elif something_else %}^_^{% endif %} The if control statement is beautiful! It behaves just like a python if statement. As seen in the preceding code, you can use it to compare objects in a very easy fashion. "else" and "elif" are also fully supported. You may also have noticed that true and false, non-capitalized, were used together with plain Python Booleans, True and False. As a design decision to avoid confusion, all Jinja2 templates have a lowercase alias for True, False, and None. By the way, lowercase syntax is the preferred way to go. If needed, and you should avoid this scenario, you may group comparisons together in order to change precedence evaluation. See the following example: {% if 5 < 10 < 15 %}true{%else%}false{% endif %} {% if (5 < 10) < 15 %}true{%else%}false{% endif %} {% if 5 < (10 < 15) %}true{%else%}false{% endif %} The expected output for the preceding example is true, true, and false. The first two lines are pretty straightforward. In the third line, first, (10<15) is evaluated to True, which is a subclass of int, where True == 1. Then 5 < True is evaluated, which is certainly false. The for statement is pretty important. One can hardly think of a serious Web application that does not have to show a list of some kind at some point. The for statement can iterate over any iterable instance and has a very simple, Python-like syntax: {% for item in my_list %} {{ item }}{# print evaluate item #} {% endfor %} {# or #} {% for key, value in my_dictionary.items() %} {{ key }}: {{ value }} {% endfor %} In the first statement, we have the opening tag indicating that we will iterate over my_list items and each item will be referenced by the name item. The name item will be available inside the for loop context only. In the second statement, we have an iteration over the key value tuples that form my_dictionary, which should be a dictionary (if the variable name wasn't suggestive enough). Pretty simple, right? The for loop also has a few tricks in store for you. 
When building HTML lists, it's a common requirement to mark each list item in alternating colors in order to improve readability or mark the first or/and last item with some special markup. Those behaviors can be achieved in a Jinja2 for-loop through access to a loop variable available inside the block context. Let's see some examples: {% for i in ['a', 'b', 'c', 'd'] %} {% if loop.first %}This is the first iteration{% endif %} {% if loop.last %}This is the last iteration{% endif %} {{ loop.cycle('red', 'blue') }}{# print red or blue alternating #} {{ loop.index }} - {{ loop.index0 }} {# 1 indexed index – 0 indexed index #} {# reverse 1 indexed index – reverse 0 indexed index #} {{ loop.revindex }} - {{ loop.revindex0 }} {% endfor %} The for loop statement, as in Python, also allow the use of else, but with a slightly different meaning. In Python, when you use else with for, the else block is only executed if it was not reached through a break command like this: for i in [1, 2, 3]: pass else: print "this will be printed" for i in [1, 2, 3]: if i == 3:    break else: print "this will never not be printed" As seen in the preceding code snippet, the else block will only be executed in a for loop if the execution was never broken by a break command. With Jinja2, the else block is executed when the for iterable is empty. For example: {% for i in [] %} {{ i }} {% else %}I'll be printed{% endfor %} {% for i in ['a'] %} {{ i }} {% else %}I won't{% endfor %} As we are talking about loops and breaks, there are two important things to know: the Jinja2 for loop does not support break or continue. Instead, to achieve the expected behavior, you should use loop filtering as follows: {% for i in [1, 2, 3, 4, 5] if i > 2 %} value: {{ i }}; loop.index: {{ loop.index }} {%- endfor %} In the first tag you see a normal for loop together with an if condition. You should consider that condition as a real list filter, as the index itself is only counted per iteration. Run the preceding example and the output will be the following: value:3; index: 1 value:4; index: 2 value:5; index: 3 Look at the last observation in the preceding example—in the second tag, do you see the dash in {%-? It tells the renderer that there should be no empty new lines before the tag at each iteration. Try our previous example without the dash and compare the results to see what changes. We'll now look at three very important statements used to build templates from different files: block, extends, and include. block and extends always work together. The first is used to define "overwritable" blocks in a template, while the second defines a parent template that has blocks, for the current template. Let's see an example: # coding:utf-8 with open('parent.txt', 'w') as file:    file.write(""" {% block template %}parent.txt{% endblock %} =========== I am a powerful psychic and will tell you your past   {#- "past" is the block identifier #} {% block past %} You had pimples by the age of 12. {%- endblock %}   Tremble before my power!!!""".strip())   with open('child.txt', 'w') as file:    file.write(""" {% extends "parent.txt" %}   {# overwriting the block called template from parent.txt #} {% block template %}child.txt{% endblock %}   {#- overwriting the block called past from parent.txt #} {% block past %} You've bought an ebook recently. 
{%- endblock %}""".strip()) with open('other.txt', 'w') as file:    file.write(""" {% extends "child.txt" %} {% block template %}other.txt{% endblock %}""".strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() # tell the environment how to load templates env.loader = FileSystemLoader('.') # look up our template tmpl = env.get_template('parent.txt') # render it to default output print tmpl.render() print "" # loads child.html and its parent tmpl = env.get_template('child.txt') print tmpl.render() # loads other.html and its parent env.get_template('other.txt').render() Do you see the inheritance happening, between child.txt and parent.txt? parent.txt is a simple template with two block statements, called template and past. When you render parent.txt directly, its blocks are printed "as is", because they were not overwritten. In child.txt, we extend the parent.txt template and overwrite all its blocks. By doing that, we can have different information in specific parts of a template without having to rewrite the whole thing. With other.txt, for example, we extend the child.txt template and overwrite only the block-named template. You can overwrite blocks from a direct parent template or from any of its parents. If you were defining an index.txt page, you could have default blocks in it that would be overwritten when needed, saving lots of typing. Explaining the last example, Python-wise, is pretty simple. First, we create a Jinja2 environment (we talked about this earlier) and tell it how to load our templates, then we load the desired template directly. We do not have to bother telling the environment how to find parent templates, nor do we need to preload them. The include statement is probably the easiest statement so far. It allows you to render a template inside another in a very easy fashion. Let's look at an example: with open('base.txt', 'w') as file: file.write(""" {{ myvar }} You wanna hear a dirty joke? {% include 'joke.txt' %} """.strip()) with open('joke.txt', 'w') as file: file.write(""" A boy fell in a mud puddle. {{ myvar }} """.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() # tell the environment how to load templates env.loader = FileSystemLoader('.') print env.get_template('base.txt').render(myvar='Ha ha!') In the preceding example, we render the joke.txt template inside base.txt. As joke.txt is rendered inside base.txt, it also has full access to the base.txt context, so myvar is printed normally. Finally, we have the set statement. It allows you to define variables for inside the template context. Its use is pretty simple: {% set x = 10 %} {{ x }} {% set x, y, z = 10, 5+5, "home" %} {{ x }} - {{ y }} - {{ z }} In the preceding example, if x was given by a complex calculation or a database query, it would make much more sense to have it cached in a variable, if it is to be reused across the template. As seen in the example, you can also assign a value to multiple variables at once. Macros Macros are the closest to coding you'll get inside Jinja2 templates. The macro definition and usage are similar to plain Python functions, so it is pretty easy. 
Let's try an example: with open('formfield.html', 'w') as file: file.write(''' {% macro input(name, value='', label='') %} {% if label %} <label for='{{ name }}'>{{ label }}</label> {% endif %} <input id='{{ name }}' name='{{ name }}' value='{{ value }}'></input> {% endmacro %}'''.strip()) with open('index.html', 'w') as file: file.write(''' {% from 'formfield.html' import input %} <form method='get' action='.'> {{ input('name', label='Name:') }} <input type='submit' value='Send'></input> </form> '''.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() env.loader = FileSystemLoader('.') print env.get_template('index.html').render() In the preceding example, we create a macro that accepts a name argument and two optional arguments: value and label. Inside the macro block, we define what should be output. Notice we can use other statements inside a macro, just like a template. In index.html we import the input macro from inside formfield.html, as if formfield was a module and input was a Python function using the import statement. If needed, we could even rename our input macro like this: {% from 'formfield.html' import input as field_input %} You can also import formfield as a module and use it as follows: {% import 'formfield.html' as formfield %} When using macros, there is a special case where you want to allow any named argument to be passed into the macro, as you would in a Python function (for example, **kwargs). With Jinja2 macros, these values are, by default, available in a kwargs dictionary that does not need to be explicitly defined in the macro signature. For example: # coding:utf-8 with open('formfield.html', 'w') as file:    file.write(''' {% macro input(name) -%} <input id='{{ name }}' name='{{ name }}' {% for k,v in kwargs.items() -%}{{ k }}='{{ v }}' {% endfor %}></input> {%- endmacro %} '''.strip())with open('index.html', 'w') as file:    file.write(''' {% from 'formfield.html' import input %} {# use method='post' whenever sending sensitive data over HTTP #} <form method='post' action='.'> {{ input('name', type='text') }} {{ input('passwd', type='password') }} <input type='submit' value='Send'></input> </form> '''.strip())   from jinja2 import Environment, FileSystemLoader   env = Environment() env.loader = FileSystemLoader('.') print env.get_template('index.html').render() As you can see, kwargs is available even though you did not define a kwargs argument in the macro signature. Macros have a few clear advantages over plain templates, that you notice with the include statement: You do not have to worry about variable names in the template using macros You can define the exact required context for a macro block through the macro signature You can define a macro library inside a template and import only what is needed Commonly used macros in a Web application include a macro to render pagination, another to render fields, and another to render forms. You could have others, but these are pretty common use cases. Regarding our previous example, it is good practice to use HTTPS (also known as, Secure HTTP) to send sensitive information, such as passwords, over the Internet. Be careful about that! Extensions Extensions are the way Jinja2 allows you to extend its vocabulary. 
Extensions are not enabled by default, so you can enable an extension only when and if you need, and start using it without much trouble: env = Environment(extensions=['jinja2.ext.do',   'jinja2.ext.with_']) In the preceding code, we have an example where you create an environment with two extensions enabled: do and with. Those are the extensions we will study in this article. As the name suggests, the do extension allows you to "do stuff". Inside a do tag, you're allowed to execute Python expressions with full access to the template context. Flask-Empty, a popular flask boilerplate available at https://github.com/italomaia/flask-empty uses the do extension to update a dictionary in one of its macros, for example. Let's see how we could do the same: {% set x = {1:'home', '2':'boat'} %} {% do x.update({3: 'bar'}) %} {%- for key,value in x.items() %} {{ key }} - {{ value }} {%- endfor %} In the preceding example, we create the x variable with a dictionary, then we update it with {3: 'bar'}. You don't usually need to use the do extension but, when you do, a lot of coding is saved. The with extension is also very simple. You use it whenever you need to create block scoped variables. Imagine you have a value you need cached in a variable for a brief moment; this would be a good use case. Let's see an example: {% with age = user.get_age() %} My age: {{ age }} {% endwith %} My age: {{ age }}{# no value here #} As seen in the example, age exists only inside the with block. Also, variables set inside a with block will only exist inside it. For example: {% with %} {% set count = query.count() %} Current Stock: {{ count }} Diff: {{ prev_count - count }} {% endwith %} {{ count }} {# empty value #} Filters Filters are a marvelous thing about Jinja2! This tool allows you to process a constant or variable before printing it to the template. The goal is to implement the formatting you want, strictly in the template. To use a filter, just call it using the pipe operator like this: {% set name = 'junior' %} {{ name|capitalize }} {# output is Junior #} Its name is passed to the capitalize filter that processes it and returns the capitalized value. To inform arguments to the filter, just call it like a function, like this: {{ ['Adam', 'West']|join(' ') }} {# output is Adam West #} The join filter will join all values from the passed iterable, putting the provided argument between them. Jinja2 has an enormous quantity of available filters by default. That means we can't cover them all here, but we can certainly cover a few. capitalize and lower were seen already. 
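Before the further filter examples, here is a short, runnable sketch of the do extension from the previous section; the dictionary contents and variable names are ours, made up for illustration. Only the do extension is enabled here; with_ can be added to the extensions list in the same way on Jinja2 versions where it is still a separate extension, as shown earlier.

from jinja2 import Environment

# Enable the do extension discussed above.
env = Environment(extensions=['jinja2.ext.do'])

# The template builds a dict, updates it with a do statement,
# then prints its contents. All names here are illustrative only.
tmpl = env.from_string("""
{% set prices = {'apple': 2} %}
{% do prices.update({'banana': 1}) %}
{%- for name, price in prices.items() %}
{{ name }}: {{ price }}
{%- endfor %}
""".strip())

print(tmpl.render())

With that aside done, let's get back to the built-in filters.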
Let's look at some further examples: {# prints default value if input is undefined #} {{ x|default('no opinion') }} {# prints default value if input evaluates to false #} {{ none|default('no opinion', true) }} {# prints input as it was provided #} {{ 'some opinion'|default('no opinion') }}   {# you can use a filter inside a control statement #} {# sort by key case-insensitive #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort %}{{ key }}{% endfor %} {# sort by key case-sensitive #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort(true) %}{{ key }}{% endfor %} {# sort by value #} {% for key in {'A':3, 'b':2, 'C':1}|dictsort(false, 'value') %}{{ key }}{% endfor %} {{ [3, 2, 1]|first }} - {{ [3, 2, 1]|last }} {{ [3, 2, 1]|length }} {# prints input length #} {# same as in python #} {{ '%s, =D'|format("I'm John") }} {{ "He has two daughters"|replace('two', 'three') }} {# safe prints the input without escaping it first#} {{ '<input name="stuff" />'|safe }} {{ "there are five words here"|wordcount }} Try the preceding example to see exactly what each filter does. After reading this much about Jinja2, you're probably thinking: "Jinja2 is cool but this is a book about Flask. Show me the Flask stuff!". Ok, ok, I can do that! Of what we have seen so far, almost everything can be used with Flask with no modifications. As Flask manages the Jinja2 environment for you, you don't have to worry about creating file loaders and stuff like that. One thing you should be aware of, though, is that, because you don't instantiate the Jinja2 environment yourself, you can't really pass to the class constructor, the extensions you want to activate. To activate an extension, add it to Flask during the application setup as follows: from flask import Flask app = Flask(__name__) app.jinja_env.add_extension('jinja2.ext.do') # or jinja2.ext.with_ if __name__ == '__main__': app.run() Messing with the template context You can use the render_template method to load a template from the templates folder and then render it as a response. from flask import Flask, render_template app = Flask(__name__)   @app.route("/") def hello():    return render_template("index.html") If you want to add values to the template context, as seen in some of the examples in this article, you would have to add non-positional arguments to render_template: from flask import Flask, render_template app = Flask(__name__)   @app.route("/") def hello():    return render_template("index.html", my_age=28) In the preceding example, my_age would be available in the index.html context, where {{ my_age }} would be translated to 28. my_age could have virtually any value you want to exhibit, actually. Now, what if you want all your views to have a specific value in their context, like a version value—some special code or function; how would you do it? Flask offers you the context_processor decorator to accomplish that. You just have to annotate a function that returns a dictionary and you're ready to go. 
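For instance, here is a hedged sketch that exposes a version value to every template context; the APP_VERSION name and its value are ours, made up for illustration. Any template rendered by this application could then print {{ version }} without the view passing it explicitly.

from flask import Flask, render_template

app = Flask(__name__)

# Hypothetical version string, made up for this example.
APP_VERSION = '1.0.3'

@app.context_processor
def version_processor():
    # Everything in the returned dict is merged into every template context.
    return dict(version=APP_VERSION)

@app.route("/")
def hello():
    # index.html can now use {{ version }} directly.
    return render_template("index.html")

if __name__ == '__main__':
    app.run()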
Here is another example, which makes a lucky_number function available to the template:

from flask import Flask, render_template

app = Flask(__name__)

@app.context_processor
def luck_processor():
    from random import randint
    def lucky_number():
        return randint(1, 10)
    return dict(lucky_number=lucky_number)

@app.route("/")
def hello():
    # lucky_number will be available in the index.html context by default
    return render_template("index.html")

Summary
In this article, we saw how to render templates using only Jinja2, how control statements look and how to use them, how to write a comment, how to print variables in a template, how to write and use macros, how to load and use extensions, and how to register context processors. I don't know about you, but this article felt like a lot of information! I strongly advise you to run the examples and experiment with them. Knowing your way around Jinja2 will save you a lot of headaches.

Resources for Article:
Further resources on this subject:
Recommender systems dissected [article]
Deployment and Post Deployment [article]
Handling sessions and users [article]
Introduction to Custom Template Filters and Tags [article]

Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book, Moodle E-Learning Course Development - Third Edition shows you how to use groups to separate students in a course into teams. You will also learn how to use cohorts to mass enroll students into courses. Groups versus cohorts Groups and cohorts are both collections of students. There are several differences between them. We can sum up these differences in one sentence, that is; cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class. Think of a cohort as a group of students working together through the same academic curriculum. For example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course. Groups are used to manage various activities within a course. Cohort is a system-wide or course category-wide set of students. There is a small amount of overlap between what you can do with a cohort and a group. However, the differences are large enough that you would not want to substitute one for the other. Cohorts In this article, we'll look at how to create and use cohorts. You can perform many operations with cohorts in bulk, affecting many students at once. Creating a cohort To create a cohort, perform the following steps: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed. Enter a Name for the cohort. This is the name that you will see when you work with the cohort. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters; such as letters, numbers, some special characters, and no spaces in the Cohort ID option. For example, Spring_2012_Freshmen. Enter a Description that will help you and other administrators remember the purpose of the cohort. Click on Save changes. Now that the cohort is created, you can begin adding users to this cohort. Adding students to a cohort Students can be added to a cohort manually by searching and selecting them. They can also be added in bulk by uploading a file to Moodle. Manually adding and removing students to a cohort If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student will be unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort: From the main menu, select Site administration | Users | Accounts | Cohorts. On the Cohorts page, for the cohort to which you want to add students, click on the people icon: The Cohort Assign page is displayed. The left-hand side panel displays users that are already in the cohort, if any. The right-hand side panel displays users that can be added to the cohort. Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields. Use the Add and Remove button to move users from one panel to another. Adding students to a cohort in bulk – upload When you upload students to Moodle, you can add them to a cohort. 
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort. This makes it easier to manipulate them later. Here is an example of a cohort. Note that there are 1,204 students enrolled in the cohort: These students were uploaded to the cohort under Administration | Site Administration | Users | Upload users: The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks: username,email,firstname,lastname,cohort1 moodler_1,bill@williamrice.net,Bill,Binky,open-enrollmentmoodlers moodler_2,rose@williamrice.net,Rose,Krial,open-enrollmentmoodlers moodler_3,jeff@williamrice.net,Jeff,Marco,open-enrollmentmoodlers moodler_4,dave@williamrice.net,Dave,Gallo,open-enrollmentmoodlers In this example, we have the minimum required information to create new students. These are as follows: The username The e-mail address The first name The last name We also have the cohort ID (the short name of the cohort) in which we want to place a student. During the upload process, you can see a preview of the file that you will upload: Further down on the Upload users preview page, you can choose the Settings option to handle the upload: Usually, when we upload users to Moodle, we will create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in the cohort, it's often faster to create a text file and upload it, than to search your existing users. This is because when you create a text file, you can use powerful tools—such as spreadsheets and databases—to quickly create this file. If you want to perform this, you will find options to Update existing users under the Upload type field. In most Moodle systems, a user's profile must include a city and country. When you upload a user to a system, you can specify the city and country in the upload file or omit them from the upload file and assign the city and country to the system while the file is uploaded. This is performed under Default values on the Upload users page: Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle: Prepare a plain file that has, at minimum, the username, email, firstname, lastname, and cohort1 information. If you were to create this in a spreadsheet, it may look similar to the following screenshot: Under Administration | Site Administration | Users | Upload users, select the text file that you will upload. On this page, choose Settings to describe the text file, such as delimiter (separator) and encoding. Click on the Upload users button. You will see the first few rows of the text file displayed. Also, additional settings become available on this page. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users. 
Click on the Upload users button to begin the upload. Cohort sync Using the cohort sync enrolment method, you can enroll and un-enroll large collections of students at once. Using cohort sync involves several steps: Creating a cohort. Enrolling students in the cohort. Enabling the cohort sync enrollment method. Adding the cohort sync enrollment method to a course. You saw the first two steps: how to create a cohort and how to enroll students in the cohort. We will cover the last two steps: enabling the cohort sync method and adding the cohort sync to a course. Enabling the cohort sync enrollment method To enable the cohort sync enrollment method, you will need to log in as an administrator. This cannot be done by someone who has only teacher rights: Select Site administration | Plugins | Enrolments | Manage enrol plugins. Click on the Enable icon located next to Cohort sync. Then, click on the Settings button located next to Cohort sync. On the Settings page, choose the default role for people when you enroll them in a course using Cohort sync. You can change this setting for each course. You will also choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all his/her grades are removed from the course. The user's grades are purged from Moodle. If you were to read this user to the cohort, all the user's activity in this course will be blank, as if the user was never in the course. If you choose Disable course enrolment and remove roles, the user and all his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to read this user to the cohort or to the course, this user's course records will be restored. After enabling the cohort sync method, it's time to actually add this method to a course. Adding the cohort sync enrollment method to a course To perform this, you will need to log in as an administrator or a teacher in the course: Log in and enter the course to which you want to add the enrolment method. Select Course administration | Users | Enrolment methods. From the Add method drop-down menu, select Cohort sync. In Custom instance name, enter a name for this enrolment method. This will enable you to recognize this method in a list of cohort syncs. For Active, select Yes. This will enroll the users. Select the Cohort option. Select the role that the members of the cohort will be given. Click on the Save changes button. All the users in the cohort will be given a selected role in the course. Un-enroll a cohort from a course There are two ways to un-enroll a cohort from a course. First, you can go to the course's enrollment methods page and delete the enrollment method. Just click on the X button located next to the cohort sync field that you added to the course. However, this will not just remove users from the course, but also delete all their course records. The second method preserves the student records. Once again, go to the course's enrollment methods page located next to the Cohort sync method that you added and click on the Settings icon. On the Settings page, select No for Active. This will remove the role that the cohort was given. However, the members of the cohort will still be listed as course participants. So, as the members of the cohort do not have a role in the course, they can no longer access this course. However, their grades and activity reports are preserved. 
Differences between cohort sync and enrolling a cohort Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations. If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person. Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, if you want to un-enroll some users from some courses, but not from all courses, you remove them from the cohort. So, these users are removed from all the courses. This is how cohort sync works. Cohort sync is everyone or no one When a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced. If that's what you want, great. If not, An alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once. You will need to un-enroll them one at a time. If you enroll a cohort all at once, after enrollment, users are independent entities. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish. To enroll a cohort in a course, perform the following steps: Enter the course as an administrator or teacher. Select Administration | Course administration | Users | Enrolled users. Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site. Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message. Now, click on the OK button. You will be taken back to the Enrolled users page. Note that although you can enroll all users in a cohort (all at once), there is no button to un-enroll them all at once. You will need to remove them one at a time from your course. Managing students with groups A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course. For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together. After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group. You may want an activity or resource to be open to just one group of people. You don't want others in the class to be able to use that activity or resource. Course versus activity You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups. 
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course. Also, it will segregate just this activity, or resource between groups. The three group modes For a course or activity, there are several ways to apply groups. Here are the three group modes: No groups: There are no groups for a course or activity. If students have been placed in groups, ignore it. Also, give everyone the same access to the course or activity. Separate groups: If students have been placed in groups, allow them to see other students and only the work of other students from their own group. Students and work from other groups are invisible. Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only. You can use the No groups setting on an activity in your course. Here, you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news. Also, you can use the Separate groups setting in a course. Here, you will run different groups at different times. For each group that runs through the course, it will be like a brand new course. You can use the Visible groups setting in a course. Here, students are part of a large and in-person class; you want them to collaborate in small groups online. Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting, students will never see each other's assignment submissions. Creating a group There are three ways to create groups in a course. You can: Manually create and populate each group Automatically create and populate groups based on the characteristics of students Import groups using a text file We'll cover these methods in the following subsections. Manually creating and populating a group Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps: Select Course administration | Users | Groups. This takes you to the Groups page. Click on the Create group button. The Create group page is displayed. You must enter a Name for the group. This will be the name that teachers and administrators see when they manage a group. The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional. The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to a group. The Enrolment key is a code that you can give to students who self enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group. If you add a picture to this group, then when members are listed (as in a forum), the member will have the group picture shown next to them. Here is an example of a contributor to a forum on http://www.moodle.org with her group memberships: Click on the Save changes button to save the group. 
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group: Note the Search fields. These enable you to search for students that meet a specific criteria. You can search the first name, last name, and e-mail address. The other part of the user's profile information is not available in this search box. Automatically creating and populating a group When you automatically create groups, Moodle creates a number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create a group, use the following steps: Click on the Auto-create groups button. The Auto-create groups page is displayed. In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on. In the Auto-create based on field, you will tell the system to choose either of the following options:     Create a specific number of groups and then fill each group with as many students as needed (Number of groups)     Create as many groups as needed so that each group has a specific number of students (Members per group). In the Group/member count field, you will tell the system to choose either of the following options:     How many groups to create (if you choose the preceding Number of groups option)     How many members to put in each group (if you choose the preceding Members per group option) Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort. The setting for Prevent last small group is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five. Then, it would create another group for the last two members. However, with Prevent last small group selected, it will distribute the remaining two members between the first two groups. Click on the Preview button to preview the results. The preview will not show you the names of the members in groups, but it will show you how many groups and members will be in each group. Importing groups The term importing groups may give you the impression that you will import students into a group. The import groups button does not import students into groups. It imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature to do this. This needs to be done by a site administrator. If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to the cohort, you will add them to a course and group. 
You perform this by specifying the course and group fields in the upload file, as shown in the following code: username,email,firstname,lastname,course1,group1,course2 moodler_1,bill@williamrice.net,Bill,Binky,history101,odds,science101 moodler_2,rose@williamrice.net,Rose,Krial,history101,even,science101 moodler_3,jeff@williamrice.net,Jeff,Marco,history101,odds,science101 moodler_4,dave@williamrice.net,Dave,Gallo,history101,even,science101 In this example, we have the minimum needed information to create new students. These are as follows: The username The e-mail address The first name The last name We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky, and Jeff Marco are placed in a group called odds. Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group. Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users. Summary Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for. Useful Links: What's New in Moodle 2.0 Moodle for Online Communities Understanding Web-based Applications and Other Multimedia Forms


JSON with JSON.Net

Packt
25 Jun 2015
16 min read
In this article by Ray Rischpater, author of the book JavaScript JSON Cookbook, we show you how you can use strong typing in your applications with JSON using C#, Java, and TypeScript. You'll find the following recipes: How to deserialize an object using Json.NET How to handle date and time objects using Json.NET How to deserialize an object using gson for Java How to use TypeScript with Node.js How to annotate simple types using TypeScript How to declare interfaces using TypeScript How to declare classes with interfaces using TypeScript Using json2ts to generate TypeScript interfaces from your JSON (For more resources related to this topic, see here.) While some say that strong types are for weak minds, the truth is that strong typing in programming languages can help you avoid whole classes of errors in which you mistakenly assume that an object of one type is really of a different type. Languages such as C# and Java provide strong types for exactly this reason. Fortunately, the JSON serializers for C# and Java support strong typing, which is especially handy once you've figured out your object representation and simply want to map JSON to instances of classes you've already defined. We use Json.NET for C# and gson for Java to convert from JSON to instances of classes you define in your application. Finally, we take a look at TypeScript, an extension of JavaScript that provides compile-time checking of types, compiling to plain JavaScript for use with Node.js and browsers. We'll look at how to install the TypeScript compiler for Node.js, how to use TypeScript to annotate types and interfaces, and how to use a web page by Timmy Kokke to automatically generate TypeScript interfaces from JSON objects. How to deserialize an object using Json.NET In this recipe, we show you how to use Newtonsoft's Json.NET to deserialize JSON to an object that's an instance of a class. We'll use Json.NET because although this works with the existing .NET JSON serializer, there are other things that I want you to know about Json.NET, which we'll discuss in the next two recipes. Getting ready To begin, you need to be sure you have a reference to Json.NET in your project. The easiest way to do this is to use NuGet; launch NuGet, search for Json.NET, and click on Install, as shown in the following screenshot: You'll also need a reference to the Newonsoft.Json namespace in any file that needs those classes with a using directive at the top of your file: usingNewtonsoft.Json; How to do it… Here's an example that provides the implementation of a simple class, converts a JSON string to an instance of that class, and then converts the instance back into JSON: using System; usingNewtonsoft.Json;   namespaceJSONExample {   public class Record {    public string call;    public double lat;    public double lng; } class Program {    static void Main(string[] args)      {        String json = @"{ 'call': 'kf6gpe-9',        'lat': 21.9749, 'lng': 159.3686 }";          var result = JsonConvert.DeserializeObject<Record>(          json, newJsonSerializerSettings            {        MissingMemberHandling = MissingMemberHandling.Error          });        Console.Write(JsonConvert.SerializeObject(result));          return;        } } } How it works… In order to deserialize the JSON in a type-safe manner, we need to have a class that has the same fields as our JSON. The Record class, defined in the first few lines does this, defining fields for call, lat, and lng. 
The Newtonsoft.Json namespace provides the JsonConvert class with the static methods SerializeObject and DeserializeObject. DeserializeObject is a generic method, taking the type of the object that should be returned as a type argument, and as arguments the JSON to parse and an optional argument indicating options for the JSON parsing. We pass the MissingMemberHandling property as a setting, indicating with the value of the enumeration Error that in the event that a field is missing, the parser should throw an exception. After parsing the class, we convert it again to JSON and write the resulting JSON to the console.

There's more…
If you skip passing the MissingMember option or pass Ignore (the default), you can have mismatches between field names in your JSON and your class, which probably isn't what you want for type-safe conversion. You can also pass the NullValueHandling field with a value of Include or Ignore. If Include, fields with null values are included; if Ignore, fields with null values are ignored.

See also
The full documentation for Json.NET is at http://www.newtonsoft.com/json/help/html/Introduction.htm.
Type-safe deserialization is also possible with JSON support using the .NET serializer; the syntax is similar. For an example, see the documentation for the JavaScriptSerializer class at https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx.

How to handle date and time objects using Json.NET
Dates in JSON are problematic because JavaScript's dates are stored as milliseconds from the epoch, which are generally unreadable to people. Different JSON parsers handle this differently; Json.NET has a nice IsoDateTimeConverter that formats the date and time in ISO format, making it human-readable for debugging or parsing on platforms other than JavaScript. You can extend this approach to convert any kind of formatted data in JSON attributes by creating new converter objects and using a converter object to convert from one value type to another.

How to do it…
Simply include a new IsoDateTimeConverter object when you call JsonConvert.SerializeObject, like this:

string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());

How it works…
This causes the serializer to invoke the IsoDateTimeConverter instance with any instance of date and time objects, returning ISO strings like this in your JSON:

2015-07-29T08:00:00

There's more…
Note that this can be parsed by Json.NET, but not by JavaScript; in JavaScript, you'll want to use a function like this:

function isoDateReviver(value) {
  if (typeof value === 'string') {
    var a = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)(?:([+-])(\d{2}):(\d{2}))?Z?$/.exec(value);
    if (a) {
      var utcMilliseconds = Date.UTC(+a[1], +a[2] - 1, +a[3], +a[4], +a[5], +a[6]);
      return new Date(utcMilliseconds);
    }
  }
  return value;
}

The rather hairy regular expression on the third line matches dates in the ISO format, extracting each of the fields. If the regular expression finds a match, it extracts each of the date fields, which are then used by the Date class's UTC method to create a new date. Note that the entire regular expression (everything between the / characters) should be on one line with no whitespace. It's a little long for this page, however!

See also
For more information on how Json.NET handles dates and times, see the documentation and example at http://www.newtonsoft.com/json/help/html/SerializeDateFormatHandling.htm.
How to deserialize an object using gson for Java
Like Json.NET, gson provides a way to specify the destination class to which you're deserializing a JSON object.

Getting ready
You'll need to include the gson JAR file in your application, just as you would for any other external API.

How to do it…
You use the same fromJson method that you use for type-unsafe JSON parsing with gson, except that you pass the class object to gson as the second argument, like this:

// Assuming we have a class Record that looks like this:
/*
class Record {
  private String call;
  private float lat;
  private float lng;
  // public API would access these fields
}
*/

Gson gson = new com.google.gson.Gson();
String json = "{ \"call\": \"kf6gpe-9\", \"lat\": 21.9749, \"lng\": 159.3686 }";
Record result = gson.fromJson(json, Record.class);

How it works…
The fromJson method always takes a Java class. In the example in this recipe, we convert directly to a plain old Java object that our application can use without needing to use the dereferencing and type conversion interface of JsonElement that gson provides.

There's more…
The gson library can deal with nested types and arrays as well. You can also hide fields from being serialized or deserialized by declaring them transient, which makes sense because transient fields aren't serialized.

See also
The documentation for gson and its support for deserializing instances of classes is at https://sites.google.com/site/gson/gson-user-guide#TOC-Object-Examples.

How to use TypeScript with Node.js
Using TypeScript with Visual Studio is easy; it's just part of the installation of Visual Studio for any version after Visual Studio 2013 Update 2. Getting the TypeScript compiler for Node.js is almost as easy; it's an npm install away.

How to do it…
On a command line with npm in your path, run the following command:

npm install -g typescript

The npm option -g tells npm to install the TypeScript compiler globally, so it's available to every Node.js application you write. Once you run it, npm downloads and installs the TypeScript compiler binary for your platform.

There's more…
Once you run this command to install the compiler, you'll have the TypeScript compiler tsc available on the command line. Compiling a file with tsc is as easy as writing the source code, saving it in a file with the .ts extension, and running tsc on it. For example, given the following TypeScript saved in the file hello.ts:

function greeter(person: string) {
  return "Hello, " + person;
}

var user: string = "Ray";

console.log(greeter(user));

Running tsc hello.ts at the command line creates the following JavaScript:

function greeter(person) {
  return "Hello, " + person;
}

var user = "Ray";

console.log(greeter(user));

Try it! As we'll see in the next section, the function declaration for greeter contains a single TypeScript annotation; it declares the argument person to be a string. Add the following line to the bottom of hello.ts:

console.log(greeter(2));

Now, run the tsc hello.ts command again; you'll get an error like this one:

C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2082: Supplied parameters do not match any signature of call target:
Could not apply type 'string' to argument 1 which is of type 'number'.
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2087: Could not select overload for 'call' expression.

This error indicates that I'm attempting to call greeter with a value of the wrong type, passing a number where greeter expects a string.
In the next recipe, we'll look at the kinds of type annotations TypeScript supports for simple types. See also The TypeScript home page, with tutorials and reference documentation, is at http://www.typescriptlang.org/. How to annotate simple types using TypeScript Type annotations with TypeScript are simple decorators appended to the variable or function after a colon. There's support for the same primitive types as in JavaScript, and to declare interfaces and classes, which we will discuss next. How to do it… Here's a simple example of some variable declarations and two function declarations: function greeter(person: string): string { return "Hello, " + person; }   function circumference(radius: number) : number { var pi: number = 3.141592654; return 2 * pi * radius; }   var user: string = "Ray";   console.log(greeter(user)); console.log("You need " + circumference(2) + " meters of fence for your dog."); This example shows how to annotate functions and variables. How it works… Variables—either standalone or as arguments to a function—are decorated using a colon and then the type. For example, the first function, greeter, takes a single argument, person, which must be a string. The second function, circumference, takes a radius, which must be a number, and declares a single variable in its scope, pi, which must be a number and has the value 3.141592654. You declare functions in the normal way as in JavaScript, and then add the type annotation after the function name, again using a colon and the type. So, greeter returns a string, and circumference returns a number. There's more… TypeScript defines the following fundamental type decorators, which map to their underlying JavaScript types: array: This is a composite type. For example, you can write a list of strings as follows: var list:string[] = [ "one", "two", "three"]; boolean: This type decorator can contain the values true and false. number: This type decorator is like JavaScript itself, can be any floating-point number. string: This type decorator is a character string. enum: An enumeration, written with the enum keyword, like this: enumColor { Red = 1, Green, Blue }; var c : Color = Color.Blue; any: This type indicates that the variable may be of any type. void: This type indicates that the value has no type. You'll use void to indicate a function that returns nothing. See also For a list of the TypeScript types, see the TypeScript handbook at http://www.typescriptlang.org/Handbook. How to declare interfaces using TypeScript An interface defines how something behaves, without defining the implementation. In TypeScript, an interface names a complex type by describing the fields it has. This is known as structural subtyping. How to do it… Declaring an interface is a little like declaring a structure or class; you define the fields in the interface, each with its own type, like this: interface Record { call: string; lat: number; lng: number; }   Function printLocation(r: Record) { console.log(r.call + ': ' + r.lat + ', ' + r.lng); }   var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686};   printLocation(myObj); How it works… The interface keyword in TypeScript defines an interface; as I already noted, an interface consists of the fields it declares with their types. In this listing, I defined a plain JavaScript object, myObj and then called the function printLocation, that I previously defined, which takes a Record. 
When calling printLocation with myObj, the TypeScript compiler checks the fields and types each field and only permits a call to printLocation if the object matches the interface. There's more… Beware! TypeScript can only provide compile-type checking. What do you think the following code does? interface Record { call: string; lat: number; lng: number; }   Function printLocation(r: Record) { console.log(r.call + ': ' + r.lat + ', ' + r.lng); }   var myObj = {call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686}; printLocation(myObj);   var json = '{"call":"kf6gpe-7","lat":21.9749}'; var myOtherObj = JSON.parse(json); printLocation(myOtherObj); First, this compiles with tsc just fine. When you run it with node, you'll see the following: kf6gpe-7: 21.9749, 159.3686 kf6gpe-7: 21.9749, undefined What happened? The TypeScript compiler does not add run-time type checking to your code, so you can't impose an interface on a run-time created object that's not a literal. In this example, because the lng field is missing from the JSON, the function can't print it, and prints the value undefined instead. This doesn't mean that you shouldn't use TypeScript with JSON, however. Type annotations serve a purpose for all readers of the code, be they compilers or people. You can use type annotations to indicate your intent as a developer, and readers of the code can better understand the design and limitation of the code you write. See also For more information about interfaces, see the TypeScript documentation at http://www.typescriptlang.org/Handbook#interfaces. How to declare classes with interfaces using TypeScript Interfaces let you specify behavior without specifying implementation; classes let you encapsulate implementation details behind an interface. TypeScript classes can encapsulate fields or methods, just as classes in other languages. How to do it… Here's an example of our Record structure, this time as a class with an interface: class RecordInterface { call: string; lat: number; lng: number;   constructor(c: string, la: number, lo: number) {} printLocation() {}   }   class Record implements RecordInterface { call: string; lat: number; lng: number; constructor(c: string, la: number, lo: number) {    this.call = c;    this.lat = la;    this.lng = lo; }   printLocation() {    console.log(this.call + ': ' + this.lat + ', ' + this.lng); } }   var myObj : Record = new Record('kf6gpe-7', 21.9749, 159.3686);   myObj.printLocation(); How it works… The interface keyword, again, defines an interface just as the previous section shows. The class keyword, which you haven't seen before, implements a class; the optional implements keyword indicates that this class implements the interface RecordInterface. Note that the class implementing the interface must have all of the same fields and methods that the interface prescribes; otherwise, it doesn't meet the requirements of the interface. As a result, our Record class includes fields for call, lat, and lng, with the same types as in the interface, as well as the methods constructor and printLocation. The constructor method is a special method called when you create a new instance of the class using new. Note that with classes, unlike regular objects, the correct way to create them is by using a constructor, rather than just building them up as a collection of fields and values. We do that on the second to the last line of the listing, passing the constructor arguments as function arguments to the class constructor. 
See also There's a lot more you can do with classes, including defining inheritance and creating public and private fields and methods. For more information about classes in TypeScript, see the documentation at http://www.typescriptlang.org/Handbook#classes. Using json2ts to generate TypeScript interfaces from your JSON This last recipe is more of a tip than a recipe; if you've got some JSON you developed using another programming language or by hand, you can easily create a TypeScript interface for objects to contain the JSON by using Timmy Kokke's json2ts website. How to do it… Simply go to http://json2ts.com and paste your JSON in the box that appears, and click on the generate TypeScript button. You'll be rewarded with a second text-box that appears and shows you the definition of the TypeScript interface, which you can save as its own file and include in your TypeScript applications. How it works… The following figure shows a simple example: You can save this typescript as its own file, a definition file, with the suffix .d.ts, and then include the module with your TypeScript using the import keyword, like this: import module = require('module'); Summary In this article we looked at how you can adapt the type-free nature of JSON with the type safety provided by languages such as C#, Java, and TypeScript to reduce programming errors in your application. Resources for Article: Further resources on this subject: Playing with Swift [article] Getting Started with JSON [article] Top two features of GSON [article]